Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-13 Thread Renat Akhmerov
Folks,

Mistral and TaskFlow are significantly different technologies, with different 
sets of capabilities and different target audiences.

We may not be doing enough to clarify all the differences, I admit that. The 
challenge here is that people tend to judge with only a minimal amount of 
information about both things. As always, the devil is in the details. Stan is 100% 
right, “seems” is not an appropriate word here. Java seems similar to C++ 
at first glance to those who have little or no knowledge of either.

To be more consistent I won’t repeat all the general considerations that 
I’ve been using so far (in etherpads, MLs, in personal discussions); it doesn’t 
seem to be working well, at least not with everyone. So to make it better, like 
I said in that other thread: we’re evaluating TaskFlow now and will share 
the results. Basically, it’s what Boris said about what could and could not be 
implemented in TaskFlow. But since the very beginning of the project I have never 
abandoned the idea of using TaskFlow some day when it’s possible. 

So, again: Joshua, we hear you, we’re working in that direction.

> 
> I'm reminded of
> http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
> where it seemed like we were doing much better collaboration, what has
> happened to break this continuity?

Not sure why you think something is broken. We just want to finish the pilot 
with all the ‘must’ things working in it. This is the plan. Then we can revisit 
and change absolutely everything. Remember, to a great extent this is 
research. Joshua, this is what we talked about and agreed on many times. I know 
you might be anxious about that given that it’s taking more time than 
planned, but our vision of the project has drastically evolved and gone far, far 
beyond the initial Convection proposal. So the initial idea of a PoC is no longer 
relevant. Even though we finished the first version in December, we realized it 
wasn’t something that should have been shared with the community, since it 
lacked some essential things.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-13 Thread sxmatch


On 2014-03-14 11:59, Zhangleiqiang (Trump) wrote:

From: sxmatch [mailto:sxmatch1...@gmail.com]
Sent: Friday, March 14, 2014 11:08 AM
To: Zhangleiqiang (Trump)
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
protection


On 2014-03-11 19:24, Zhangleiqiang wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 5:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang

wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 4:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
 wrote:

Hi all,



Besides the "soft-delete" state for volumes, I think there is need
for introducing another "fake delete" state for volumes which have

snapshot.


Current Openstack refuses the delete request for volumes which
have snapshot. However, we will have no method to limit users to
only use the specific snapshot other than the original volume ,
because the original volume is always visible for the users.



So I think we can permit users to delete volumes which have
snapshots, and mark the volume with a "fake delete" state. When all of
the snapshots of the volume have been deleted, the original
volume will be removed automatically.


Can you describe the actual use case for this?  I'm not sure I follow
why an operator would want to limit the owner of the volume to only
using a specific version of a snapshot.  It sounds like you are adding
another layer.  If that's the case, the problem should be solved at an
upper layer instead of in Cinder.

For example, one tenant's volume quota is five, and the tenant already has
5 volumes and 1 snapshot. If the data in the base volume of the snapshot is
corrupted, the user will need to create a new volume from the
snapshot, but this operation will fail because there are already
5 volumes, and the original volume cannot be deleted either.
Hmm, how likely is it that the snapshot is still sane when the base volume
is corrupted?

If the snapshot of the volume is COW, then the snapshot will still be sane when
the base volume is corrupted.
So, is it possible to really delete the volume and just keep the snapshot alive? If the user
doesn't want to use this volume right now, he can take a snapshot and then delete
the volume.


If we really delete the volume, the COW snapshot cannot be used. But if the data in 
the base volume is corrupt, we can use the snapshot normally or create a usable 
volume from the snapshot.

"COW" means copy-on-write: when a data block in the base volume is about to be 
written, the block is first copied to the snapshot.

Hope it helps.
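
For illustration, a tiny self-contained sketch of the copy-on-write idea (plain 
Python, not Cinder code; all names are made up): the snapshot stays readable even 
after a block in the base volume is overwritten or corrupted.

    # Toy model only (not Cinder code): a COW snapshot preserves a block the
    # first time the base volume is about to overwrite it.
    class CowSnapshot(object):
        def __init__(self, base_blocks):
            self.base = base_blocks   # shared reference to the base volume
            self.saved = {}           # block index -> original block data

        def before_write(self, index):
            # The copy-on-write step: keep the original block exactly once.
            if index not in self.saved:
                self.saved[index] = self.base[index]

        def read(self, index):
            # Snapshot view: the saved copy if the block changed, else the base.
            return self.saved.get(index, self.base[index])

    base = ['a', 'b', 'c']
    snap = CowSnapshot(base)
    snap.before_write(1)
    base[1] = 'corrupted'         # the base volume changes or gets corrupted
    assert snap.read(1) == 'b'    # the snapshot still sees consistent data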

Thanks for your explanation, it's very helpful.

If he wants it again, he can create a volume from this snapshot.

Any ideas?

Even if this case is possible, I don't see the 'fake delete' proposal
as the right way to solve the problem.  IMO, it simply violates what the
quota system is designed for and complicates quota metrics
calculation (there would be the actual quota, which is only visible to
the admin/operator, and an end-user facing quota).  Why not contact the
operator to bump the upper limit of the volume quota instead?

I had some misunderstanding of Cinder's snapshots.
"Fake delete" is common if there is a "chained snapshot" or "snapshot tree"

mechanism. However, in Cinder only a volume can make a snapshot; a snapshot
cannot make a snapshot again.

I agree with your bump-the-upper-limit method.

Thanks for your explanation.





Any thoughts? Welcome any advices.







--

zhangleiqiang



Best Regards



From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM


To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection







On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt


wrote:

On 6 March 2014 08:50, zhangyu (AI)  wrote:

It seems to be an interesting idea. In fact, a China-based public
IaaS, QingCloud, has provided a similar feature for their virtual
servers. Within 2 hours after a virtual server is deleted, the
server owner can decide whether or not to cancel this deletion
and recycle that "deleted" virtual server.

People make mistakes, and such a feature helps in urgent cases.
Any idea here?

Nova has soft_delete and restore for servers. That sounds similar?

John



-Original Message-
From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
Sent: Thursday, March 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

Hi all,

Current OpenStack provides the delete volume function to the user,
but it seems there is no protection against an accidental delete
by the user.


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Roman Podoliaka
Hi all,

I think it's actually not that hard to fix the errors we have when
using SQLAlchemy 0.9.x releases.

I uploaded two changes to Nova to fix unit tests:
- https://review.openstack.org/#/c/80431/ (this one should also fix
the Tempest test run error)
- https://review.openstack.org/#/c/80432/

Thanks,
Roman

On Thu, Mar 13, 2014 at 7:41 PM, Thomas Goirand  wrote:
> On 03/14/2014 02:06 AM, Sean Dague wrote:
>> On 03/13/2014 12:31 PM, Thomas Goirand wrote:
>>> On 03/12/2014 07:07 PM, Sean Dague wrote:
 Because of where we are in the freeze, I think this should wait until
 Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
 I think is fine. I expect the rest of the issues can be addressed during
 Juno 1.

 -Sean
>>>
>>> Sean,
>>>
>>> No, it's not fine for me. I'd like things to be fixed so we can move
>>> forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
>>> will be released with SQLA 0.9 and with Icehouse, not Juno.
>>
>> We're past freeze, and this requires deep changes in Nova DB to work. So
>> it's not going to happen. Nova provably does not work with SQLA 0.9, as
>> seen in Tempest tests.
>>
>>   -Sean
>
> It'd be nice if we considered more the fact that OpenStack, at some
> point, gets deployed on top of distributions... :/
>
> Anyway, if we can't do it because of the freeze, then I will have to
> carry the patch in the Debian package. Nevertheless, someone will have
> to work on it and fix it. If you know how to help, it'd be very nice if you
> proposed a patch, even if we don't accept it before Juno opens.
>
> Thomas Goirand (zigo)
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-13 Thread weiyuanke
Are you the senior Zhang Leiqiang who used to be at Baidu??

Are you working on OpenStack at Huawei now?


---
Wei Yuanke
010 5881 3749
Computer Network Information Center, Chinese Academy of Sciences
Cloud computing platform: eccp.csdb.cn





On March 6, 2014, at 2:19 PM, Zhangleiqiang  wrote:

> Hi all,
> 
> Current OpenStack provides the delete volume function to the user,
> but it seems there is no protection against an accidental delete by the user.
> 
> As we know, the data in the volume may be very important and valuable. 
> So it's better to provide a method for the user to avoid accidental volume 
> deletion.
> 
> Such as:
> We can provide a safe delete for the volume.
> The user can specify how long the volume will be delay-deleted (actually deleted) 
> when he deletes the volume.
> Before the volume is actually deleted, the user can cancel the delete operation 
> and recover the volume.
> After the specified time, the volume will actually be deleted by the system.
> 
> Any thoughts? Welcome any advices.
> 
> Best regards to you.
> 
> 
> --
> zhangleiqiang
> 
> Best Regards
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-13 Thread Zhangleiqiang (Trump)
> From: sxmatch [mailto:sxmatch1...@gmail.com]
> Sent: Friday, March 14, 2014 11:08 AM
> To: Zhangleiqiang (Trump)
> Cc: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
> protection
> 
> 
> On 2014-03-11 19:24, Zhangleiqiang wrote:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 11, 2014 5:37 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >> delete protection
> >>
> >> On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang
> >> 
> >> wrote:
>  From: Huang Zhiteng [mailto:winsto...@gmail.com]
>  Sent: Tuesday, March 11, 2014 4:29 PM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
>  delete protection
> 
>  On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
>   wrote:
> > Hi all,
> >
> >
> >
> > Besides the "soft-delete" state for volumes, I think there is need
> > for introducing another "fake delete" state for volumes which have
> >> snapshot.
> >
> >
> > Current Openstack refuses the delete request for volumes which
> > have snapshot. However, we will have no method to limit users to
> > only use the specific snapshot other than the original volume ,
> > because the original volume is always visible for the users.
> >
> >
> >
> > So I think we can permit users to delete volumes which have
> > snapshots, and mark the volume as "fake delete" state. When all of
> > the snapshots of the volume have already deleted, the original
> > volume will be removed automatically.
> >
>  Can you describe the actual use case for this?  I not sure I follow
>  why operator would like to limit the owner of the volume to only
>  use specific version of snapshot.  It sounds like you are adding
>  another layer.  If that's the case, the problem should be solved at
>  upper layer
> >> instead of Cinder.
> >>> For example, one tenant's volume quota is five, and has 5 volumes
> >>> and 1
> >> snapshot already. If the data in base volume of the snapshot is
> >> corrupted, the user will need to create a new volume from the
> >> snapshot, but this operation will be failed because there are already
> >> 5 volumes, and the original volume cannot be deleted, too.
> >> Hmm, how likely is it the snapshot is still sane when the base volume
> >> is corrupted?
> > If the snapshot of volume is COW, then the snapshot will be still sane when
> the base volume is corrupted.
> So, if we delete volume really, just keep snapshot alive, is it possible? User
> don't want to use this volume at now, he can take a snapshot and then delete
> volume.
> 
If we really delete the volume, the COW snapshot cannot be used. But if the data in 
the base volume is corrupt, we can use the snapshot normally or create a usable 
volume from the snapshot.

"COW" means copy-on-write: when a data block in the base volume is about to be 
written, the block is first copied to the snapshot.

Hope it helps.

> If he want it again, can create volume from this snapshot.
> 
> Any ideas?
> >
> >> Even if this case is possible, I don't see the 'fake delete' proposal
> >> is the right way to solve the problem.  IMO, it simply violates what
> >> quota system is designed for and complicates quota metrics
> >> calculation (there would be actual quota which is only visible to
> >> admin/operator and an end-user facing quota).  Why not contact
> >> operator to bump the upper limit of the volume quota instead?
> > I had some misunderstanding on Cinder's snapshot.
> > "Fake delete" is common if there is "chained snapshot" or "snapshot tree"
> mechanism. However in cinder, only volume can make snapshot but snapshot
> cannot make snapshot again.
> >
> > I agree with your bump upper limit method.
> >
> > Thanks for your explanation.
> >
> >
> >
> >
> >
> > Any thoughts? Welcome any advices.
> >
> >
> >
> >
> >
> >
> >
> > --
> >
> > zhangleiqiang
> >
> >
> >
> > Best Regards
> >
> >
> >
> > From: John Griffith [mailto:john.griff...@solidfire.com]
> > Sent: Thursday, March 06, 2014 8:38 PM
> >
> >
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> > delete protection
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt
> > 
>  wrote:
> > On 6 March 2014 08:50, zhangyu (AI)  wrote:
> >> It seems to be an interesting idea. In fact, a China-based public
> >> IaaS, QingCloud, has provided a similar feature to their virtual
> >> servers. Within 2 hours after a virtual server is deleted, the
> >> server owner can decide whethe

Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-13 Thread sxmatch


On 2014-03-11 19:24, Zhangleiqiang wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 5:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
protection

On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang 
wrote:

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Tuesday, March 11, 2014 4:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
 wrote:

Hi all,



Besides the "soft-delete" state for volumes, I think there is need
for introducing another "fake delete" state for volumes which have

snapshot.



Current Openstack refuses the delete request for volumes which have
snapshot. However, we will have no method to limit users to only
use the specific snapshot other than the original volume ,  because
the original volume is always visible for the users.



So I think we can permit users to delete volumes which have
snapshots, and mark the volume with a "fake delete" state. When all of
the snapshots of the volume have been deleted, the original
volume will be removed automatically.


Can you describe the actual use case for this?  I'm not sure I follow
why an operator would want to limit the owner of the volume to only using
a specific version of a snapshot.  It sounds like you are adding another
layer.  If that's the case, the problem should be solved at an upper layer
instead of in Cinder.

For example, one tenant's volume quota is five, and the tenant already has 5 volumes and 1

snapshot. If the data in the base volume of the snapshot is corrupted, the
user will need to create a new volume from the snapshot, but this operation
will fail because there are already 5 volumes, and the original volume
cannot be deleted either.
Hmm, how likely is it that the snapshot is still sane when the base volume is
corrupted?

If the snapshot of the volume is COW, then the snapshot will still be sane when the 
base volume is corrupted.
So, is it possible to really delete the volume and just keep the snapshot 
alive? If the user doesn't want to use this volume right now, he can take a 
snapshot and then delete the volume.


If he wants it again, he can create a volume from this snapshot.

Any ideas?



Even if this case is possible, I don't see the 'fake delete' proposal
as the right way to solve the problem.  IMO, it simply violates what the quota
system is designed for and complicates quota metrics calculation (there would
be the actual quota, which is only visible to the admin/operator, and an end-user facing
quota).  Why not contact the operator to bump the upper limit of the volume
quota instead?

I had some misunderstanding of Cinder's snapshots.
"Fake delete" is common if there is a "chained snapshot" or "snapshot tree" 
mechanism. However, in Cinder only a volume can make a snapshot; a snapshot cannot make a snapshot again.

I agree with your bump-the-upper-limit method.

Thanks for your explanation.






Any thoughts? Welcome any advices.







--

zhangleiqiang



Best Regards



From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM


To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection







On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt 

wrote:

On 6 March 2014 08:50, zhangyu (AI)  wrote:

It seems to be an interesting idea. In fact, a China-based public
IaaS, QingCloud, has provided a similar feature for their virtual
servers. Within 2 hours after a virtual server is deleted, the
server owner can decide whether or not to cancel this deletion and
recycle that "deleted" virtual server.

People make mistakes, and such a feature helps in urgent cases.
Any idea here?

Nova has soft_delete and restore for servers. That sounds similar?

John



-Original Message-
From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
Sent: Thursday, March 06, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Feature about volume
delete protection

Hi all,

Current OpenStack provides the delete volume function to the user,
but it seems there is no protection against an accidental delete by the user.

As we know, the data in the volume may be very important and valuable.
So it's better to provide a method for the user to avoid accidental
volume deletion.

Such as:
We can provide a safe delete for the volume.
The user can specify how long the volume will be delay-deleted (actually
deleted) when he deletes the volume.
Before the volume is actually deleted, the user can cancel the delete
operation and recover the volume.
After the specified time, the volume will actually be deleted by
the system.

Any thoughts? Welcome any advices.

Best regards to you.


--
zhangleiqiang

Best Regards



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Thomas Goirand
On 03/14/2014 02:06 AM, Sean Dague wrote:
> On 03/13/2014 12:31 PM, Thomas Goirand wrote:
>> On 03/12/2014 07:07 PM, Sean Dague wrote:
>>> Because of where we are in the freeze, I think this should wait until
>>> Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
>>> I think is fine. I expect the rest of the issues can be addressed during
>>> Juno 1.
>>>
>>> -Sean
>>
>> Sean,
>>
>> No, it's not fine for me. I'd like things to be fixed so we can move
>> forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
>> will be released with SQLA 0.9 and with Icehouse, not Juno.
> 
> We're past freeze, and this requires deep changes in Nova DB to work. So
> it's not going to happen. Nova provably does not work with SQLA 0.9, as
> seen in Tempest tests.
> 
>   -Sean

It'd be nice if we considered more the fact that OpenStack, at some
point, gets deployed on top of distributions... :/

Anyway, if we can't do it because of the freeze, then I will have to
carry the patch in the Debian package. Nevertheless, someone will have
to work on it and fix it. If you know how to help, it'd be very nice if you
proposed a patch, even if we don't accept it before Juno opens.

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Nathan Kinder
On 03/13/2014 08:36 AM, Anna A Sortland wrote:
> [A] The current keystone LDAP community driver returns all users that
> exist in LDAP via the API call v3/users, instead of returning just users
> that have role grants (similar processing is true for groups). This
> could potentially be a very large number of users. We have seen large
> companies with LDAP servers containing hundreds of thousands of users.
> We are aware of the filters available in keystone.conf
> ([ldap].user_filter and [ldap].query_scope) to cut down on the number of
> results, but they do not provide sufficient filtering (for example, it
> is not possible to set user_filter to members of certain known groups
> for OpenLDAP without creating a memberOf overlay on the LDAP server).
> 
> [Nathan Kinder] What attributes would you filter on?  It seems to me
> that LDAP would need to have knowledge of the roles to be able to filter
> based on the roles.  This is not necessarily the case, as identity and
> assignment can be split in Keystone such that identity is in LDAP and
> role assignment is in SQL.  I believe it was designed this way to deal
> with deployments
> where LDAP already exists and there is no need (or possibility) of
> adding role info into LDAP.
> 
> [A] That's our main use case. The users and groups are in LDAP and role
> assignments are in SQL.
> You would filter on role grants and this information is in SQL backend.
> So new API would need to query both identity and assignment drivers.
> 
> [Nathan Kinder] Without filtering based on a role attribute in LDAP, I
> don't think that there is a good solution if you have OpenStack and
> non-OpenStack users mixed in the same container in LDAP.
> If you want to first find all of the users that have a role assigned to
> them in the assignments backend, then pull their information from LDAP,
> I think that you will end up with one LDAP search operation per user.
> This also isn't a very scalable solution.
> 
> [A] What was the reason the LDAP driver was written this way, instead of
> returning just the users that have OpenStack-known roles? Was the
> creation of a separate API for this function considered?
> Are other exploiters of OpenStack (or users of Horizon) experiencing
> this issue? If so, what was their approach to overcome this issue? We
> have been prototyping a keystone extension that provides an API that
> provides this filtering capability, but it seems like a function that
> should be generally available in keystone.
> 
> [Nathan Kinder] I'm curious to know how your prototype is looking to
> handle this.
> 
> [A] The prototype basically first calls assignment API
> list_role_assignments() to get a list of users and groups with role
> grants. It then iterates the retrieved list and calls identity API
> list_users_in_group() to get the list of users in these groups with
> grants and get_user() to get users that have role grants but do not
> belong to the groups with role grants (a call for each user). Both calls
> ignore groups and users that are not found in the LDAP registry but
> exist in SQL (this could be the result of a user or group being removed
> from LDAP, but the corresponding role grant was not revoked). Then the
> code removes duplicates if any and returns the combined list.
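
For readers following along, a rough sketch of the approach described above; the
list_role_assignments / list_users_in_group / get_user calls are the ones named in
the paragraph, while the surrounding function and error handling are assumptions
for illustration, not the actual prototype code.

    def list_users_with_role_grants(assignment_api, identity_api):
        users = {}
        # 1. Ask the assignment backend (SQL) for every role grant.
        for assignment in assignment_api.list_role_assignments():
            try:
                if 'group_id' in assignment:
                    # 2a. Expand group grants into member users (LDAP lookup).
                    for user in identity_api.list_users_in_group(
                            assignment['group_id']):
                        users[user['id']] = user
                elif 'user_id' in assignment:
                    # 2b. Direct user grant: one LDAP lookup per user.
                    user_id = assignment['user_id']
                    users[user_id] = identity_api.get_user(user_id)
            except Exception:
                # 3. Skip grants whose user/group no longer exists in LDAP.
                continue
        # 4. The dict keyed by user id already removes duplicates.
        return list(users.values())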

My main concern about this is that you have a single LDAP search
operation per user.  This will get you the correct results, but it isn't
very efficient for the LDAP server if you have a large number of users.
 Performing a single LDAP search operation will perform better if there
is some attribute you can use to filter on, as the connection handling
and operation parsing overhead will be much less.  If you are unable to
add an attribute in LDAP that identifies users that Keystone should list
(such as memberOf), you may not have much choice other than your proposal.

> 
> The new extension API is /v3/my_new_extension/users. Maybe the better
> naming would be v3/roles/users (list users with any role) - compare to
> existing v3/roles/{role_id}/users (list users with a specified role).
> 
> Another alternative that we've tried is just a new identity driver that
> inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides
> just the list_users() function. That's probably not the best approach
> from OpenStack standards point of view but I would like to get
> community's feedback on whether this is acceptable.
> 
> 
> I've posted this question to openstack-security last week but could not
> get any feedback after Nathan's first reply. Reposting to openstack-dev..

Sorry for the delay in replying.  This list is a better place to discuss
this anyway, as you will get more visibility.

Thanks,
-NGK
> 
> 
> 
> Anna Sortland
> Cloud Systems Software Development
> IBM Rochester, MN
> annas...@us.ibm.com
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone] 5 unicode unit test failures when building Debian package

2014-03-13 Thread Thomas Goirand
On 03/14/2014 05:20 AM, John Dennis wrote:
> On 03/13/2014 12:31 AM, Thomas Goirand wrote:
>> Hi,
>>
>> Since Havana, I've been ignoring the 5 unit test failures that I always
>> get. Though I think it'd be nice to have them fixed. The log file is
>> available over here:
>>
>> https://icehouse.dev-debian.pkgs.enovance.com/job/keystone/59/console
>>
>> Does anyone know what's going on? It'd be nice if I could solve these.
> 
> I've been fixing unicode errors in keystone (not these however). Please
> open a bug for these and you can assign the bug to me.

Hi,

Thanks for raising your hand! FYI:
https://bugs.launchpad.net/keystone/+bug/1292311

I couldn't find you and therefore, didn't assign the bug to you.

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-13 Thread Matt Riedemann



On 3/13/2014 4:13 PM, Roman Podoliaka wrote:

Hi Steven,

Code from the openstack/common/ dir is 'synced' from oslo-incubator. The
'sync' is effectively a copy of the oslo-incubator subtree into a project's
source tree. As syncs are not done at the same time, the code of
synced modules may indeed be different for each project, depending on
which commit of oslo-incubator was synced.

Thanks,
Roman

On Thu, Mar 13, 2014 at 2:03 PM, Steven Kaufer  wrote:

While investigating some REST API updates I've discovered that there is a
lot of duplicated code across the various OpenStack components.

For example, the paginate_query function exists in all these locations and
there are a few slight differences between most of them:

https://github.com/openstack/ceilometer/blob/master/ceilometer/openstack/common/db/sqlalchemy/utils.py#L61
https://github.com/openstack/cinder/blob/master/cinder/openstack/common/db/sqlalchemy/utils.py#L37
https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/utils.py#L64
https://github.com/openstack/heat/blob/master/heat/openstack/common/db/sqlalchemy/utils.py#L62
https://github.com/openstack/keystone/blob/master/keystone/openstack/common/db/sqlalchemy/utils.py#L64
https://github.com/openstack/neutron/blob/master/neutron/openstack/common/db/sqlalchemy/utils.py#L61
https://github.com/openstack/nova/blob/master/nova/openstack/common/db/sqlalchemy/utils.py#L64
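
For concreteness, here is a self-contained, simplified sketch of what these
duplicated helpers essentially do, reduced to a single sort key; it is an
approximation for illustration, not the actual oslo-incubator code.

    # Simplified pagination helper: order by a sort key, start after an
    # optional marker row, and cap the page size.  The real copies linked
    # above handle multiple sort keys, sort directions and error cases.
    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Item(Base):
        __tablename__ = 'items'
        id = sa.Column(sa.Integer, primary_key=True)
        name = sa.Column(sa.String(64))

    def paginate_query(query, model, limit, sort_key='id', marker=None):
        query = query.order_by(getattr(model, sort_key))
        if marker is not None:
            # Return rows strictly after the marker row.
            query = query.filter(getattr(model, sort_key) >
                                 getattr(marker, sort_key))
        return query.limit(limit)

    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = orm.Session(engine)
    session.add_all([Item(name='a'), Item(name='b'), Item(name='c')])
    session.commit()

    first_page = paginate_query(session.query(Item), Item, limit=2).all()
    marker = first_page[-1]
    second_page = paginate_query(session.query(Item), Item, limit=2,
                                 marker=marker).all()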

Does anyone know if there is any work going on to move stuff like this into
oslo and then deprecate these functions?  There are also many functions that
process the REST API request parameters (getting the limit, marker, sort
data, etc.) which are replicated across many components.

If no existing work is done in this area, how should this be tackled?  As a
blueprint for Juno?

Thanks,

Steven Kaufer
Cloud Systems Software
kau...@us.ibm.com 507-253-5104
Dept HMYS / Bld 015-2 / G119 / Rochester, MN 55901


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Steve, more info here on oslo-incubator:

https://wiki.openstack.org/wiki/Oslo#Incubation

Welcome! :)

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Luohao (brian)
1.  fsfreeze with vss has been added to qemu upstream; see 
http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg01963.html for usage.
2.  libvirt allows a client to send any command to qemu-ga; see 
http://wiki.libvirt.org/page/Qemu_guest_agent (a minimal sketch follows below).
3.  Linux fsfreeze is not equivalent to Windows fsfreeze+vss. Linux fsfreeze 
offers fs consistency only, while Windows vss allows agents like SQL Server to 
register their plugins to flush their caches to disk when a snapshot occurs.
4.  My understanding is that xenserver does not support fsfreeze+vss now, because 
xenserver normally does not use the block backend in qemu.
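
As a minimal sketch of point 2, assuming the libvirt-python qemu bindings are
available; the domain name and timeout below are made up and error handling is
omitted.

    # Sketch: ask the in-guest agent (qemu-ga) to freeze/thaw filesystems
    # through libvirt.  Requires libvirt-python with the qemu bindings.
    import libvirt
    import libvirt_qemu

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('my-guest')   # illustrative domain name

    libvirt_qemu.qemuAgentCommand(
        dom, '{"execute": "guest-fsfreeze-freeze"}', 30, 0)
    try:
        pass  # snapshot / replication checkpoint would happen here
    finally:
        libvirt_qemu.qemuAgentCommand(
            dom, '{"execute": "guest-fsfreeze-thaw"}', 30, 0)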

-Original Message-
From: Bruce Montague [mailto:bruce_monta...@symantec.com] 
Sent: Thursday, March 13, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a within-guest 
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work 
with KVM? Does qemu-ga work with libvirt (can VSS quiesce be triggered via 
libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an 
equivalent to VSS on Linux systems; was that done?  If so, could an OpenStack 
API provide a generic quiesce request that would then get passed to libvirt? 
(Also, the XenServer VSS support seems different than qemu/KVM's, is this true? 
Can it also be accessed through libvirt?)

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important in enterprise scenarios requirements, but 
there's an important missing piece in the current OpenStack APIs: support for 
application consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server), with e.g. vSphere and XenServer supporting it as well 
(quiescing) and with the option for third-party vendors to add drivers for their 
solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


> On 12/mar/2014, at 20:45, "Bruce Montague"  
> wrote:
>
>
> Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
> the following list sketches some speculative OpenStack DR use cases. These 
> use cases do not reflect any specific product behavior and span a wide 
> spectrum. This list is not a proposal, it is intended primarily to solicit 
> additional discussion. The first basic use case, (1), is described in a bit 
> more detail than the others; many of the others are elaborations on this 
> basic theme.
>
>
>
> * (1) [Single VM]
>
> A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadow Copy 
> Service) installed runs a key application and integral database. VSS can 
> quiesce the app, database, filesystem, and I/O on demand and can be invoked 
> external to the guest.
>
>   a. The VM's volumes, including the boot volume, are replicated to a remote 
> DR site (another OpenStack deployment).
>
>   b. Some form of replicated VM or VM metadata exists at the remote site. 
> This VM/description includes the replicated volumes. Some systems might use 
> cold migration or some form of wide-area live VM migration to establish this 
> remote site VM/description.
>
>   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
> volumes in an application-consistent state. This state is flushed all the way 
> through to the remote volumes. As each remote volume reaches its 
> application-consistent state, this is recognized in some fashion, perhaps by 
> an in-band signal, and a snapshot of the volume is made at the remote site. 
> Volume replication is re-enabled immediately following the snapshot. A backup 
> is then made of the snapshot on the remote site. At the completion of this 
> cycle, application-consistent volume snapshots and backups exist on the 
> remote site.
>
>   d.  When a disaster or firedrill happens, the replication network 
> connection is cut. The remote site VM pre-created or defined so as to use the 
> replicated volumes is then booted, using the latest application-consistent 
> state of the replicated volumes. Th

Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-13 Thread Boris Pavlovic
Stan,


There is a big difference between TaskFlow and Mistral.
TaskFlow is implemented and it's already part of OpenStack.
Mistral is under development and it's not a part of OpenStack.

If Mistral someday makes an incubation request, it will get the
reasonable question: "Why did Mistral reimplement an already existing OpenStack
project?"

I just would like to say that your question about "Mistral arch based on
TaskFlow" should be readdressed to Renat,

because he is the PTL of Mistral and he should either use TaskFlow or say that
TaskFlow doesn't work for us in case A, case B, case C, and that these cases
couldn't be implemented in TaskFlow (or Joshua will totally disagree).

Otherwise it seems like the Mistral team didn't spend time investigating already
existing OpenStack solutions.


Best regards,
Boris Pavlovic



On Fri, Mar 14, 2014 at 4:19 AM, Stan Lagun  wrote:

> Joshua, Boris, Renat,
>
> I call for this discussion to be technical rather than emotional. "Seems"
> is not appropriate word here. It seems like both Mistral and TaskFlow
> duplicate many similar services and libraries outside of OpenStack (BTW,
> what exactly in TaskFlow is OpenStack-specific?). So it is not a good idea
> to judge who duplicates who and focus on how TaskFlow can help Mistral (or
> vice versa)
>
> Let us first list what can be considered to be inalienable parts of
> Mistral (features and use cases that are vital to Mistral paradigm) and
> then suggest how those use cases can be addressed by TaskFlow. I think it
> would be best if Joshua proposed a detailed Mistral design based on TaskFlow
> that would have all the listed features and address all principal use
> cases. After all it really doesn't matter if Mistral and TaskFlow has
> similar pieces of code in their implementation or similar concepts unless
> those pieces of code and concepts can be extracted from TaskFlow and reused
> in Mistral
>
>
> On Fri, Mar 14, 2014 at 3:17 AM, Boris Pavlovic  wrote:
>
>> Joshua,
>>
>>
>> Fully agree, seems like Mistral duplicates a lot of efforts of TaskFlow.
>> I just don't see any reason why Mistral is reimplementing TaskFlow and
>> not just adding new features to...
>>
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>>
>> On Fri, Mar 14, 2014 at 3:02 AM, Joshua Harlow wrote:
>>
>>> Separating from the following:
>>>
>>> *
>>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029870.html
>>>
>>> What can we do to resolve this, and reduce duplication in effort, code
>>> and
>>> so on.
>>>
>>> I believe that we are really doing the same thing, but it appears that
>>> something (imho) is wrong with the process here.
>>>
>>> I'm unsure how to resolve the situation…
>>>
>>> I believe since as a community we should be working together instead of
>>> working apart in silos creating the same thing.
>>>
>>> I'm reminded of
>>>
>>> http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
>>> where it seemed like we were doing much better collaboration, what has
>>> happened to break this continuity?
>>>
>>> How can we fix it?
>>>
>>> -Josh
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Sincerely yours
> Stanislav (Stan) Lagun
> Senior Developer
> Mirantis
> 35b/3, Vorontsovskaya St.
> Moscow, Russia
> Skype: stanlagun
> www.mirantis.com
> sla...@mirantis.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-13 Thread Stan Lagun
Joshua, Boris, Renat,

I call for this discussion to be technical rather than emotional. "Seems"
is not an appropriate word here. It seems like both Mistral and TaskFlow
duplicate many similar services and libraries outside of OpenStack (BTW,
what exactly in TaskFlow is OpenStack-specific?). So it is not a good idea
to judge who duplicates whom; let's focus instead on how TaskFlow can help Mistral
(or vice versa).

Let us first list what can be considered the inalienable parts of Mistral
(features and use cases that are vital to the Mistral paradigm) and then
suggest how those use cases can be addressed by TaskFlow. I think it would
be best if Joshua proposed a detailed Mistral design based on TaskFlow that
would have all the listed features and address all principal use cases.
After all, it really doesn't matter whether Mistral and TaskFlow have similar
pieces of code in their implementation or similar concepts, unless those
pieces of code and concepts can be extracted from TaskFlow and reused in
Mistral.


On Fri, Mar 14, 2014 at 3:17 AM, Boris Pavlovic  wrote:

> Joshua,
>
>
> Fully agree, seems like Mistral duplicates a lot of efforts of TaskFlow.
> I just don't see any reason why Mistral is reimplementing TaskFlow and not
> just adding new features to...
>
>
>
> Best regards,
> Boris Pavlovic
>
>
> On Fri, Mar 14, 2014 at 3:02 AM, Joshua Harlow wrote:
>
>> Separating from the following:
>>
>> *
>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029870.html
>>
>> What can we do to resolve this, and reduce duplication in effort, code and
>> so on.
>>
>> I believe that we are really doing the same thing, but it appears that
>> something (imho) is wrong with the process here.
>>
>> I'm unsure how to resolve the situation…
>>
>> I believe since as a community we should be working together instead of
>> working apart in silos creating the same thing.
>>
>> I'm reminded of
>>
>> http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
>> where it seemed like we were doing much better collaboration, what has
>> happened to break this continuity?
>>
>> How can we fix it?
>>
>> -Josh
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 48 hours failures

2014-03-13 Thread Salvatore Orlando
Hi Rossella,

thanks for doing this work!
I think your results are pretty much consistent with what we've been
empirically observing in recent days.

elastic-recheck is flagging a resurgence of bug 1253896, and we need to
have somebody look at it as soon as possible.
For bug 1291922, I've marked it as a duplicate (so far it's been reported
independently by at least 4 people). A patch for it has already merged.
Similarly, I've marked bug 1291918 as a duplicate of bug 1283522. Looking
at the logs for this failure, I found out that the actual root cause is a
"lock wait timeout". We've seen a spike in this kind of failures in the
last 36 hours; for this reason the severity of bug 1283522 was raised to
"critical".

I am a bit surprised to see a high occurrence of bug 1281969, perhaps
because I have no idea how it relates to neutron!
However, pushing Elastic Recheck queries won't be such a bad idea.
Finally, some of the bugs you found do not seem to be exclusive to neutron
(such as bug 1291947). In that case an Elastic Recheck query would be even
more useful.


Thanks again,
Salvatore


On 13 March 2014 13:08, Rossella Sblendido  wrote:

> Hello devs,
>
> I wanted to update the analysis performed by Salvatore Orlando a few weeks
> ago [1].
> I used the following query for Logstash [2] to detect the failures of the
> last 48 hours.
>
> There were 77 failures (40% of the total).
> I classified them and obtained the following:
>
> 21% due to infra issues
> 16% https://bugs.launchpad.net/tempest/+bug/1253896
> 14% https://bugs.launchpad.net/neutron/+bug/1291922
> 12% https://bugs.launchpad.net/tempest/+bug/1281969
> 10% https://bugs.launchpad.net/neutron/+bug/1291920
> 7% https://bugs.launchpad.net/neutron/+bug/1291918
> 7% https://bugs.launchpad.net/neutron/+bug/1291926
> 5% https://bugs.launchpad.net/neutron/+bug/1291947
> 3% https://bugs.launchpad.net/neutron/+bug/1277439
> 3% https://bugs.launchpad.net/neutron/+bug/1283599
> 2% https://bugs.launchpad.net/nova/+bug/1255627
>
> I had to file 5 new bugs, that are on the previous list and can be viewed
> here [3].
>
> cheers,
>
> Rossella
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-
> February/027862.html
> [2] http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOl
> wiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiICBBTkQgcHJvam
> VjdDpcIm9wZW5zdGFjay9uZXV0cm9uXCIgQU5EIG1lc3NhZ2U6XCJGaW5pc2
> hlZDpcIiBBTkQgYnVpbGRfc3RhdHVzOlwiRkFJTFVSRVwiIEFORCBidWlsZF
> 9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidG
> ltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIj
> p7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDcwNzAzODk5NywibW
> 9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
> [3] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-full-job
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-13 Thread Joshua Harlow
Seems ok to me, and likely a good start, although I'm still not very 
comfortable with the effects of soft_deletion (unless it's done by admins only); 
to me it complicates scheduling (can you schedule to something that has been 
soft_deleted? likely not). It also creates a pool of resources that can't be 
used but can't be deleted either; that sounds a little bad, wastes companies' $$, 
and reinforces non-cloudy concepts. It also seems very complex, 
especially when you start connecting more and more resources together via heat 
or another system (the whole graph of resources now must be soft_deleted, wasting 
more $$, and how does one restore the graph of resources if some of them were 
also hard_deleted?).

-Josh

From: Mike Wilson <geekinu...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, March 13, 2014 at 1:26 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion 
of OS Resources

After a read through seems pretty good.

+1


On Thu, Mar 13, 2014 at 1:42 PM, Boris Pavlovic 
<bpavlo...@mirantis.com> wrote:
Hi stackers,

As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step 
by step)
http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

I understood that there should be another proposal, about how we should 
implement Restorable & Delayed Deletion of OpenStack Resources in a common way, 
without these hacks with soft deletion in the DB.  It is actually very simple; take 
a look at this document:

https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


Best regards,
Boris Pavlovic

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-13 Thread Boris Pavlovic
Joshua,


Fully agree, seems like Mistral duplicates a lot of efforts of TaskFlow.
I just don't see any reason why Mistral is reimplementing TaskFlow and not
just adding new features to...



Best regards,
Boris Pavlovic


On Fri, Mar 14, 2014 at 3:02 AM, Joshua Harlow wrote:

> Separating from the following:
>
> *
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029870.html
>
> What can we do to resolve this, and reduce duplication in effort, code and
> so on.
>
> I believe that we are really doing the same thing, but it appears that
> something (imho) is wrong with the process here.
>
> I'm unsure how to resolve the situation…
>
> I believe since as a community we should be working together instead of
> working apart in silos creating the same thing.
>
> I'm reminded of
> http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
> where it seemed like we were doing much better collaboration, what has
> happened to break this continuity?
>
> How can we fix it?
>
> -Josh
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral][Taskflow][all] Mistral + taskflow

2014-03-13 Thread Joshua Harlow
Separating from the following:

* http://lists.openstack.org/pipermail/openstack-dev/2014-March/029870.html

What can we do to resolve this, and reduce duplication in effort, code and
so on.

I believe that we are really doing the same thing, but it appears that
something (imho) is wrong with the process here.

I'm unsure how to resolve the situation…

I believe since as a community we should be working together instead of
working apart in silos creating the same thing.

I'm reminded of 
http://www.slideshare.net/RenatAkhmerov/mistral-hong-kong-unconference-track/2
where it seemed like we were doing much better collaboration, what has
happened to break this continuity?

How can we fix it?

-Josh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does exception need localize or not?

2014-03-13 Thread Joshua Harlow
From: Doug Hellmann 
<doug.hellm...@dreamhost.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, March 13, 2014 at 12:44 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] does exception need localize or not?




On Thu, Feb 27, 2014 at 3:45 AM, yongli he 
<yongli...@intel.com> wrote:
refer to :
https://wiki.openstack.org/wiki/Translations

Now some exceptions use _() and some do not.  The wiki suggests not doing that, but 
I'm not sure.

what's the correct way?


F.Y.I

What To Translate

At present the convention is to translate all user-facing strings. This means 
API messages, CLI responses, documentation, help text, etc.

There has been a lack of consensus about the translation of log messages; the 
current ruling is that while it is not against policy to mark log messages for 
translation if your project feels strongly about it, translating log messages 
is not actively encouraged.

I've updated the wiki to replace that paragraph with a pointer to 
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which explains 
the log translation rules. We will be adding the job needed to have different 
log translations during Juno.



Exception text should not be marked for translation, because if an exception 
occurs there is no guarantee that the translation machinery will be functional.

This makes no sense to me. Exceptions should be translated. By far the largest 
number of errors will be presented to users through the API or through Horizon 
(which gets them from the API). We will ensure that the translation code does 
its best to fall back to the original string if the translation fails.

Doug




Regards
Yongli He


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I think this question comes up every 3 months, haha ;)

As we continue to expand all the libraries in 
https://github.com/openstack/requirements/blob/master/global-requirements.txt 
and knowing that those libraries likely don't translate their exceptions 
(probably in the majority of cases, especially in non-openstack/oslo 3rd party 
libraries), are we chasing an ideal that cannot be caught?

Food for thought,

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread Dmitri Zimine
We have access to all configuration parameters in the context of api.py. Maybe 
you don't pass it but just instantiate it where you need it? Or I may 
misunderstand what you're trying to do...

DZ> 

PS: can you generate and update mistral.config.example to include the new oslo 
messaging options? I forgot to mention it in the review in time. 
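
For reference, a minimal sketch of the "create the transport once in the launcher
and inject it" idea discussed in the quoted thread below; the component start
functions are placeholders and are assumptions for illustration, not actual
Mistral interfaces.

    # Sketch only: one launcher creates the transport and injects it into
    # whichever components were requested (dependency injection instead of
    # a global variable).
    from oslo import messaging
    from oslo.config import cfg

    def start_api(transport): pass       # placeholders: real code would build
    def start_engine(transport): pass    # the pecan app / engine / executor
    def start_executor(transport): pass  # around the shared transport

    def launch(components):
        # Created exactly once, in the launching script.
        transport = messaging.get_transport(cfg.CONF)
        starters = {'api': start_api,
                    'engine': start_engine,
                    'executor': start_executor}
        for name in components:
            starters[name](transport)

    # "local" mode: all components in one process sharing the same
    # (possibly fake/in-memory) transport.
    launch(['api', 'engine', 'executor'])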


On Mar 13, 2014, at 11:15 AM, W Chan  wrote:

> On the transport variable, the problem I see isn't with passing the variable 
> to the engine and executor.  It's passing the transport into the API layer.  
> The API layer is a pecan app and I currently don't see a way where the 
> transport variable can be passed to it directly.  I'm looking at 
> https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and 
> https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.  Do 
> you have any suggestion?  Thanks. 
> 
> 
> On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov  
> wrote:
> 
> On 13 Mar 2014, at 10:40, W Chan  wrote:
> 
>> I can write a method in base test to start local executor.  I will do that 
>> as a separate bp.  
> Ok.
> 
>> After the engine is made standalone, the API will communicate to the engine 
>> and the engine to the executor via the oslo.messaging transport.  This means 
>> that for the "local" option, we need to start all three components (API, 
>> engine, and executor) on the same process.  If the long term goal as you 
>> stated above is to use separate launchers for these components, this means 
>> that the API launcher needs to duplicate all the logic to launch the engine 
>> and the executor. Hence, my proposal here is to move the logic to launch the 
>> components into a common module and either have a single generic launch 
>> script that launch specific components based on the CLI options or have 
>> separate launch scripts that reference the appropriate launch function from 
>> the common module.
> 
> Ok, I see your point. Then I would suggest we have one script which we could 
> use to run all the components (any subset of them). So for those 
> components we specify when launching the script, we use this local 
> transport. Btw, the scheduler eventually should become a standalone component 
> too, so we have 4 components.
> 
>> The RPC client/server in oslo.messaging do not determine the transport.  The 
>> transport is determine via oslo.config and then given explicitly to the RPC 
>> client/server.  
>> https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
>>  and 
>> https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
>>  are examples for the client and server respectively.  The in process Queue 
>> is instantiated within this transport object from the fake driver.  For the 
>> "local" option, all three components need to share the same transport in 
>> order to have the Queue in scope. Thus, we will need some method to have 
>> this transport object visible to all three components and hence my proposal 
>> to use a global variable and a factory method. 
> I’m still not sure I follow your point here.. Looking at the links you 
> provided I see this:
> 
> transport = messaging.get_transport(cfg.CONF)
> 
> So my point here is we can make this call once in the launching script and 
> pass it to engine/executor (and now API too if we want it to be launched by 
> the same script). Of course, we’ll have to change the way how we initialize 
> these components, but I believe we can do it. So it’s just a dependency 
> injection. And in this case we wouldn’t need to use a global variable. Am I 
> still missing something?
> 
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does exception need localize or not?

2014-03-13 Thread Doug Hellmann
On Thu, Mar 13, 2014 at 5:03 PM, Alexei Kornienko <
alexei.kornie...@gmail.com> wrote:

>  On 03/13/2014 10:44 PM, Doug Hellmann wrote:
>
>
>
>
> On Thu, Feb 27, 2014 at 3:45 AM, yongli he  wrote:
>
>>  refer to :
>> https://wiki.openstack.org/wiki/Translations
>>
>> now some exceptions use _() and some do not.  The wiki suggests not doing
>> that, but I'm not sure.
>>
>> what's the correct way?
>>
>>
>> F.Y.I
>>
>> What To Translate
>>
>> At present the convention is to translate *all* user-facing strings.
>> This means API messages, CLI responses, documentation, help text, etc.
>>
>> There has been a lack of consensus about the translation of log messages;
>> the current ruling is that while it is not against policy to mark log
>> messages for translation if your project feels strongly about it,
>> translating log messages is not actively encouraged.
>>
>
>  I've updated the wiki to replace that paragraph with a pointer to
> https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which
> explains the log translation rules. We will be adding the job needed to
> have different log translations during Juno.
>
>
>
>>  Exception text should *not* be marked for translation, because if an
>> exception occurs there is no guarantee that the translation machinery will
>> be functional.
>>
>
>  This makes no sense to me. Exceptions should be translated. By far the
> largest number of errors will be presented to users through the API or
> through Horizon (which gets them from the API). We will ensure that the
> translation code does its best to fall back to the original string if the
> translation fails.
>
> There is another option: exception can contain non localized string and a
> thin wrapper will translate them on API layer right before output.
> Something like:
> print _(str(exception)).
>
> It seems a cleaner solution to me since we don't need to add translations
> all over the code and we call a gettext just once when it's actually needed.
>

Unfortunately, that's not how gettext works. Each string needs to be
marked up inline in order for the message catalog extraction tool to find
it and add it to the strings to be translated.
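
A small, made-up example of the difference (the import path for _ varies per
project; nothing here is taken from an actual review):

    from nova.openstack.common.gettextutils import _   # illustrative path

    vol_id = 'abc123'
    exc = ValueError("boom")

    # Extractable: the literal string is wrapped in _() at the point where it
    # is written, so the message catalog extraction tool can pick it up.
    msg = _("Volume %(id)s could not be found.") % {'id': vol_id}

    # Not extractable: at extraction time there is no literal here, only a
    # variable, so nothing ever lands in the catalog for translators.
    msg = _(str(exc))
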

Doug



>
>
>  Doug
>
>
>
>>
>>
>> Regards
>> Yongli He
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> ___
> OpenStack-dev mailing 
> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-13 Thread Dmitri Zimine
Thanks Renat for a clear design summary, 
thanks Joshua for the questions, 
+1 to "let's move TaskFlow vs Mistral" discussion to separate thread, 
and my questions/comments on 
https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign below: 

- Async actions: how do the results of an async action get communicated back? 
My understanding is that it assumes the remote system will call back to Mistral 
with the action execution id, that it's on the engine to handle the callback, and 
that the action needs to let the engine know to expect a callback. Let's put the 
explanation here. 

- is_sync() - consider using an attribute instead -  @mistral.async
- can we think of a way to unify sync and async actions from engine's 
standpoint? So that we don't special-case it in the engine? @ Joshua - does 
something similar exist in TaskFlow already?

- def dry_run() - maybe name "test", let's stress that this method should 
return a representative sample output. 

- Input - need a facility to declare, validate and list input parameters. Like 
VALID_KEYS=['url', 'parameters'], def validate(): 

- class HTTPAction(object):
def __init__(self, url, params, method, headers, body):
Not happy about declaring parameters explicitly. How about using *args, 
**kwargs, or a 'parameters' dictionary? (See the sketch after this list.) 

- DSL In-Place Declaration - I did minor edits in the section, please check. 
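
Here is the sketch referenced above -- a rough illustration of the
VALID_KEYS/**kwargs idea only, not the actual Mistral Action interface:

    class HTTPAction(object):
        VALID_KEYS = ('url', 'method', 'headers', 'body', 'params')

        def __init__(self, **params):
            unknown = set(params) - set(self.VALID_KEYS)
            if unknown:
                raise ValueError("Unknown parameters: %s"
                                 % ', '.join(sorted(unknown)))
            if 'url' not in params:
                raise ValueError("'url' is required")
            self.params = params

        def run(self):
            # A real action would issue the HTTP request here (e.g. with
            # python-requests) and return the response as its result.
            raise NotImplementedError()

        def test(self):
            # Dry-run hook: return a representative sample of the real output.
            return {'status': 200, 'headers': {}, 'body': '{}'}
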

DZ. 
- 
On Mar 12, 2014, at 6:54 PM, Joshua Harlow  wrote:

> So taskflow has tasks, which seems comparable to actions?
> 
> I guess I should get tired of asking but why recreate the same stuff ;)
> 
> The questions listed:
> 
> - Does action need to have revert() method along with run() method?
> - How does action expose errors occurring during its work?
> 
> - In what form does action return a result?
> 
> 
> And more @ https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign
> 
> And quite a few others that haven't been mentioned (how does a action
> retry? How does a action report partial progress? What's the
> intertask/state persistence mechanism?) have been worked on by the
> taskflow team for a while now...
> 
> https://github.com/openstack/taskflow/blob/master/taskflow/task.py#L31
> (and others...)
> 
> Anyways, I know mistral is still POC/pilot/prototype... but seems like
> more duplicate worked that could just be avoided ;)
> 
> -Josh
> 
> -Original Message-
> From: Renat Akhmerov 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Tuesday, March 11, 2014 at 11:32 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [Mistral] Actions design BP
> 
>> Team,
>> 
>> I started summarizing all the thoughts and ideas that we've been
>> discussing for a while regarding actions. The main driver for this work
>> is that the system keeps evolving and we still don't have a comprehensive
>> understanding of that part. Additionally, we keep getting a lot of
>> requests and questions from our potential users which are related to
>> actions ('will they be extensible?', 'will they have dry-run feature?',
>> 'what are the ways to configure and group them?' and so on and so forth).
>> So although we're still in a Pilot phase we need to start this work in
>> parallel. Even now lack of solid understanding of it creates a lot of
>> problems in pilot development.
>> 
>> I created a BP at launchpad [0] which has a reference to detailed
>> specification [1]. It's still in progress but you could already leave
>> your early feedback so that I don't go in a wrong direction too far.
>> 
>> The highest priority now is still finishing the pilot so we shouldn't
>> start implementing everything described in BP right now. However, some of
>> the things have to be adjusted asap (like Action interface and the main
>> implementation principles).
>> 
>> [0]: 
>> https://blueprints.launchpad.net/mistral/+spec/mistral-actions-design
>> [1]: https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign
>> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Carl Baldwin
Right, the L3 agent does do this already.  Agreed that the limiting
factor is the cumulative effect of the wrappers and executables' start
up overhead.
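
For reference, the pattern Brian mentions below boils down to roughly this
(names simplified; the real agent code is more involved):

    import eventlet
    eventlet.monkey_patch()

    def process_router(ri):
        # per-router work: namespaces, iptables, routes, ... via rootwrap
        pass

    def process_routers(routers):
        pool = eventlet.GreenPool()
        for ri in routers:
            # One greenthread per router, so a slow rootwrap/ip invocation
            # for one router does not serialize the whole sync.
            pool.spawn_n(process_router, ri)
        pool.waitall()
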

Carl

On Thu, Mar 13, 2014 at 9:47 AM, Brian Haley  wrote:
> Aaron,
>
> I thought the l3-agent already did this if doing a "full sync"?
>
> _sync_routers_task()->_process_routers()->spawn_n(self.process_router, ri)
>
> So each router gets processed in a greenthread.
>
> It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
> limiting factor, at least on network nodes with large numbers of namespaces.
>
> -Brian
>
> On 03/13/2014 10:48 AM, Aaron Rosen wrote:
>> The easiest/quickest thing to do for ice house would probably be to run the
>> initial sync in parallel like the dhcp-agent does for this exact reason. See:
>> https://review.openstack.org/#/c/28914/ which did this for the dhcp-agent.
>>
>> Best,
>>
>> Aaron
>>
>> On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo > > wrote:
>>
>> Yuri, could you elaborate your idea in detail? I'm lost at some
>> points with your unix domain / token authentication.
>>
>> Where does the token come from?,
>>
>> Who starts rootwrap the first time?
>>
>> If you could write a full interaction sequence, on the etherpad, from
>> rootwrap daemon start ,to a simple call to system happening, I think 
>> that'd
>> help my understanding.
>>
>>
>> Here it is: https://etherpad.openstack.org/p/rootwrap-agent
>> Please take a look.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] 5 unicode unit test failures when building Debian package

2014-03-13 Thread John Dennis
On 03/13/2014 12:31 AM, Thomas Goirand wrote:
> Hi,
> 
> Since Havana, I've been ignoring the 5 unit test failures that I always
> get. Though I think it'd be nice to have them fixed. The log file is
> available over here:
> 
> https://icehouse.dev-debian.pkgs.enovance.com/job/keystone/59/console
> 
> Does anyone know what's going on? It'd be nice if I could solve these.

I've been fixing unicode errors in keystone (not these however). Please
open a bug for these and you can assign the bug to me.


-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-13 Thread Mike Wilson
After a read through, it seems pretty good.

+1


On Thu, Mar 13, 2014 at 1:42 PM, Boris Pavlovic wrote:

> Hi stackers,
>
> As a result of discussion:
> [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
> (step by step)
> http://osdir.com/ml/openstack-dev/2014-03/msg00947.html
>
> I understood that there should be another proposal. About how we should
> implement Restorable & Delayed Deletion of OpenStack Resource in common way
> & without these hacks with soft deletion in DB.  It is actually very
> simple, take a look at this document:
>
>
> https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing
>
>
> Best regards,
> Boris Pavlovic
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Edgar Magana
That sounds like a good idea Mark!
Yes, please do not do it during the World Cup at least the meeting is in
Brazil  :-)

Edgar

On 3/13/14 2:11 PM, "Mark McClain"  wrote:

>
>On Mar 13, 2014, at 4:22 PM, Jay Pipes  wrote:
>
>> 
>> 
>> I personally would not be able to attend a mini-summit days before the
>> regular summit. I would, however, support a mini-summit about a month
>> after the regular summit, where the focus would be on implementing the
>> designs that are discussed at the regular summit.
>
>I've been working with some of the others on the core team to set up another
>Neutron mid-cycle meet up. Like the last one, this will be focused on
>writing/reviewing code for important Juno blueprints (so those who can't
>travel can still participate).  The trouble with finding dates in late
>May to early July is that there are a number of large regional OpenStack
>events, other conferences, and the World Cup (we do have several
>football fans on the team).  I hope that we'll be able to share the
>information with everyone soon.
>
>mark 
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-13 Thread Kevin L. Mitchell
On Thu, 2014-03-13 at 16:03 -0500, Steven Kaufer wrote:
> While investigating some REST API updates I've discovered that there
> is a lot of duplicated code across the various OpenStack components.
> 
> For example, the paginate_query function exists in all these locations
> and there are a few slight differences between most of them:
> 
> https://github.com/openstack/ceilometer/blob/master/ceilometer/openstack/common/db/sqlalchemy/utils.py#L61
> https://github.com/openstack/cinder/blob/master/cinder/openstack/common/db/sqlalchemy/utils.py#L37
> https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/utils.py#L64
> https://github.com/openstack/heat/blob/master/heat/openstack/common/db/sqlalchemy/utils.py#L62
> https://github.com/openstack/keystone/blob/master/keystone/openstack/common/db/sqlalchemy/utils.py#L64
> https://github.com/openstack/neutron/blob/master/neutron/openstack/common/db/sqlalchemy/utils.py#L61
> https://github.com/openstack/nova/blob/master/nova/openstack/common/db/sqlalchemy/utils.py#L64
> 
> Does anyone know if there is any work going on to move stuff like this
> into oslo and then deprecate these functions?  

Um, all of the referenced files are already in oslo; that's what the
openstack/common subtree contains.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-13 Thread Roman Podoliaka
Hi Steven,

Code from openstack/common/ dir is 'synced' from oslo-incubator. The
'sync' is effectively a copy of oslo-incubator subtree into a project
source tree. As syncs are not done at the same time, the code of
synced modules may indeed by different for each project depending on
which commit of oslo-incubator was synced.

Thanks,
Roman

On Thu, Mar 13, 2014 at 2:03 PM, Steven Kaufer  wrote:
> While investigating some REST API updates I've discovered that there is a
> lot of duplicated code across the various OpenStack components.
>
> For example, the paginate_query function exists in all these locations and
> there are a few slight differences between most of them:
>
> https://github.com/openstack/ceilometer/blob/master/ceilometer/openstack/common/db/sqlalchemy/utils.py#L61
> https://github.com/openstack/cinder/blob/master/cinder/openstack/common/db/sqlalchemy/utils.py#L37
> https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/utils.py#L64
> https://github.com/openstack/heat/blob/master/heat/openstack/common/db/sqlalchemy/utils.py#L62
> https://github.com/openstack/keystone/blob/master/keystone/openstack/common/db/sqlalchemy/utils.py#L64
> https://github.com/openstack/neutron/blob/master/neutron/openstack/common/db/sqlalchemy/utils.py#L61
> https://github.com/openstack/nova/blob/master/nova/openstack/common/db/sqlalchemy/utils.py#L64
>
> Does anyone know if there is any work going on to move stuff like this into
> oslo and then deprecate these functions?  There are also many functions that
> process the REST API request parameters (getting the limit, marker, sort
> data, etc.) that are also replicated across many components.
>
> If no existing work is done in this area, how should this be tackled?  As a
> blueprint for Juno?
>
> Thanks,
>
> Steven Kaufer
> Cloud Systems Software
> kau...@us.ibm.com 507-253-5104
> Dept HMYS / Bld 015-2 / G119 / Rochester, MN 55901
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Mark McClain

On Mar 13, 2014, at 4:22 PM, Jay Pipes  wrote:

> 
> 
> I personally would not be able to attend a mini-summit days before the
> regular summit. I would, however, support a mini-summit about a month
> after the regular summit, where the focus would be on implementing the
> designs that are discussed at the regular summit.

I’ve been working with some of the others on the core team to set up another Neutron 
mid-cycle meet up. Like the last one, this will be focused on writing/reviewing 
code for important Juno blueprints (so those who can’t travel can still 
participate).  The trouble with finding dates in late May to early July 
is that there are a number of large regional OpenStack events, other conferences, 
and the World Cup (we do have several football fans on the team).  I hope 
that we’ll be able to share the information with everyone soon.

mark 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Duplicate code for processing REST APIs

2014-03-13 Thread Steven Kaufer


While investigating some REST API updates I've discovered that there is a
lot of duplicated code across the various OpenStack components.

For example, the paginate_query function exists in all these locations and
there are a few slight differences between most of them:

https://github.com/openstack/ceilometer/blob/master/ceilometer/openstack/common/db/sqlalchemy/utils.py#L61
https://github.com/openstack/cinder/blob/master/cinder/openstack/common/db/sqlalchemy/utils.py#L37
https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/utils.py#L64
https://github.com/openstack/heat/blob/master/heat/openstack/common/db/sqlalchemy/utils.py#L62
https://github.com/openstack/keystone/blob/master/keystone/openstack/common/db/sqlalchemy/utils.py#L64
https://github.com/openstack/neutron/blob/master/neutron/openstack/common/db/sqlalchemy/utils.py#L61
https://github.com/openstack/nova/blob/master/nova/openstack/common/db/sqlalchemy/utils.py#L64

Does anyone know if there is any work going on to move stuff like this into
oslo and then deprecate these functions?  There are also many functions
that process the REST API request parameters (getting the limit, marker,
sort data, etc.) that are also replicated across many components.

If no existing work is done in this area, how should this be tackled?  As a
blueprint for Juno?

Thanks,

Steven Kaufer
Cloud Systems Software
kau...@us.ibm.com 507-253-5104
Dept HMYS / Bld 015-2 / G119 / Rochester, MN 55901___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does exception need localize or not?

2014-03-13 Thread Alexei Kornienko

On 03/13/2014 10:44 PM, Doug Hellmann wrote:




On Thu, Feb 27, 2014 at 3:45 AM, yongli he > wrote:


refer to :
https://wiki.openstack.org/wiki/Translations

now some exceptions use _() and some do not.  The wiki suggests not
doing that, but I'm not sure.

what's the correct way?


F.Y.I


What To Translate

At present the convention is to translate *all* user-facing strings.
This means API messages, CLI responses, documentation, help text, etc.

There has been a lack of consensus about the translation of log
messages; the current ruling is that while it is not against
policy to mark log messages for translation if your project feels
strongly about it, translating log messages is not actively
encouraged.


I've updated the wiki to replace that paragraph with a pointer to 
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which 
explains the log translation rules. We will be adding the job needed 
to have different log translations during Juno.


Exception text should *not* be marked for translation, because if an
exception occurs there is no guarantee that the translation
machinery will be functional.


This makes no sense to me. Exceptions should be translated. By far the 
largest number of errors will be presented to users through the API or 
through Horizon (which gets them from the API). We will ensure that 
the translation code does its best to fall back to the original string 
if the translation fails.
There is another option: exception can contain non localized string and 
a thin wrapper will translate them on API layer right before output.

Something like:
print _(str(exception)).

It seems a cleaner solution to me since we don't need to add 
translations all over the code and we call a gettext just once when it's 
actually needed.


Doug



Regards
Yongli He


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread David Kranz

On 03/13/2014 04:56 PM, Sean Dague wrote:

On 03/13/2014 04:29 PM, David Kranz wrote:

On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:

On 10 Mar 2014, at 22:54, David Kranz  wrote:


There are a number of patches up for review that make various changes
to use "six" apis instead of Python 2 constructs. While I understand
the desire to get a head start on getting Tempest to run in Python 3,
I'm not sure it makes sense to do this work piecemeal until we are
near ready to introduce a py3 gate job. Many contributors will not be
aware of what all the differences are and py2-isms will creep back in
resulting in more overall time spent making these changes and
reviewing. Also, the core review team is busy trying to do stuff
important to the icehouse release which is barely more than 5 weeks
away. IMO we should hold off on various kinds of "cleanup" patches
for now.

+1 I agree with you David.

However, what's the best way we can go about making sure to make this a
goal for the next release cycle?

Basically we just need to decide that it is important. Then we would set
up a non-voting py3.3 job that fails miserably. We would have a list of
all the changes that are needed. Implement the changes and turn the
py3.3 job voting as soon as it passes. The more quickly this is done
once it starts, the better, both because it will cause rebase havoc and
new non-working-in-3.3 stuff will come in. So it is best done in a
highly coordinated way where the patches are submitted according to a
planned sequence and reviewed immediately.

So it's important that there is a full plan about how to get there,
including the python 3 story for everything in requirements.txt and
test-requirements.txt being resolved first.

Because partial work is pretty pointless, it bit rots. And if we can't
get to running tempest regularly with python3 then it will regress (I
would see us doing an extra python3 full run to prove that).

-Sean
Yes, and we are at the "top" of the tree in the sense that we depend on 
a lot of other packages but none (yet) depend on tempest.

It is not clear we will actually finish this in Juno.

 -David




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread Sean Dague
On 03/13/2014 04:29 PM, David Kranz wrote:
> On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:
>> On 10 Mar 2014, at 22:54, David Kranz  wrote:
>>
>>> There are a number of patches up for review that make various changes
>>> to use "six" apis instead of Python 2 constructs. While I understand
>>> the desire to get a head start on getting Tempest to run in Python 3,
>>> I'm not sure it makes sense to do this work piecemeal until we are
>>> near ready to introduce a py3 gate job. Many contributors will not be
>>> aware of what all the differences are and py2-isms will creep back in
>>> resulting in more overall time spent making these changes and
>>> reviewing. Also, the core review team is busy trying to do stuff
>>> important to the icehouse release which is barely more than 5 weeks
>>> away. IMO we should hold off on various kinds of "cleanup" patches
>>> for now.
>> +1 I agree with you David.
>>
>> However, what’s the best way we can go about making sure to make this a
>> goal for the next release cycle?
> Basically we just need to decide that it is important. Then we would set
> up a non-voting py3.3 job that fails miserably. We would have a list of
> all the changes that are needed. Implement the changes and turn the
> py3.3 job voting as soon as it passes. The more quickly this is done
> once it starts, the better, both because it will cause rebase havoc and
> new non-working-in-3.3 stuff will come in. So it is best done in a
> highly coordinated way where the patches are submitted according to a
> planned sequence and reviewed immediately.

So it's important that there is a full plan about how to get there,
including the python 3 story for everything in requirements.txt and
test-requirements.txt being resolved first.

Because partial work is pretty pointless, it bit rots. And if we can't
get to running tempest regularly with python3 then it will regress (I
would see us doing an extra python3 full run to prove that).

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] does exception need localize or not?

2014-03-13 Thread Doug Hellmann
On Thu, Feb 27, 2014 at 3:45 AM, yongli he  wrote:

>  refer to :
> https://wiki.openstack.org/wiki/Translations
>
> now some exceptions use _() and some do not.  The wiki suggests not doing
> that, but I'm not sure.
>
> what's the correct way?
>
>
> F.Y.I
>
> What To Translate
>
> At present the convention is to translate *all* user-facing strings. This
> means API messages, CLI responses, documentation, help text, etc.
>
> There has been a lack of consensus about the translation of log messages;
> the current ruling is that while it is not against policy to mark log
> messages for translation if your project feels strongly about it,
> translating log messages is not actively encouraged.
>

I've updated the wiki to replace that paragraph with a pointer to 
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation which
explains the log translation rules. We will be adding the job needed to
have different log translations during Juno.



> Exception text should *not* be marked for translation, because if an
> exception occurs there is no guarantee that the translation machinery will
> be functional.
>

This makes no sense to me. Exceptions should be translated. By far the
largest number of errors will be presented to users through the API or
through Horizon (which gets them from the API). We will ensure that the
translation code does its best to fall back to the original string if the
translation fails.

Doug



>
>
> Regards
> Yongli He
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Josh Durgin

On 03/13/2014 12:48 PM, Russell Bryant wrote:

On 03/13/2014 03:04 PM, Josh Durgin wrote:

These reverts are still confusing me. The use of glance's v2 api
is very limited and easy to protect from errors.

These patches use the v2 glance api for exactly one call - to get
image locations. This has been available and used by other
features in nova and cinder since 2012.

Jay's patch fixed the one issue that was found, and added tests for
several other cases. No other calls to glance v2 are used. The method
Jay fixed is the only one that accesses the response from glanceclient.
Furthermore, it's trivial to guard against more incompatibilities and
fall back to downloading normally if any errors occur. This already
happens if glance does not expose image locations.


There was some use of the v2 API, but not by default.  These patches
changed that, and it was broken.  We went from not requiring the v2 API
to requiring it, without a complete view for what that means, including
a severe lack of testing of that API.


That's my point - these patches did not need to require the v2 API. They
could easily try it and fall back, or detect when only the default
handler was enabled and not even try the v2 API in that case.

There is no hard requirement on the v2 API.
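
A minimal sketch of that kind of safety valve (the get_locations() call and
the surrounding names are illustrative, not the actual nova/glanceclient API):

    import logging

    LOG = logging.getLogger(__name__)

    def get_image_locations(image_service, context, image_id):
        """Return direct image locations, or None to use the normal download."""
        try:
            return image_service.get_locations(context, image_id)  # v2-only call
        except Exception:
            # Any incompatibility or error from the v2 API just means we fall
            # back to downloading the image the usual way.
            LOG.debug("Image locations unavailable; falling back to download")
            return None
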


I think it's the right call to block any non-optional use of the API
until it's properly tested, and ideally, supported more generally in nova.


Can we consider adding this safety valve and un-reverting these patches?


No.  We're already well into the freeze and we can't afford risk or
distraction.  The time it took to deal with and discuss the issue this
caused is exactly why we're hesitant to approve FFEs at all.  It's a
distraction during critical time as we work toward the RC.


FWIW the patch that caused the issue was merged before FF.


The focus right now has to be on high/critical priority bugs and
regressions.  We can revisit this properly in Juno.


Ok.

Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread David Kranz

On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:

On 10 Mar 2014, at 22:54, David Kranz  wrote:


There are a number of patches up for review that make various changes to use "six" apis 
instead of Python 2 constructs. While I understand the desire to get a head start on getting 
Tempest to run in Python 3, I'm not sure it makes sense to do this work piecemeal until we are near 
ready to introduce a py3 gate job. Many contributors will not be aware of what all the differences 
are and py2-isms will creep back in resulting in more overall time spent making these changes and 
reviewing. Also, the core review team is busy trying to do stuff important to the icehouse release 
which is barely more than 5 weeks away. IMO we should hold off on various kinds of 
"cleanup" patches for now.

+1 I agree with you David.

However, what’s the best way we can go about making sure to make this a
goal for the next release cycle?
Basically we just need to decide that it is important. Then we would set 
up a non-voting py3.3 job that fails miserably. We would have a list of 
all the changes that are needed. Implement the changes and turn the 
py3.3 job voting as soon as it passes. The more quickly this is done 
once it starts, the better, both because it will cause rebase havoc and 
new non-working-in-3.3 stuff will come in. So it is best done in a 
highly coordinated way where the patches are submitted according to a 
planned sequence and reviewed immediately.
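
For concreteness, the py2-isms these patches replace are typically along these
lines (an illustrative snippet, not taken from any particular Tempest change):

    import six

    name = u"server-1"
    data = {'flavor': 'm1.tiny'}

    # Python 2 only: isinstance(name, basestring), data.iteritems()
    # Portable equivalents using six:
    assert isinstance(name, six.string_types)
    for key, value in six.iteritems(data):
        print("%s=%s" % (key, value))
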


 -David


---
Joe H. Rahme
IRC: rahmu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Sergey Lukjanov
Heh, we should have a fathomless jar for it :(

On Thu, Mar 13, 2014 at 11:30 PM, Matthew Farrellee  wrote:
> On 03/13/2014 03:24 PM, Jay Pipes wrote:
>>
>> On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:
>>>
>>> Thanks everyone who have joined Savanna meeting.
>>
>>
>> You mean Sahara? :P
>>
>> -jay
>
>
> sergey now has to put some bitcoins in the jar...
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Jay Pipes
On Thu, 2014-03-13 at 20:06 +, Jorge Miramontes wrote:
> Now that the thread has had enough time for people to reply it appears
> that the majority of people that vocalized their opinion are in favor
> of a mini-summit, preferably to occur in Atlanta days before the
> Openstack summit. There are concerns however, most notably the concern
> that the mini-summit is not 100% inclusive (this seems to imply that
> other mini-summits are not 100% inclusive). Furthermore, there seems
> to be a concern about timing. I am relatively new to Openstack
> processes so I want to make sure I am following them. In this case,
> does majority vote win? If so, I'd like to further this discussion
> into actually planning a mini-summit. Thoughts?



I personally would not be able to attend a mini-summit days before the
regular summit. I would, however, support a mini-summit about a month
after the regular summit, where the focus would be on implementing the
designs that are discussed at the regular summit.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] An analysis of code review in Nova

2014-03-13 Thread Matt Riedemann



On 3/12/2014 7:29 PM, Arnaud Legendre wrote:

Hi Matt,

I totally agree with you and actually we have been discussing this a lot 
internally the last few weeks.
- As a top priority, the driver MUST integrate with oslo.vmware. This will be 
achieved through this chain of patches [1]. We want these patches to be merged 
before other things.
I think we should stop introducing more complexity which makes the task of refactoring 
more and more complicated. The integration with oslo.vmware is not a refactoring but 
should be seen as a way to get a more "lightweight" version of the driver which 
will make the task of refactoring a bit easier.
- Then, we want to actually refactor; we had several meetings to decide what is 
the best strategy to adopt going forward (and avoid reproducing the same 
mistakes).
The highest priority is spawn(): we need to make it modular, remove nested 
methods. This refactoring work should include the integration with the image 
handler framework [2] and introducing the notion of image type object to avoid 
all these conditions on types of images inside the core logic.


Breaking up the spawn method to make it modular and thus testable or 
refactoring to use oslo.vmware, order there doesn't seem to really 
matter to me since both sound good.  But this scares me:


"This refactoring work should include the integration with the image 
handler framework"


Hopefully the refactoring being talked about here with oslo.vmware and 
breaking spawn into chunks can be done *before* any work to refactor the 
vmware driver to use the multiple image locations feature - it will 
probably have to be given that was reverted out of Icehouse and will 
have some prerequisite work to do before it will land in Juno.



- I would like to see you cores be "involved" in this design since you will be 
reviewing the code at some point. "involved" here can be interpreted as reviewing 
the design, and/or actually participating in the design discussions. I would like 
to get your POV on this.

Let me know if this approach makes sense.

Thanks,
Arnaud

[1] https://review.openstack.org/#/c/70175/
[2] https://review.openstack.org/#/c/33409/


- Original Message -
From: "Matt Riedemann" 
To: openstack-dev@lists.openstack.org
Sent: Wednesday, March 12, 2014 11:28:23 AM
Subject: Re: [openstack-dev] [nova] An analysis of code review in Nova



On 2/25/2014 6:36 AM, Matthew Booth wrote:

I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review process is working across Nova. To that end, I've created 2
graphs, both attached to this mail.

Both graphs show a nova directory tree pruned at the point that a
directory contains less than 2% of total LOCs. Additionally, /tests and
/locale are pruned as they make the resulting graph much busier without
adding a great deal of useful information. The data for both graphs was
generated from the most recent 1000 changes in gerrit on Monday 24th Feb
2014. This includes all pending changes, just over 500, and just under
500 recently merged changes.

pending.svg shows the percentage of LOCs which have an outstanding
change against them. This is one measure of how hard it is to write new
code in Nova.

merged.svg shows the average length of time between the
ultimately-accepted version of a change being pushed and being approved.

Note that there are inaccuracies in these graphs, but they should be
mostly good. Details of generation here:
https://github.com/mdbooth/heatmap. This code is obviously
 This code is obviously
single-purpose, but is free for re-use if anyone feels so inclined.

The first graph above (pending.svg) is the one I was most interested in,
and shows exactly what I expected it to. Note the size of 'vmwareapi'.
If you check out Nova master, 24% of the vmwareapi driver has an
outstanding change against it. It is practically impossible to write new
code in vmwareapi without stomping on an oustanding patch. Compare that
to the libvirt driver at a much healthier 3%.

The second graph (merged.svg) is an attempt to look at why that is.
Again comparing the VMware driver with the libvirt we can see that at 12
days, it takes much longer for a change to be approved in the VMware
driver than in the libvirt driver. I suspect that this isn't the whole
story, which is likely a combination of a much longer review time with
very active development.

What's the impact of this? As I said above, it obviously makes it very
hard to come in as a new developer of the VMware driver when almost a
quarter of it has been rewritten, but you can't see it. I am very new to
this and others should validate my conclusions, but I also believe this
is having a detrimental impact to cod

Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Andrew Woodward
I disagree with the new dependency graph here, I don't think it's reasonable to
continue to have the Ephemeral RBD patch behind both glance v2 support and
image-multiple-location. Given the time that this has been in flight, we
should not be holding up features that do exist for features that don't.

I think we should go back to the original work proposed by Josh in [1] and
clean it up to be resubmitted once we re-open for Juno. If some
re-factoring for RBD is needed when glance v2 or image-multiple-location
does land, we would be happy to assist.

[1]  https://review.openstack.org/46879

Andrew
Mirantis
Ceph Community


On Thu, Mar 13, 2014 at 12:04 PM, Josh Durgin wrote:

> On 03/12/2014 04:54 PM, Matt Riedemann wrote:
>
>>
>>
>> On 3/12/2014 6:32 PM, Dan Smith wrote:
>>
>>> I'm confused as to why we arrived at the decision to revert the commits
 since Jay's patch was accepted. I'd like some details about this
 decision, and what new steps we need to take to get this back in for
 Juno.

>>>
>>> Jay's fix resolved the immediate problem that was reported by the user.
>>> However, after realizing why the bug manifested itself and why it didn't
>>> occur during our testing, all of the core members involved recommended a
>>> revert as the least-risky course of action at this point. If it took
>>> almost no time for that change to break a user that wasn't even using
>>> the feature, we're fearful about what may crop up later.
>>>
>>> We talked with the patch author (zhiyan) in IRC for a while after making
>>> the decision to revert about what the path forward for Juno is. The
>>> tl;dr as I recall is:
>>>
>>>   1. Full Glance v2 API support merged
>>>   2. Tests in tempest and nova that exercise Glance v2, and the new
>>>  feature
>>>   3. Push the feature patches back in
>>>
>>> --Dan
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> Those are essentially the steps as I remember them too.  Sean changed
>> the dependencies in the blueprints so the nova glance v2 blueprint is
>> the root dependency, then multiple images and then the other download
>> handler blueprints at the top.  I haven't checked but the blueprints
>> should be marked as not complete (not sure what that would be now) and
>> marked for next, the v2 glance root blueprint should be marked as high
>> priority too so we get the proper focus when Juno opens up.
>>
>
> These reverts are still confusing me. The use of glance's v2 api
> is very limited and easy to protect from errors.
>
> These patches use the v2 glance api for exactly one call - to get
> image locations. This has been available and used by other
> features in nova and cinder since 2012.
>
> Jay's patch fixed the one issue that was found, and added tests for
> several other cases. No other calls to glance v2 are used. The method
> Jay fixed is the only one that accesses the response from glanceclient.
> Furthermore, it's trivial to guard against more incompatibilities and
> fall back to downloading normally if any errors occur. This already
> happens if glance does not expose image locations.
>
> Can we consider adding this safety valve and un-reverting these patches?
>
> Josh
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
If google has done it, Google did it right!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Matthew Farrellee

On 03/13/2014 03:24 PM, Jay Pipes wrote:

On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:

Thanks everyone who have joined Savanna meeting.


You mean Sahara? :P

-jay


sergey now has to put some bitcoins in the jar...


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-13 Thread Jorge Miramontes
Hey everyone,

Now that the thread has had enough time for people to reply it appears that the 
majority of people that vocalized their opinion are in favor of a mini-summit, 
preferably to occur in Atlanta days before the Openstack summit. There are 
concerns however, most notably the concern that the mini-summit is not 100% 
inclusive (this seems to imply that other mini-summits are not 100% inclusive). 
Furthermore, there seems to be a concern about timing. I am relatively new to 
Openstack processes so I want to make sure I am following them. In this case, 
does majority vote win? If so, I'd like to further this discussion into 
actually planning a mini-summit. Thoughts?

Cheers,
--Jorge

From: Mike Wilson mailto:geekinu...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, March 11, 2014 11:57 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

Hangouts  worked well at the nova mid-cycle meetup. Just make sure you have 
your network situation sorted out before hand. Bandwidth and firewalls are what 
comes to mind immediately.

-Mike


On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton 
mailto:tom.creigh...@rackspace.com>> wrote:
When the Designate team had their mini-summit, they had an open Google Hangout 
for remote participants.  We could even have an open conference bridge if you 
are not partial to video conferencing.  With the issue of inclusion solved, 
let’s focus on a date that is good for the team!

Cheers,

Tom Creighton


On Mar 10, 2014, at 4:10 PM, Edgar Magana 
mailto:emag...@plumgrid.com>> wrote:

> Eugene,
>
> A have a few arguments why I believe this is not 100% inclusive
>   • Is the foundation involved in this process? How? What is the budget? 
> Who is responsible from the foundation side?
>   • If somebody already made travel arrangements, it won't be possible to 
> make changes at no cost.
>   • Staying extra days in a different city could impact anyone's budget
>   • As an OpenStack developer, I want to understand why the summit is not 
> enough for deciding the next steps for each project. If that is the case, I 
> would prefer to make changes on the organization of the summit instead of 
> creating mini-summits all around!
> I could continue but I think these are good enough.
>
> I could agree with your point about previous summits being distracting for 
> developers, this is why this time the OpenStack foundation is trying very 
> hard to allocate specific days for the conference and specific days for the 
> summit.
> The point where I totally agree with you is that we SHOULD NOT have sessions 
> about work that will be done no matter what!  Those are just a waste of good 
> time that could be invested in very interesting discussions about topics that 
> are still not clear.
> I would recommend that you express this opinion to Mark. He is the right guy 
> to decide which sessions will bring interesting discussions and which ones 
> will be just a declaration of intents.
>
> Thanks,
>
> Edgar
>
> From: Eugene Nikanorov 
> mailto:enikano...@mirantis.com>>
> Reply-To: OpenStack List 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Monday, March 10, 2014 10:32 AM
> To: OpenStack List 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?
>
> Hi Edgar,
>
> I'm neutral to the suggestion of mini summit at this point.
> Why do you think it will exclude developers?
> If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city) 
> that would allow anyone who joins OS Summit to save on extra travelling.
> OS Summit itself is too distracting to have really productive discussions, 
> unless you're missing the sessions and spending the time discussing.
> For instance, design sessions are basically only good for declarations of intent, 
> but not for real discussion of a complex topic at a meaningful level of detail.
>
> What would be your suggestions to make this more inclusive?
> I think the time and place is the key here - hence Atlanta and few days prior 
> OS summit.
>
> Thanks,
> Eugene.
>
>
>
> On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana 
> mailto:emag...@plumgrid.com>> wrote:
>> Team,
>>
>> I found that having a mini-summit with a very short notice means excluding
>> a lot of developers of such an interesting topic for Neutron.
>> The OpenStack summit is the opportunity for all developers to come
>> together and discuss the next steps, there are many developers that CAN
>> NOT afford another trip for a "special" summit. I am personally against
>> that and I do support Mark's proposal of having all the conversation over
>> IRC and mailing list.
>>
>> Please, do not start excluding people that won't be able to attend another
>> face-to-face meeting besides the summit. I believe tha

Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Anna A Sortland
Hi Mark, 

The existing v3/users API will still exist and will show all users. So you 
will still be able to grant role to a user who does not have one now.
I wonder if it makes sense to add a new API that would show only users 
that have role grants. 

So we would have:
v3/users - list all users   (existing API)
v3/roles/users - list users that have role grants   (new API)
v3/roles/{role_id}/users - list users with a specified role (existing 
API)



Anna Sortland
Cloud Systems Software Development
IBM Rochester, MN
annas...@us.ibm.com






From:   Mark Washenberger 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   03/13/2014 01:01 PM
Subject:Re: [openstack-dev] [keystone] All LDAP users returned 
using keystone v3/users API



Hi Anna,


On Thu, Mar 13, 2014 at 8:36 AM, Anna A Sortland  
wrote:
[A] The current keystone LDAP community driver returns all users that 
exist in LDAP via the API call v3/users, instead of returning just users 
that have role grants (similar processing is true for groups). This could 
potentially be a very large number of users. We have seen large companies 
with LDAP servers containing hundreds and thousands of users. We are aware 
of the filters available in keystone.conf ([ldap].user_filter and 
[ldap].query_scope) to cut down on the number of results, but they do not 
provide sufficient filtering (for example, it is not possible to set 
user_filter to members of certain known groups for OpenLDAP without 
creating a memberOf overlay on the LDAP server). 
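
For example, a filter of roughly the following shape would be needed to limit
the listing to members of a known group -- which is exactly what plain OpenLDAP
cannot evaluate without the memberOf overlay (values are illustrative):

    [ldap]
    user_filter = (memberOf=cn=openstack-users,ou=Groups,dc=example,dc=com)
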

[Nathan Kinder] What attributes would you filter on?  It seems to me that 
LDAP would need to have knowledge of the roles to be able to filter based 
on the roles.  This is not necessarily the case, as identity and 
assignment can be split in Keystone such that identity is in LDAP and role 
assignment is in SQL.  I believe it was designed this way to deal with 
deployments
where LDAP already exists and there is no need (or possibility) of adding 
role info into LDAP. 

[A] That's our main use case. The users and groups are in LDAP and role 
assignments are in SQL. 
You would filter on role grants and this information is in SQL backend. So 
new API would need to query both identity and assignment drivers. 

From my perspective, it seems there is a chicken-and-egg problem with this 
proposal. If a user doesn't have a role assigned, the user does not show 
up in the list. But if the user doesn't show up in the list, the user 
doesn't exist. If the user doesn't exist, you cannot add a role to it.

Perhaps what is needed is just some sort of filter to listing users that 
only returns users with a role in the cloud?

 

[Nathan Kinder] Without filtering based on a role attribute in LDAP, I 
don't think that there is a good solution if you have OpenStack and 
non-OpenStack users mixed in the same container in LDAP.
If you want to first find all of the users that have a role assigned to 
them in the assignments backend, then pull their information from LDAP, I 
think that you will end up with one LDAP search operation per user. This 
also isn't a very scalable solution.

[A] What was the reason the LDAP driver was written this way, instead of 
returning just the users that have OpenStack-known roles? Was the creation 
of a separate API for this function considered? 
Are other exploiters of OpenStack (or users of Horizon) experiencing this 
issue? If so, what was their approach to overcome this issue? We have been 
prototyping a keystone extension that provides an API that provides this 
filtering capability, but it seems like a function that should be 
generally available in keystone. 

[Nathan Kinder] I'm curious to know how your prototype is looking to 
handle this. 

[A] The prototype basically first calls assignment API 
list_role_assignments() to get a list of users and groups with role 
grants. It then iterates the retrieved list and calls identity API 
list_users_in_group() to get the list of users in these groups with grants 
and get_user() to get users that have role grants but do not belong to the 
groups with role grants (a call for each user). Both calls ignore groups 
and users that are not found in the LDAP registry but exist in SQL (this 
could be the result of a user or group being removed from LDAP, but the 
corresponding role grant was not revoked). Then the code removes 
duplicates if any and returns the combined list. 
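
A condensed sketch of that flow (it assumes list_role_assignments() returns
dicts keyed by 'user_id' or 'group_id'; error handling is simplified):

    def list_users_with_role_grants(assignment_api, identity_api):
        users = {}
        for grant in assignment_api.list_role_assignments():
            try:
                if 'group_id' in grant:
                    for user in identity_api.list_users_in_group(grant['group_id']):
                        users[user['id']] = user
                elif 'user_id' in grant:
                    user = identity_api.get_user(grant['user_id'])
                    users[user['id']] = user
            except Exception:
                # Skip grants whose user/group was removed from LDAP but whose
                # role assignment was never revoked in SQL.
                continue
        return list(users.values())
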

The new extension API is /v3/my_new_extension/users. Maybe the better 
naming would be v3/roles/users (list users with any role) - compare to 
existing v3/roles/{role_id}/users  (list users with a specified role). 

Another alternative that we've tried is just a new identity driver that 
inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides 
just the list_users() function. That's probably not the best approach from 
OpenStack standards point of view but I would like to get community's 
feedback on whether this is acceptable. 


I've p

Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Russell Bryant
On 03/13/2014 03:04 PM, Josh Durgin wrote:
> These reverts are still confusing me. The use of glance's v2 api
> is very limited and easy to protect from errors.
> 
> These patches use the v2 glance api for exactly one call - to get
> image locations. This has been available and used by other
> features in nova and cinder since 2012.
> 
> Jay's patch fixed the one issue that was found, and added tests for
> several other cases. No other calls to glance v2 are used. The method
> Jay fixed is the only one that accesses the response from glanceclient.
> Furthermore, it's trivial to guard against more incompatibilities and
> fall back to downloading normally if any errors occur. This already
> happens if glance does not expose image locations.

There was some use of the v2 API, but not by default.  These patches
changed that, and it was broken.  We went from not requiring the v2 API
to requiring it, without a complete view for what that means, including
a severe lack of testing of that API.

I think it's the right call to block any non-optional use of the API
until it's properly tested, and ideally, supported more generally in nova.

> Can we consider adding this safety valve and un-reverting these patches?

No.  We're already well into the freeze and we can't afford risk or
distraction.  The time it took to deal with and discuss the issue this
caused is exactly why we're hesitant to approve FFEs at all.  It's a
distraction during critical time as we work toward the RC.

The focus right now has to be on high/critical priority bugs and
regressions.  We can revisit this properly in Juno.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Mike Wilson
The restore use case is for sure inconsistently implemented and used. I
think I agree with Boris that we treat it as separate and just move on with
cleaning up soft delete. I imagine most deployments don't like having most
of the rows in their table be useless and make db access slow? That being
said, I am a little sad my hacky restore method will need to be reworked
:-).

-Mike


On Thu, Mar 13, 2014 at 1:30 PM, Clint Byrum  wrote:

> Excerpts from Tim Bell's message of 2014-03-12 11:02:25 -0700:
> >
> > >
> > > If you want to archive images per-say, on deletion just export it to a
> 'backup tape' (for example) and store enough of the metadata
> > > on that 'tape' to re-insert it if this is really desired and then
> delete it from the database (or do the export... asynchronously). The
> > > same could be said with VMs, although likely not all resources, aka
> networks/.../ make sense to do this.
> > >
> > > So instead of deleted = 1, wait for cleaner, just save the resource (if
> > > possible) + enough metadata on some other system ('backup tape',
> alternate storage location, hdfs, ceph...) and leave it there unless
> > > it's really needed. Making the database more complex (and all
> associated code) to achieve this same goal seems like a hack that just
> > > needs to be addressed with a better way to do archiving.
> > >
> > > In a cloudy world of course people would be able to recreate
> everything they need on-demand so who needs undelete anyway ;-)
> > >
> >
> > I have no problem if there was an existing process integrated into all
> of the OpenStack components which would produce me an archive trail with
> meta data and a command to recover the object from that data.
> >
> > Currently, my understanding is that there is no such function and thus
> the proposal to remove the deleted column is premature.
> >
>
> That seems like an unreasonable request of low level tools like Nova. End
> user applications and infrastructure management should be responsible
> for these things and will do a much better job of it, as you can work
> your own business needs for reliability and recovery speed into an
> application aware solution. If Nova does it, your cloud just has to
> provide everybody with the same un-delete, which is probably overkill
> for _many_ applications.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel-dev] [Fuel] Fuel Project 4.1 milestone reached!

2014-03-13 Thread David Easter
Hi All,

  Just wanted to let everyone know that the Fuel Project met its 4.1
milestone on Monday, March 7th.  This latest version includes (among other
things):
* Support for the OpenStack Havana 2013.2.2 release
* Ability to stop a deployment before completion
* Ability to reset an environment back to pre-deployment state without
losing original configuration settings
* NIC Bonding configuration in the Fuel UI
* Ceph acting as a backend for ephemeral volumes is no longer experimental
* The Ceilometer section within Horizon is now enabled by default
* Multiple network roles can share a single physical NIC
* Hundreds of defect fixes
Please feel free to visit https://launchpad.net/fuel/4.x/4.1 to view the
blueprints implemented and defects fixed.

Thanks to everyone in the community who contributed to hitting this
milestone!

- David J. Easter
  Product Line Manager,  Mirantis


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-13 Thread Boris Pavlovic
Hi stackers,

As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
(step by step)
http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

I understood that there should be another proposal about how we should
implement Restorable & Delayed Deletion of OpenStack Resources in a common way,
without these hacks with soft deletion in the DB.  It is actually very
simple; take a look at this document:

https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Clint Byrum
Excerpts from Tim Bell's message of 2014-03-12 11:02:25 -0700:
> 
> > 
> > If you want to archive images per se, on deletion just export it to a 
> > 'backup tape' (for example) and store enough of the metadata
> > on that 'tape' to re-insert it if this is really desired and then delete it 
> > from the database (or do the export... asynchronously). The
> > same could be said with VMs, although likely not all resources, aka 
> > networks/.../ make sense to do this.
> > 
> > So instead of deleted = 1, wait for cleaner, just save the resource (if
> > possible) + enough metadata on some other system ('backup tape', alternate 
> > storage location, hdfs, ceph...) and leave it there unless
> > it's really needed. Making the database more complex (and all associated 
> > code) to achieve this same goal seems like a hack that just
> > needs to be addressed with a better way to do archiving.
> > 
> > In a cloudy world of course people would be able to recreate everything 
> > they need on-demand so who needs undelete anyway ;-)
> > 
> 
> I have no problem if there was an existing process integrated into all of the 
> OpenStack components which would produce me an archive trail with meta data 
> and a command to recover the object from that data.
> 
> Currently, my understanding is that there is no such function and thus the 
> proposal to remove the deleted column is premature.
> 

That seems like an unreasonable request of low level tools like Nova. End
user applications and infrastructure management should be responsible
for these things and will do a much better job of it, as you can work
your own business needs for reliability and recovery speed into an
application aware solution. If Nova does it, your cloud just has to
provide everybody with the same un-delete, which is probably overkill
for _many_ applications.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-03-12 10:58:36 -0700:
> On Wed, 2014-03-12 at 17:35 +, Tim Bell wrote:
> > And if the same mistake is done for a cinder volume or a trove database ?
> 
> Snapshots and backups?
> 

and bears, oh my!

+1, whether it is large data on a volume or saved state in the RAM of
a compute node, it isn't safe unless it is duplicated.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Jay Pipes
On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:
> Thanks everyone who have joined Savanna meeting.

You mean Sahara? :P

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes March 13 [savanna]

2014-03-13 Thread Sergey Lukjanov
Thanks to everyone who joined the Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-03-13-18.04.html
Log: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-03-13-18.04.log.html

It was decided not to keep backward compatibility for the renaming due to
the large amount of additional effort needed. We'll discuss the starting date
for full backward compatibility at the next meeting.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-13 Thread Josh Durgin

On 03/12/2014 04:54 PM, Matt Riedemann wrote:



On 3/12/2014 6:32 PM, Dan Smith wrote:

I'm confused as to why we arrived at the decision to revert the commits
since Jay's patch was accepted. I'd like some details about this
decision, and what new steps we need to take to get this back in for
Juno.


Jay's fix resolved the immediate problem that was reported by the user.
However, after realizing why the bug manifested itself and why it didn't
occur during our testing, all of the core members involved recommended a
revert as the least-risky course of action at this point. If it took
almost no time for that change to break a user that wasn't even using
the feature, we're fearful about what may crop up later.

We talked with the patch author (zhiyan) in IRC for a while after making
the decision to revert about what the path forward for Juno is. The
tl;dr as I recall is:

  1. Full Glance v2 API support merged
  2. Tests in tempest and nova that exercise Glance v2, and the new
 feature
  3. Push the feature patches back in

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Those are essentially the steps as I remember them too.  Sean changed
the dependencies in the blueprints so the nova glance v2 blueprint is
the root dependency, then multiple images, and then the other download
handler blueprints at the top.  I haven't checked, but the blueprints
should be marked as not complete (not sure what that would be now) and
marked for next; the v2 glance root blueprint should be marked as high
priority too so we get the proper focus when Juno opens up.


These reverts are still confusing me. The use of glance's v2 api
is very limited and easy to protect from errors.

These patches use the v2 glance api for exactly one call - to get
image locations. This has been available and used by other
features in nova and cinder since 2012.

Jay's patch fixed the one issue that was found, and added tests for
several other cases. No other calls to glance v2 are used. The method
Jay fixed is the only one that accesses the response from glanceclient.
Furthermore, it's trivial to guard against more incompatibilities and
fall back to downloading normally if any errors occur. This already
happens if glance does not expose image locations.

Can we consider adding this safety valve and un-reverting these patches?

Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Boris Pavlovic
Hi all,


I would like to correct the direction of this thread, because it is going in the
wrong direction.

Assumptions:
1) Yes, restoring already deleted resources could be useful.
2) The current approach with soft deletion is broken by design and we should
get rid of it.

More about why I think that it is broken:
1) When you are restoring some resource you have to restore N records from N
tables (e.g. a VM)
2) Restoring sometimes means not only restoring DB records.
3) Not all resources should be restorable (e.g. why would I need to restore
a fixed_ip or key pairs?)


So what we should think about is:
1) How to implement restore functionality in a common way (e.g. a framework
that will live in oslo)
2) How to split the work of getting rid of soft deletion into steps (which I
already mentioned):
a) remove soft deletion from places where we are not using it
b) replace internal code that uses soft deletion with that framework
c) replace API stuff using ceilometer (for logs) or this framework (for
restorable stuff)


To put it in a nutshell: Restoring Deleted resources / Delayed Deletion != Soft
deletion.
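
(For concreteness, the soft-deletion pattern being argued about is roughly the
following - a simplified, self-contained SQLAlchemy sketch of the usual mixin
style, not any one project's actual model:)

    import datetime

    from sqlalchemy import Column, DateTime, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        hostname = Column(String(255))
        # "Deleted" rows are only flagged, never removed, which is why tables
        # keep growing and every query needs a deleted == 0 filter.
        deleted = Column(Integer, default=0)
        deleted_at = Column(DateTime, nullable=True)

        def soft_delete(self, session):
            self.deleted = self.id
            self.deleted_at = datetime.datetime.utcnow()
            session.add(self)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    vm = Instance(hostname='test')
    session.add(vm)
    session.commit()
    vm.soft_delete(session)   # the row stays in the table, flagged as deleted
    session.commit()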


Best regards,
Boris Pavlovic



On Thu, Mar 13, 2014 at 9:21 PM, Mike Wilson  wrote:

> For some guests we use the LVM imagebackend and there are times when the
> guest is deleted on accident. Humans, being what they are, don't back up
> their files and don't take care of important data, so it is not uncommon to
> use lvrestore and "undelete" an instance so that people can get their data.
> Of course, this is not always possible if the data has been subsequently
> overwritten. But it is common enough that I imagine most of our operators
> are familiar with how to do it. So I guess my saying that we do it on a
> regular basis is not quite accurate. Probably would be better to say that
> it is not uncommon to do this, but definitely not a daily task or something
> of that ilk.
>
> I have personally "undeleted" an instance a few times after accidental
> deletion also. I can't remember the specifics, but I do remember doing it
> :-).
>
> -Mike
>
>
> On Tue, Mar 11, 2014 at 12:46 PM, Johannes Erdfelt 
> wrote:
>
>> On Tue, Mar 11, 2014, Mike Wilson  wrote:
>> > Undeleting things is an important use case in my opinion. We do this in
>> our
>> > environment on a regular basis. In that light I'm not sure that it
>> would be
>> > appropriate just to log the deletion and git rid of the row. I would
>> like
>> > to see it go to an archival table where it is easily restored.
>>
>> I'm curious, what are you undeleting and why?
>>
>> JE
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Monty Taylor
Will do!

On Mar 13, 2014 10:13 AM, Solly Ross  wrote:
>
> @Monty: having a packaging system sounds like a good idea.  Send us a pull 
> request on github.com/kanaka/noVNC. 
>
> Best Regards, 
> Solly Ross 
>
> - Original Message - 
> From: "Monty Taylor"  
> To: "Sean Dague" , "OpenStack Development Mailing List (not 
> for usage questions)" , 
> openst...@nemebean.com 
> Cc: openstack-in...@lists.openstack.org 
> Sent: Thursday, March 13, 2014 12:09:01 PM 
> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
> noVNC from github.com/kanaka 
>
> I agree. 
>
> Solly - in addition to potentially 'adopting' noVNC - or as a parallel 
> train of thought ... 
>
> As we started working on storyboard in infra, we've started using the 
> bower tool for html/javascript packaging - and we have some ability to 
> cache the output of that pretty easily. Would you accept patches to 
> noVNC to add bower config things and/or publication of tarballs of 
> releases via it? Since noVNC isn't likely to be participating in the 
> integrated gate in either case, we could potentially split the question 
> of "how do we get copies of it in a way that doesn't depend on OS 
> distros" (which is why we use pip for our python depends) and "does 
> noVNC want to have its git repo exist in OpenStack Infra systems. 
>
> Monty 
>
> On 03/13/2014 07:44 AM, Sean Dague wrote: 
> > I think a bigger question is why are we using a git version of something 
> > outside of OpenStack. 
> > 
> > Where is a noVNC release we can point to and use? 
> > 
> > In Juno I'd really be pro removing all the devstack references to git 
> > repos not on git.openstack.org, because these kinds of failures have 
> > real impact. 
> > 
> > Currently we have 4 repositories that fit this bill: 
> > 
> > SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git} 
> > NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git} 
> > RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git} 
> > SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}
> >  
> > 
> > I think all of these probably need to be removed from devstack. We 
> > should be using release versions (preferably in distros, though allowed 
> > to be in language specific package manager). 
> > 
> > -Sean 
> > 
> > On 03/13/2014 10:26 AM, Solly Ross wrote: 
> >> @bnemec: I don't think that's been considered.  I'm actually one of the 
> >> upstream maintainers for noVNC.  The only concern that I'd have with 
> >> OpenStack adopting noVNC (there are other maintainers, as well as the 
> >> author, so I'd have to check with them as well) is that there are a few 
> >> other projects that use noVNC, so we'd need to make sure that no 
> >> OpenStack-specific code gets merged into noVNC if we adopt it.  Other that 
> >> that, though, adopting noVNC doesn't sound like a horrible idea. 
> >> 
> >> Best Regards, 
> >> Solly Ross 
> >> 
> >> - Original Message - 
> >> From: "Ben Nemec"  
> >> To: "OpenStack Development Mailing List (not for usage questions)" 
> >>  
> >> Cc: openstack-in...@lists.openstack.org 
> >> Sent: Wednesday, March 12, 2014 3:38:19 PM 
> >> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
> >> cloning noVNC from github.com/kanaka 
> >> 
> >> 
> >> 
> >> On 2014-03-11 20:34, Joshua Harlow wrote: 
> >> 
> >> 
> >> https://status.github.com/messages 
> >> * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
> >> mitigations we have in place are proving effective in protecting us and 
> >> we're hopeful that we've got this one resolved.' 
> >> If you were cloning from github.org and not http://git.openstack.org then 
> >> you were likely seeing some of the DDoS attack in action. 
> >> Unfortunately I don't think novnc is in git.openstack.org because it's not 
> >> an OpenStack project. I wonder if we should investigate adopting it (if 
> >> the author(s) are amenable to that) since we're using the git version of 
> >> it. Maybe that's already been considered and I just don't know about it. 
> >> :-) 
> >> -Ben 
> >> 
> >> 
> >> 
> >> From: Sukhdev Kapur < sukhdevka...@gmail.com > 
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < 
> >> openstack-dev@lists.openstack.org > 
> >> Date: Tuesday, March 11, 2014 at 4:08 PM 
> >> To: "Dane Leblanc (leblancd)" < lebla...@cisco.com > 
> >> Cc: "OpenStack Development Mailing List (not for usage questions)" < 
> >> openstack-dev@lists.openstack.org >, " openstack-in...@lists.openstack.org 
> >> " < openstack-in...@lists.openstack.org > 
> >> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
> >> cloning noVNC from github.com/kanaka 
> >> 
> >> 
> >> 
> > I have noticed that even a clone of devstack has failed a few times within 
> > the last couple of hours - it was running fairly smoothly until now. 
> >> -Sukhdev 
> >> 
> >> 
> >> On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < sukhdevka...@gmail.com >

[openstack-dev] [Neutron] [Nova] libvirt+Xen+OVS VLAN networking in icehouse

2014-03-13 Thread iain macdonnell
I've been playing with an icehouse build grabbed from fedorapeople. My
hypervisor platform is libvirt-xen, which I understand may be
deprecated for icehouse(?) but I'm stuck with it for now, and I'm
using VLAN networking. It almost works, but I have a problem with
networking. In havana, the VIF gets placed on a legacy ethernet
bridge, and a veth pair connects that to the OVS integration bridge.
I understand that this was done to enable iptables filtering at the
VIF. In icehouse, the VIF appears to get placed directly on the
integration bridge - i.e. the libvirt XML includes something like:


  
  
  
  



The problem is that the port on br-int does not have the VLAN tag.
i.e. I'll see something like:

Bridge br-int
Port "tap43b9d367-32"
Interface "tap43b9d367-32"
Port "qr-cac87198-df"
tag: 1
Interface "qr-cac87198-df"
type: internal
Port "int-br-bond0"
Interface "int-br-bond0"
Port br-int
Interface br-int
type: internal
Port "tapb8096c18-cf"
tag: 1
Interface "tapb8096c18-cf"
type: internal


If I manually set the tag using 'ovs-vsctl set port tap43b9d367-32
tag=1', traffic starts flowing where it needs to.

I've traced this back a bit through the agent code, and find that the
bridge port is ignored by the agent because it does not have any
"external_ids" (observed with 'ovs-vsctl list Interface'), and so the
update process that normally sets the tag is not invoked. It appears
that Xen is adding the port to the bridge, but nothing is updating it
with the neutron-specific "external_ids" that the agent expects to
see.

Before I dig any further, I thought I'd ask; is this stuff supposed to
work at this point? Is it intentional that the VIF is getting placed
directly on the integration bridge now? Might I be missing something
in my configuration?

FWIW, I've tried the ML2 plugin as well as the legacy OVS one, with
the same result.

TIA,

~iain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Data Integration]

2014-03-13 Thread Rajdeep Dua
Thanks for the feedback.
Will create a design on these lines and send across for review


On Wed, Mar 12, 2014 at 3:53 PM, Tim Hinrichs  wrote:

> Hi Rajdeep,
>
> This is a great problem to work on because it confronts one of the
> assumptions we're making in Congress: that cloud services can be
> represented as a collection of tables in a reasonable way.  You're asking
> good questions here.
>
> More responses inline.
>
> Tim
>
>
> --
>
> *From: *"Rajdeep Dua" 
> *To: *openstack-dev@lists.openstack.org
> *Sent: *Wednesday, March 12, 2014 11:54:28 AM
> *Subject: *[openstack-dev] [Congress][Data Integration]
>
>
> Need some guidance on how to convert nested types into flat tuples.
> Also should we reorder the tuple values in a particular sequence?
>
> Order of tuples doesn't matter. Order of columns (values) within a tuple
> doesn't really matter either, except that all tuples must use the same
> order and the policies we write must know which column is which.
>
>
> Thanks
> Rajdeep
>
> As an example i have shown networks and ports tuples with some nested types
>
> networks - tuple format
> ---
>
> keys (for reference)
>
> {'status','subnets',
> 'name','test-network','provider:physical_network','admin_state_up',
> 'tenant_id','provider:network_type','router:external',
> 'shared',id,'provider:segmentation_id'}
>
> values
> ---
> ('ACTIVE', ['4cef03d0-1d02-40bb-8c99-2f442aac6ab0'], 'test-network', None,
> True,
> '570fe78a1dc54cffa053bd802984ede2', 'gre', False, False,
> '240ff9df-df35-43ae-9df5-27fae87f2492', 4)
>
> Here we'd want to pull the List out and replace it with an ID. Then create
> another table that shows which subnets belong to the list with that ID.
> (You can think of the ID as a pointer to the list---in the C/C++ sense.)
>  So something like...
>
> network( 'ACTIVE', 'ID1', 'test-network', None, True,
>
> '570fe78a1dc54cffa053bd802984ede2', 'gre', False, False,
> '240ff9df-df35-43ae-9df5-27fae87f2492', 4)
>
> element('ID1', '4cef03d0-1d02-40bb-8c99-2f442aac6ab0')
> element('ID1', )
>
> The other thing to think about is whether we want 1 table with 10 columns
> or we want 10 tables with 2 columns each. In this example, we would have...
>
>
> network('net1')
> network.status('net1', 'ACTIVE' )
> network.subnets('net1', 'ID1')
> network.name('net1', 'test-network')
> ...
>
> The period is just another character in the tablename. Nothing fancy
> happening here.
>
> The ports example below would need a similar flattening.  To handle
> dictionaries, I would use the dot-notation shown above.
>
> A single Neutron API call might populate several Congress tables.
>
> Tim
>
>
> ports - tuple format
> 
> keys (for reference)
>
> {'status','binding:host_id', 'name', 'allowed_address_pairs',
> 'admin_state_up', 'network_id',
> 'tenant_id', 'extra_dhcp_opts': [],
> 'binding:vif_type', 'device_owner',
> 'binding:capabilities', 'mac_address',
> 'fixed_ips' , 'id', 'security_groups',
> 'device_id'}
>
> Values
>
> ('ACTIVE', 'havana', '', [], True, '240ff9df-df35-43ae-9df5-27fae87f2492',
> '570fe78a1dc54cffa053bd802984ede2', [], 'ovs', 'network:router_interface',
> {'port_filter': True}, 'fa:16:3e:ab:90:df', [{'subnet_id':
> '4cef03d0-1d02-40bb-8c99-2f442aac6ab0', 'ip_address': '90.0.0.1'}],
> '0a2ce569-85a8-45ec-abb3-0d4b34ff69ba', [],
> '864e4acf-bf8e-4664-8cf7-ad5daa95681e')
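
(A hand-written illustration of the flattening described above - the helper,
the generated list IDs and the dot-notation table names are only for the
example, this is not Congress code:)

    def flatten(table, obj, obj_id, rows):
        """Turn one nested dict into (table_name, tuple) rows."""
        rows.append((table, (obj_id,)))
        for key, value in obj.items():
            column_table = '%s.%s' % (table, key)
            if isinstance(value, list):
                list_id = '%s-%s' % (obj_id, key)   # "pointer" to the list
                rows.append((column_table, (obj_id, list_id)))
                for element in value:
                    rows.append(('element', (list_id, element)))
            elif isinstance(value, dict):
                sub_id = '%s-%s' % (obj_id, key)
                rows.append((column_table, (obj_id, sub_id)))
                flatten(column_table, value, sub_id, rows)
            else:
                rows.append((column_table, (obj_id, value)))
        return rows

    rows = flatten('network',
                   {'status': 'ACTIVE',
                    'subnets': ['4cef03d0-1d02-40bb-8c99-2f442aac6ab0'],
                    'name': 'test-network'},
                   '240ff9df-df35-43ae-9df5-27fae87f2492', [])
    # -> ('network', (net_id,)), ('network.status', (net_id, 'ACTIVE')),
    #    ('network.subnets', (net_id, list_id)), ('element', (list_id, subnet)), ...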
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
>
> https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=%2FZ35AkRhp2kCW4Q3MPeE%2BxY2bqaf%2FKm29ZfiqAKXxeo%3D%0A&m=A86YVKfBX5U3g6F7eNScJYjr6Qwjv4dyDyVxE9Im8g8%3D%0A&s=0345ab3711a58ec1ebcee08649f047826cec593f57e9843df0fec2f8cfb03b42
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Fox, Kevin M
Funny this topic came up. I was just looking into some of this yesterday. 
Here's some links that I came up with:

*  
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-qemu-ga-freeze-thaw.html
 - Describes how application level safe backups of vm's can be accomplished. 
Didn't have the proper framework prior to RedHat 6.5. Looks reasonable now.

* http://lists.gnu.org/archive/html/qemu-devel/2012-11/msg01043.html - An 
example of a hook that lets you snapshot mysql safely while it is still running.

* https://wiki.openstack.org/wiki/Cinder/QuiescedSnapshotWithQemuGuestAgent - A 
blueprint for making safe live snapshots enabled via the Cinder api. Its not 
there yet, but being worked on.

 * https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support - Nova 
supports freeze/thawing the instance.

Thanks,
Kevin

From: Bruce Montague [bruce_monta...@symantec.com]
Sent: Thursday, March 13, 2014 7:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a within-guest 
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work 
with KVM? Does qemu-ga work with libvirt (can VSS quiesce be triggered via 
libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an 
equivalent to VSS on Linux systems, was that done?  If so, could an OpenStack 
API provide a generic quiesce request that would then get passed to libvirt? 
(Also, the XenServer VSS support seems different from qemu/KVM's - is this true? 
Can it also be accessed through libvirt?)

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important requirements in enterprise scenarios, but 
there's an important missing piece in the current OpenStack APIs: support for 
application consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server) with e.g. vSphere and XenServer supporting it as well 
(quiescing) and with the option for third-party vendors to add drivers for their 
solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


> On 12/mar/2014, at 20:45, "Bruce Montague"  
> wrote:
>
>
> Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
> the following list sketches some speculative OpenStack DR use cases. These 
> use cases do not reflect any specific product behavior and span a wide 
> spectrum. This list is not a proposal, it is intended primarily to solicit 
> additional discussion. The first basic use case, (1), is described in a bit 
> more detail than the others; many of the others are elaborations on this 
> basic theme.
>
>
>
> * (1) [Single VM]
>
> A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy 
> Services) installed runs a key application and integral database. VSS can 
> quiesce the app, database, filesystem, and I/O on demand and can be invoked 
> external to the guest.
>
>   a. The VM's volumes, including the boot volume, are replicated to a remote 
> DR site (another OpenStack deployment).
>
>   b. Some form of replicated VM or VM metadata exists at the remote site. 
> This VM/description includes the replicated volumes. Some systems might use 
> cold migration or some form of wide-area live VM migration to establish this 
> remote site VM/description.
>
>   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
> volumes in an application-consistent state. This state is flushed all the way 
> through to the remote volumes. As each remote volume reaches its 
> application-consistent state, this is recognized in some fashion, perhaps by 
> an in-band signal, and a snapshot of the volume is made at the remote site. 
> Volume replication is re-enabled immediately following the snapshot. A backup 
> is then made of the snapshot on the remote site. At the completion of this 
> cycle, application-consistent volume snapshots and backups 

Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-13 Thread Mike Spreitzer
Therve told me he actually tested this and it works.  Now if I could only 
configure DevStack to install a working Neutron...

Regards,
Mike



From:   "Fox, Kevin M" 
To: Chris Armstrong , "OpenStack 
Development Mailing List (not for usage questions)" 
, 
Date:   03/13/2014 02:19 PM
Subject:Re: [openstack-dev] [heat][neutron] 
OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?



Hi Chris,

That's great to hear. I'm looking forward to installing icehouse and 
testing that out. :)

Thanks,
Kevin


From: Chris Armstrong [chris.armstr...@rackspace.com]
Sent: Wednesday, March 12, 2014 1:29 PM
To: Fox, Kevin M; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup 
and OS::Neutron::PoolMember?

Hi Kevin,

The design of OS::Heat::AutoScalingGroup should not require explicit 
support for load balancers. The design is meant to allow you to create a 
resource that wraps up both a OS::Heat::Server and a PoolMember in a 
template and use it via a Stack resource.

(Note that Mike was talking about the new OS::Heat::AutoScalingGroup 
resource, not AWS::AutoScaling::AutoScalingGroup).

So, while I haven’t tested this case with PoolMember specifically, and 
there may still be bugs, no more feature implementation should be 
necessary (I hope).

-- 
Christopher Armstrong
IRC: radix


On March 12, 2014 at 1:52:53 PM, Fox, Kevin M (kevin@pnnl.gov) wrote:
I submitted a blueprint a while back that I think is relevant:

https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas

Currently heat autoscaling doesn't interact with Neutron lbaas and the 
configurable bits aren't configurable enough to allow it without code 
changes as far as I can tell.

I think it's only a few days of work, but the OpenStack CLA is preventing 
me from contributing. :/

Thanks,
Kevin


From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Wednesday, March 12, 2014 11:34 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a 
nested stack that includes a OS::Neutron::PoolMember?  Should I expect 
this to work?

Thanks,
Mike
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and OS::Neutron::PoolMember?

2014-03-13 Thread Fox, Kevin M
Hi Chris,

That's great to hear. I'm looking forward to installing icehouse and testing 
that out. :)

Thanks,
Kevin


From: Chris Armstrong [chris.armstr...@rackspace.com]
Sent: Wednesday, March 12, 2014 1:29 PM
To: Fox, Kevin M; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Hi Kevin,

The design of OS::Heat::AutoScalingGroup should not require explicit support 
for load balancers. The design is meant to allow you to create a resource that 
wraps up both a OS::Heat::Server and a PoolMember in a template and use it via 
a Stack resource.

(Note that Mike was talking about the new OS::Heat::AutoScalingGroup resource, 
not AWS::AutoScaling::AutoScalingGroup).

So, while I haven’t tested this case with PoolMember specifically, and there 
may still be bugs, no more feature implementation should be necessary (I hope).

--
Christopher Armstrong
IRC: radix



On March 12, 2014 at 1:52:53 PM, Fox, Kevin M 
(kevin@pnnl.gov) wrote:

I submitted a blueprint a while back that I think is relevant:

https://blueprints.launchpad.net/heat/+spec/elasticloadbalancing-lbaas

Currently heat autoscaling doesn't interact with Neutron lbaas and the 
configurable bits aren't configurable enough to allow it without code changes 
as far as I can tell.

I think it's only a few days of work, but the OpenStack CLA is preventing me 
from contributing. :/

Thanks,
Kevin


From: Mike Spreitzer [mspre...@us.ibm.com]
Sent: Wednesday, March 12, 2014 11:34 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat][neutron] OS::Heat::AutoScalingGroup and 
OS::Neutron::PoolMember?

Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a nested 
stack that includes a OS::Neutron::PoolMember?  Should I expect this to work?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-13 Thread W Chan
On the transport variable, the problem I see isn't with passing the
variable to the engine and executor.  It's passing the transport into the
API layer.  The API layer is a pecan app and I currently don't see a way
where the transport variable can be passed to it directly.  I'm looking at
https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 and
https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44.
 Do you have any suggestion?  Thanks.
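
For what it's worth, the factory-method idea might look roughly like the
sketch below (the module and function names are invented for illustration;
the only real API used is oslo.messaging's get_transport()):

    # mistral/rpc.py (hypothetical) - share one transport object between the
    # pecan API app, the engine and the executor when they run in one process,
    # so the fake driver's in-process queue is visible to all of them.
    from oslo import messaging
    from oslo.config import cfg

    _TRANSPORT = None

    def get_transport():
        global _TRANSPORT
        if _TRANSPORT is None:
            _TRANSPORT = messaging.get_transport(cfg.CONF)
        return _TRANSPORT

The alternative discussed below is to create the transport once in the launch
script and inject it explicitly into each component.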


On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov wrote:

>
> On 13 Mar 2014, at 10:40, W Chan  wrote:
>
>
>- I can write a method in base test to start local executor.  I will
>do that as a separate bp.
>
> Ok.
>
>
>- After the engine is made standalone, the API will communicate to the
>engine and the engine to the executor via the oslo.messaging transport.
> This means that for the "local" option, we need to start all three
>components (API, engine, and executor) on the same process.  If the long
>term goal as you stated above is to use separate launchers for these
>components, this means that the API launcher needs to duplicate all the
>logic to launch the engine and the executor. Hence, my proposal here is to
>move the logic to launch the components into a common module and either
>have a single generic launch script that launch specific components based
>on the CLI options or have separate launch scripts that reference the
>appropriate launch function from the common module.
>
> Ok, I see your point. Then I would suggest we have one script which we
> could use to run all the components (any subset of them). So for those
> components we specified when launching the script we use this local
> transport. Btw, scheduler eventually should become a standalone component
> too, so we have 4 components.
>
>
>- The RPC client/server in oslo.messaging do not determine the
>transport.  The transport is determine via oslo.config and then given
>explicitly to the RPC client/server.
>
> https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31and
>
> https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63are
>  examples for the client and server respectively.  The in process Queue
>is instantiated within this transport object from the fake driver.  For the
>"local" option, all three components need to share the same transport in
>order to have the Queue in scope. Thus, we will need some method to have
>this transport object visible to all three components and hence my proposal
>to use a global variable and a factory method.
>
> I'm still not sure I follow your point here.. Looking at the links you
> provided I see this:
>
> transport = messaging.get_transport(cfg.CONF)
>
> So my point here is we can make this call once in the launching script and
> pass it to engine/executor (and now API too if we want it to be launched by
> the same script). Of course, we'll have to change the way how we initialize
> these components, but I believe we can do it. So it's just a dependency
> injection. And in this case we wouldn't need to use a global variable. Am I
> still missing something?
>
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo incubator deprecation policy tweak

2014-03-13 Thread Doug Hellmann
The Oslo team is working hard to move code from the incubator into
libraries, and that work will speed up during Juno. As part of the
planning, we have been developing our deprecation policy for code in the
oslo-incubator repository. We recognize that it may take some projects
longer than others to adopt the new libraries, but we need to balance the
need for long-term support with the amount of effort it requires to
maintain multiple copies of the code.

We have, during icehouse, been treating the master branch of oslo-incubator
as the "stable" branch for oslo.messaging. In practice, that has meant
refusing new features in the incubator copy of the rpc code and requiring
bug fixes to land in oslo.messaging first. This policy is described in the
wiki (https://wiki.openstack.org/wiki/Oslo#Graduation):

After the first release of the new library, the status of the module(s)
should be updated to "Obsolete." During this phase, only critical bug fixes
will be allowed in the incubator version of the code. New features and
minor bugs should be fixed in the released library, and effort should be
spent focusing on having downstream projects consume the library.

After all integrated projects that use the code are using the library
instead of the incubator, the module(s) can be deleted from the incubator.

We would like to clarify the first part, and add a time limit to the second
part:

After the first release of the new library, the status of the module(s)
should be updated to "Obsolete." During this phase, only critical bug fixes
will be allowed in the incubator version of the code. All changes should be
proposed first to the new library repository, and then bug fixes can be
back-ported to the incubator. New features and minor bugs should be fixed
in the released library only, and effort should be spent focusing on having
downstream projects consume the library.

The incubator version of the code will be supported with critical bug
fixes for one full release cycle after the library graduates, and then be
deleted. If all integrated projects using the module(s) update to use the
library before this time period, the module(s) may be deleted early. Old
versions will be maintained in the stable branches of the incubator under
the usual long-term deprecation policy.

I will update the wiki, but I also wanted to announce the change here on
the list so everyone is aware.


Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Sean Dague
On 03/13/2014 12:31 PM, Thomas Goirand wrote:
> On 03/12/2014 07:07 PM, Sean Dague wrote:
>> Because of where we are in the freeze, I think this should wait until
>> Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
>> I think is fine. I expect the rest of the issues can be addressed during
>> Juno 1.
>>
>>  -Sean
> 
> Sean,
> 
> No, it's not fine for me. I'd like things to be fixed so we can move
> forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
> will be released with SQLA 0.9 and with Icehouse, not Juno.

We're past freeze, and this requires deep changes in Nova DB to work. So
it's not going to happen. Nova provably does not work with SQLA 0.9, as
seen in Tempest tests.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Mark Washenberger
Hi Anna,


On Thu, Mar 13, 2014 at 8:36 AM, Anna A Sortland wrote:

> [A] The current keystone LDAP community driver returns all users that
> exist in LDAP via the API call v3/users, instead of returning just users
> that have role grants (similar processing is true for groups). This could
> potentially be a very large number of users. We have seen large companies
> with LDAP servers containing hundreds and thousands of users. We are aware
> of the filters available in keystone.conf ([ldap].user_filter and
> [ldap].query_scope) to cut down on the number of results, but they do not
> provide sufficient filtering (for example, it is not possible to set
> user_filter to members of certain known groups for OpenLDAP without
> creating a memberOf overlay on the LDAP server).
>
> [Nathan Kinder] What attributes would you filter on?  It seems to me that
> LDAP would need to have knowledge of the roles to be able to filter based
> on the roles.  This is not necessarily the case, as identity and assignment
> can be split in Keystone such that identity is in LDAP and role assignment
> is in SQL.  I believe it was designed this way to deal with deployments
> where LDAP already exists and there is no need (or possibility) of adding
> role info into LDAP.
>
> [A] That's our main use case. The users and groups are in LDAP and role
> assignments are in SQL.
> You would filter on role grants and this information is in SQL backend. So
> new API would need to query both identity and assignment drivers.
>

From my perspective, it seems there is a chicken-and-egg problem with this
proposal. If a user doesn't have a role assigned, the user does not show up
in the list. But if the user doesn't show up in the list, the user doesn't
exist. If the user doesn't exist, you cannot add a role to it.

Perhaps what is needed is just some sort of filter to listing users that
only returns users with a role in the cloud?



>
> [Nathan Kinder] Without filtering based on a role attribute in LDAP, I
> don't think that there is a good solution if you have OpenStack and
> non-OpenStack users mixed in the same container in LDAP.
> If you want to first find all of the users that have a role assigned to
> them in the assignments backend, then pull their information from LDAP, I
> think that you will end up with one LDAP search operation per user. This
> also isn't a very scalable solution.
>
> [A] What was the reason the LDAP driver was written this way, instead of
> returning just the users that have OpenStack-known roles? Was the creation
> of a separate API for this function considered?
> Are other exploiters of OpenStack (or users of Horizon) experiencing this
> issue? If so, what was their approach to overcome this issue? We have been
> prototyping a keystone extension that provides an API that provides this
> filtering capability, but it seems like a function that should be generally
> available in keystone.
>
> [Nathan Kinder] I'm curious to know how your prototype is looking to
> handle this.
>
> [A] The prototype basically first calls assignment API
> list_role_assignments() to get a list of users and groups with role grants.
> It then iterates the retrieved list and calls identity API
> list_users_in_group() to get the list of users in these groups with grants
> and get_user() to get users that have role grants but do not belong to the
> groups with role grants (a call for each user). Both calls ignore groups
> and users that are not found in the LDAP registry but exist in SQL (this
> could be the result of a user or group being removed from LDAP, but the
> corresponding role grant was not revoked). Then the code removes duplicates
> if any and returns the combined list.
>
> The new extension API is /v3/my_new_extension/users. Maybe the better
> naming would be v3/roles/users (list users with any role) - compare to
> existing v3/roles/{role_id}/users  (list users with a specified role).
>
> Another alternative that we've tried is just a new identity driver that
> inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides
> just the list_users() function. That's probably not the best approach from
> OpenStack standards point of view but I would like to get community's
> feedback on whether this is acceptable.
>
>
> I've posted this question to openstack-security last week but could not
> get any feedback after Nathan's first reply. Reposting to openstack-dev..
>
>
>
> Anna Sortland
> Cloud Systems Software Development
> IBM Rochester, MN
> annas...@us.ibm.com
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest][identity] Tempest

2014-03-13 Thread Ruslan Kiianchuk
Hello, community!

I'm trying to use Tempest to run tests on a DevStack environment that is set
up to use Identity API v3. Keystone uses policy.v3cloudsample.json
to enable domain and cloud admins. The environment itself works fine (VMs
boot successfully, etc).

However, when I run tempest, the api.identity.admin.v3 tests fail with a
403 Forbidden error, which happens during policy enforcement:

==
FAIL:
tempest.api.identity.admin.v3.test_tokens.UsersTestXML.test_tokens[gate,smoke]
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-03-13 16:39:14,225 Request: POST http://127.0.0.1:5000/v2.0/tokens
2014-03-13 16:39:14,717 Response Status: 200
2014-03-13 16:39:14,718 Request: POST http://127.0.0.1:35357/v3/users
2014-03-13 16:39:14,736 Response Status: 403
}}}

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/identity/admin/v3/test_tokens.py",
line 37, in test_tokens
email=u_email)
  File
"/opt/stack/tempest/tempest/services/identity/v3/xml/identity_client.py",
line 100, in create_user
self.headers)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 302, in post
return self.request('POST', url, headers, body)
  File
"/opt/stack/tempest/tempest/services/identity/v3/xml/identity_client.py",
line 80, in request
body=body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 436, in
request
resp, resp_body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 478, in
_error_checker
raise exceptions.Unauthorized()
Unauthorized: Unauthorized

It seems that the user credentials passed to the policy engine do not contain
domain_id for some reason. Has anyone faced a similar problem or can point
me in the right direction for resolving this?

Thank you.

-- 
Sincerely, Ruslan Kiianchuk.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread Marco Fargetta
Hi Chmouel,

using this approach, would I need to have the same users in both keystones?

Is there any way to map user A from cloud X to user B in cloud Y?

Our clouds have different users, and replicating the keystone could create
some problems, not only technical ones.

Cheers,
Marco

On Thu, Mar 13, 2014 at 06:19:29PM +0100, Chmouel Boudjnah wrote:
> You may be interested by this project as well :
> 
> https://github.com/stackforge/swiftsync
> 
> you would need to replicate your keystone in both directions via mysql replication
> or something like this (and have endpoint url changed as well obviously
> there).
> 
> Chmouel
> 
> 
> 
> On Thu, Mar 13, 2014 at 5:25 PM, Marco Fargetta
> wrote:
> 
> > Thanks Donagh,
> >
> > I will take a look at the container-to-container synchronization to
> > understand if it fits with my scenario.
> >
> > Cheers,
> > Marco
> >
> > On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
> > > Marco,
> > >
> > > The replication *inside* Swift is not intended to move data between two
> > different Swift instances -- it's an internal data repair and rebalance
> > mechanism.
> > >
> > > However, there is a different mechanism, called container-to-container
> > synchronization that might be what you are looking for. It will sync two
> > containers in different swift instances. The swift instances may be in
> > different Keystone administrative domains -- the authentication is not
> > based on Keystone. It does require that each swift instance be configured
> > to "recognise" each other. However, this is only usable for low update
> > rates.
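
(For reference, a sync pairing of the kind described above is configured per
container; with python-swiftclient it might look roughly like this - the URLs
and the sync key are placeholders:)

    from swiftclient import client

    # Point container "backup" in cloud X at container "backup" in cloud Y.
    # The same X-Container-Sync-Key must also be set on the cloud Y container
    # (and its X-Container-Sync-To pointed back at cloud X for two-way sync).
    client.put_container(
        'https://cloud-x.example.com/v1/AUTH_tenant', 'TOKEN_X', 'backup',
        headers={
            'X-Container-Sync-To': 'https://cloud-y.example.com/v1/AUTH_tenant/backup',
            'X-Container-Sync-Key': 'shared-secret',
        })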
> > >
> > > Regards,
> > > Donagh
> > >
> > > -Original Message-
> > > From: Fargetta Marco [mailto:marco.farge...@ct.infn.it]
> > > Sent: 13 March 2014 11:24
> > > To: OpenStack Development Mailing List
> > > Subject: [openstack-dev] [swift] Replication multi cloud
> > >
> > > Hi all,
> > >
> > > we would use the replication mechanism in swift to replicate the data in
> > two swift instances deployed in different clouds with different keystones
> > and administrative domains.
> > >
> > > Is this possible with the current replication facilities, or should they
> > > stay in the same cloud sharing the keystone?
> > >
> > > Cheers,
> > > Marco
> > >
> > >
> > >
> > > --
> > > 
> > > Eng. Marco Fargetta, PhD
> > >
> > > Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy
> > >
> > > EMail: marco.farge...@ct.infn.it
> > > 
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > 
> > Eng. Marco Fargetta, PhD
> >
> > Istituto Nazionale di Fisica Nucleare (INFN)
> > Catania, Italy
> >
> > EMail: marco.farge...@ct.infn.it
> > 
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread Joe Gordon
On Thu, Mar 13, 2014 at 7:50 AM, Joe Hakim Rahme <
joe.hakim.ra...@enovance.com> wrote:

> On 10 Mar 2014, at 22:54, David Kranz  wrote:
>
> > There are a number of patches up for review that make various changes to
> use "six" apis instead of Python 2 constructs. While I understand the
> desire to get a head start on getting Tempest to run in Python 3, I'm not
> sure it makes sense to do this work piecemeal until we are near ready to
> introduce a py3 gate job. Many contributors will not be aware of what all
> the differences are and py2-isms will creep back in resulting in more
> overall time spent making these changes and reviewing. Also, the core
> review team is busy trying to do stuff important to the icehouse release
> which is barely more than 5 weeks away. IMO we should hold off on various
> kinds of "cleanup" patches for now.
>
> +1 I agree with you David.
>
> However, what's the best way we can go about making sure to make this a
> goal for the next release cycle?
>

On a related note, we have been -2ing these patches in nova until there is
a plan to get all the dependencies python3 compatible.
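
(For context, the patches in question are mechanical rewrites of Python-2-only
idioms into their six equivalents, along these lines:)

    import six

    mapping = {'a': 1, 'b': 2}

    # Python 2 only:
    #     for key, value in mapping.iteritems():
    # Portable across Python 2 and 3 via six:
    for key, value in six.iteritems(mapping):
        print(key, value)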


>
> ---
> Joe H. Rahme
> IRC: rahmu
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] [zeromq] nova-rpc-zmq-receiver bottleneck

2014-03-13 Thread yatin kumbhare
Hello Folks,

When zeromq is used as the rpc backend, the "nova-rpc-zmq-receiver" service needs to
be run on every node.

zmq-receiver receives messages on tcp://*:9501 with socket type PULL and,
based on the topic name (which is extracted from the received data), it forwards
the data to the respective local services over the IPC protocol.

Meanwhile, the openstack services listen/bind on an "IPC" socket with socket type
PULL.

I see zmq-receiver as a bottleneck and an overhead in the current design:
1. if this service crashes, communication is lost.
2. there is the overhead of running this extra service on every node, which just
forwards messages as is.


I'm looking to remove the zmq-receiver service and enable direct
communication (nova-* and cinder-*) across and within nodes.

I believe this will make the zmq experience more seamless.

The communication will change from IPC to a zmq TCP socket for each
service.

For example, an rpc.cast from scheduler to compute would become direct rpc message
passing, with no routing through zmq-receiver.

Now, with the TCP protocol, all services will bind to a unique port (the port range
could be, say, 9501-9510).

from nova.conf, rpc_zmq_matchmaker =
nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing.

I have put arbitrary port numbers after the service name.

file:///etc/oslo/matchmaker_ring.json

{
 "cert:9507": [
 "controller"
 ],
 "cinder-scheduler:9508": [
 "controller"
 ],
 "cinder-volume:9509": [
 "controller"
 ],
 "compute:9501": [
 "controller","computenodex"
 ],
 "conductor:9502": [
 "controller"
 ],
 "consoleauth:9503": [
 "controller"
 ],
 "network:9504": [
 "controller","computenodex"
 ],
 "scheduler:9506": [
 "controller"
 ],
 "zmq_replies:9510": [
 "controller","computenodex"
 ]
 }

Here, the json file keeps track of the port for each service.
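
A rough sketch of how a service could derive its TCP bind endpoint from that
ring file under this proposal (purely illustrative, not existing oslo code):

    import json

    def bind_address(service_name, ring_path='/etc/oslo/matchmaker_ring.json'):
        """Look up the port assigned to a service and build its bind URL."""
        with open(ring_path) as f:
            ring = json.load(f)
        for key in ring:
            name, _, port = key.partition(':')
            if name == service_name:
                return 'tcp://*:%s' % port
        raise LookupError('no ring entry for %s' % service_name)

    # e.g. the scheduler would bind on tcp://*:9506 and peers would connect
    # to tcp://<host>:9506 directly, with no zmq-receiver in between.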

Looking forward to community feedback on this idea.


Regards,
Yatin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Sean Dague
On 03/13/2014 12:42 PM, Dan Smith wrote:
>> Because of where we are in the freeze, I think this should wait
>> until Juno opens to fix. Icehouse will only be compatible with
>> SQLA 0.8, which I think is fine. I expect the rest of the issues
>> can be addressed during Juno 1.
> 
> Agreed. I think we have some other things to check before we make this
> move, like how we currently check to see if something is loaded in a
> SQLA object. ISTR it changed between 0.8 and 0.9 and so likely tests
> would not fail, but we'd lazy load a bunch of stuff that we didn't
> intend to.
> 
> Even without that, I think it's really way too late to make such a switch.
> 
> --Dan

Yeh, the initial look at Tempest failures wasn't terrible once I fixed a
ceilometer issue. However, something is definitely different about delete
semantics, enough to make us fail a bunch of Nova Tempest tests.

That seems dangerous to address during freeze.

I consider this something which should be dealt with in Juno 1 though,
as I'm very interested in whether the new optimizer in sqla 0.9 helps us
on performance.
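
For reference, the loaded-attribute check Dan mentions can be done against
the SQLAlchemy inspection API (present in both 0.8 and 0.9); this is only a
sketch, not the nova code:

# Sketch only: check whether an attribute is already loaded on an ORM
# instance without triggering a lazy load.
from sqlalchemy import inspect

def attr_is_loaded(obj, name):
    return name not in inspect(obj).unloaded

# e.g. guard with attr_is_loaded(instance, 'metadata') before touching
# instance.metadata, so a behaviour change between 0.8 and 0.9 shows up as
# an explicit failure instead of a silent extra query.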

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Mike Wilson
For some guests we use the LVM imagebackend and there are times when the
guest is deleted by accident. Humans, being what they are, don't back up
their files and don't take care of important data, so it is not uncommon to
use lvrestore and "undelete" an instance so that people can get their data.
Of course, this is not always possible if the data has been subsequently
overwritten. But it is common enough that I imagine most of our operators
are familiar with how to do it. So I guess my saying that we do it on a
regular basis is not quite accurate. Probably would be better to say that
it is not uncommon to do this, but definitely not a daily task or something
of that ilk.

I have personally "undeleted" an instance a few times after accidental
deletion also. I can't remember the specifics, but I do remember doing it
:-).

-Mike


On Tue, Mar 11, 2014 at 12:46 PM, Johannes Erdfelt wrote:

> On Tue, Mar 11, 2014, Mike Wilson  wrote:
> > Undeleting things is an important use case in my opinion. We do this in
> our
> > environment on a regular basis. In that light I'm not sure that it would
> be
> > appropriate just to log the deletion and get rid of the row. I would like
> > to see it go to an archival table where it is easily restored.
>
> I'm curious, what are you undeleting and why?
>
> JE
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread Chmouel Boudjnah
You may be interested by this project as well :

https://github.com/stackforge/swiftsync

you would need to replicate your keystone in both way via mysql replication
or something like this (and have endpoint url changed as well obviously
there).

Chmouel



On Thu, Mar 13, 2014 at 5:25 PM, Marco Fargetta
wrote:

> Thanks Donagh,
>
> I will take a look at the container-to-container synchronization to
> understand if it fits with my scenario.
>
> Cheers,
> Marco
>
> On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
> > Marco,
> >
> > The replication *inside* Swift is not intended to move data between two
> different Swift instances -- it's an internal data repair and rebalance
> mechanism.
> >
> > However, there is a different mechanism, called container-to-container
> synchronization that might be what you are looking for. It will sync two
> containers in different swift instances. The swift instances may be in
> different Keystone administrative domains -- the authentication is not
> based on Keystone. It does require that each swift instance be configured
> to "recognise" each other. However, this is only usable for low update
> rates.
> >
> > Regards,
> > Donagh
> >
> > -Original Message-
> > From: Fargetta Marco [mailto:marco.farge...@ct.infn.it]
> > Sent: 13 March 2014 11:24
> > To: OpenStack Development Mailing List
> > Subject: [openstack-dev] [swift] Replication multi cloud
> >
> > Hi all,
> >
> > we would use the replication mechanism in swift to replicate the data in
> two swift instances deployed in different clouds with different keystones
> and administrative domains.
> >
> > Is this possible with the current replication facilities or should they
> stay in the same cloud sharing the keystone?
> >
> > Cheers,
> > Marco
> >
> >
> >
> > --
> > 
> > Eng. Marco Fargetta, PhD
> >
> > Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy
> >
> > EMail: marco.farge...@ct.infn.it
> > 
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> 
> Eng. Marco Fargetta, PhD
>
> Istituto Nazionale di Fisica Nucleare (INFN)
> Catania, Italy
>
> EMail: marco.farge...@ct.infn.it
> 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Solly Ross
@Monty: having a packaging system sounds like a good idea.  Send us a pull 
request on github.com/kanaka/noVNC.

Best Regards,
Solly Ross

- Original Message -
From: "Monty Taylor" 
To: "Sean Dague" , "OpenStack Development Mailing List (not for 
usage questions)" , openst...@nemebean.com
Cc: openstack-in...@lists.openstack.org
Sent: Thursday, March 13, 2014 12:09:01 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka

I agree.

Solly - in addition to potentially 'adopting' noVNC - or as a parallel 
train of thought ...

As we started working on storyboard in infra, we've started using the 
bower tool for html/javascript packaging - and we have some ability to 
cache the output of that pretty easily. Would you accept patches to 
noVNC to add bower config things and/or publication of tarballs of 
releases via it? Since noVNC isn't likely to be participating in the 
integrated gate in either case, we could potentially split the question 
of "how do we get copies of it in a way that doesn't depend on OS 
distros" (which is why we use pip for our python depends) and "does 
noVNC want to have its git repo exist in OpenStack Infra systems?"

Monty

On 03/13/2014 07:44 AM, Sean Dague wrote:
> I think a bigger question is why are we using a git version of something
> outside of OpenStack.
>
> Where is a noVNC release we can point to and use?
>
> In Juno I'd really be pro removing all the devstack references to git
> repos not on git.openstack.org, because these kinds of failures have
> real impact.
>
> Currently we have 4 repositories that fit this bill:
>
> SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
> NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
> RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
> SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}
>
> I think all of these probably need to be removed from devstack. We
> should be using release versions (preferably in distros, though allowed
> to be in language specific package manager).
>
>   -Sean
>
> On 03/13/2014 10:26 AM, Solly Ross wrote:
>> @bnemec: I don't think that's been considered.  I'm actually one of the 
>> upstream maintainers for noVNC.  The only concern that I'd have with 
>> OpenStack adopting noVNC (there are other maintainers, as well as the 
>> author, so I'd have to check with them as well) is that there are a few 
>> other projects that use noVNC, so we'd need to make sure that no 
>> OpenStack-specific code gets merged into noVNC if we adopt it.  Other than 
>> that, though, adopting noVNC doesn't sound like a horrible idea.
>>
>> Best Regards,
>> Solly Ross
>>
>> - Original Message -
>> From: "Ben Nemec" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Cc: openstack-in...@lists.openstack.org
>> Sent: Wednesday, March 12, 2014 3:38:19 PM
>> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
>> noVNC from github.com/kanaka
>>
>>
>>
>> On 2014-03-11 20:34, Joshua Harlow wrote:
>>
>>
>> https://status.github.com/messages
>> * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
>> mitigations we have in place are proving effective in protecting us and 
>> we're hopeful that we've got this one resolved.'
>> If you were cloning from github.org and not http://git.openstack.org then 
>> you were likely seeing some of the DDoS attack in action.
>> Unfortunately I don't think novnc is in git.openstack.org because it's not 
>> an OpenStack project. I wonder if we should investigate adopting it (if the 
>> author(s) are amenable to that) since we're using the git version of it. 
>> Maybe that's already been considered and I just don't know about it. :-)
>> -Ben
>>
>>
>>
>> From: Sukhdev Kapur < sukhdevka...@gmail.com >
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < 
>> openstack-dev@lists.openstack.org >
>> Date: Tuesday, March 11, 2014 at 4:08 PM
>> To: "Dane Leblanc (leblancd)" < lebla...@cisco.com >
>> Cc: "OpenStack Development Mailing List (not for usage questions)" < 
>> openstack-dev@lists.openstack.org >, " openstack-in...@lists.openstack.org " 
>> < openstack-in...@lists.openstack.org >
>> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
>> noVNC from github.com/kanaka
>>
>>
>>
>> I have noticed that even clone of devstack has failed few times within last 
>> couple of hours - it was running fairly smooth so far.
>> -Sukhdev
>>
>>
>> On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < sukhdevka...@gmail.com > 
>> wrote:
>>
>>
>>
>> [adding openstack-dev list as well ]
>> I have noticed that this has stated hitting my builds within last few hours. 
>> I have noticed exact same failures on almost 10 builds.
>> Looks like something has happened within last few hours - perhaps the load?
>> -Sukhdev
>>
>>
>> On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) < 

[openstack-dev] [MagnetoDB] MagnetoDB API draft

2014-03-13 Thread Aleksandr Chudnovets
Hi all,

Here is the draft for MagnetoDB API:
https://wiki.openstack.org/wiki/MagnetoDB/api

Your comments and suggestions are welcome. You are also welcome to discuss this
draft and any other KeyValue-aaS-related subjects in our IRC channel:
#magnetodb. Please note that the MagnetoDB team is mostly in UTC+2.

Best regards,
Alexander Chudnovets
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Can I use a new plugin based on Ml2Plugin instead of Ml2Plugin as core_plugin

2014-03-13 Thread Nader Lahouti
-- edited the subject

I'm resending this question.
The issue is described in the email thread below. In brief, I need to load
new extensions, and it seems the mechanism driver does not support that. In
order to do that I was thinking of having a new ML2 plugin based on the
existing Ml2Plugin, adding my changes there, and using it as the core_plugin.
Please read the email thread; I would be glad to have your suggestions.
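
To make it concrete, something along these lines is what I had in mind; the
class name and the "myvendor" alias are made up, and extending
_supported_extension_aliases is an assumption based on the plugin.py list
referenced later in this thread:

# Rough sketch only -- class name and "myvendor" alias are hypothetical.
from neutron.plugins.ml2 import plugin as ml2_plugin

class MyVendorMl2Plugin(ml2_plugin.Ml2Plugin):
    # expose one extra extension alias on top of the ML2 defaults
    _supported_extension_aliases = (
        ml2_plugin.Ml2Plugin._supported_extension_aliases + ["myvendor"])

    def create_network(self, context, network):
        result = super(MyVendorMl2Plugin, self).create_network(context, network)
        # vendor-specific post-processing of the result would go here
        return result

# neutron.conf would then point core_plugin at this class instead of Ml2Plugin.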


On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti wrote:

> 1) Does it mean an interim solution is to have our own plugin (and have
> all the changes in it) and declare it as core_plugin instead of Ml2Plugin?
>
> 2) The other issue as I mentioned before, is that the extension(s) is not
> showing up in the result, for instance when create_network is called
> [*result = super(Ml2Plugin, self).create_network(context, network)]*, and
> as a result they cannot be used in the mechanism drivers when needed.
>
> Looks like process_extensions was disabled when the fix for Bug 1201957
> was committed; here is the change:
> Any idea why it is disabled?
>
> --
> Avoid performing extra query for fetching port security binding
>
> Bug 1201957
>
>
> Add a relationship performing eager load in Port and Network
>
> models, thus preventing the 'extend' function from performing
>
> an extra database query.
>
> Also fixes a comment in securitygroups_db.py
>
>
> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
>
> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf (parent ca421e7)
> Salvatore Orlando (salv-orlando), authored 8 months ago
>
> neutron/db/db_base_plugin_v2.py:
>
> @@ -995,7 +995,7 @@ def create_network(self, context, network):
>   995            'status': constants.NET_STATUS_ACTIVE}
>   996        network = models_v2.Network(**args)
>   997        context.session.add(network)
>   998 -      return self._make_network_dict(network)
>   998 +      return self._make_network_dict(network,
>                                             process_extensions=False)
>   999
>  1000    def update_network(self, context, id, network):
>  1001        n = network['network']
>
> ---
>
>
> Regards,
> Nader.
>
>
>
>
>
> On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura wrote:
>
>>
>> On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
>>
>> Yes, that sounds good to be able to load extensions from a mechanism
>> driver.
>>
>> But another problem I think we have with ML2 plugin is the list
>> extensions supported by default [1].
>> The extensions should only load by MD and the ML2 plugin should only
>> implement the Neutron core API.
>>
>>
>> Keep in mind that ML2 supports multiple MDs simultaneously, so no single
>> MD can really control what set of extensions are active. Drivers need to be
>> able to load private extensions that only pertain to that driver, but we
>> also need to be able to share common extensions across subsets of drivers.
>> Furthermore, the semantics of the extensions need to be correct in the face
>> of multiple co-existing drivers, some of which know about the extension,
>> and some of which don't. Getting this properly defined and implemented
>> seems like a good goal for juno.
>>
>> -Bob
>>
>>
>>
>>  Any though ?
>> Édouard.
>>
>>  [1]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87
>>
>>
>>
>> On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:
>>
>>> Hi,
>>>
>>> I think it is better to continue the discussion here. It is a good log
>>> :-)
>>>
>>> Eugine and I talked about the related topic (allowing drivers to load
>>> extensions) at the Icehouse Summit,
>>> but I could not find enough time to work on it during Icehouse.
>>> I am still interested in implementing it and will register a blueprint
>>> on it.
>>>
>>> etherpad in icehouse summit has baseline thought on how to achieve it.
>>> https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
>>> I hope it is a good start point of the discussion.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>> On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
>>> wrote:
>>> > Hi Kyle,
>>> >
>>> > Just wanted to clarify: Should I continue using this mailing list to
>>> post my
>>> > question/concerns about ML2? Please advise.
>>> >
>>> > Thanks,
>>> > Nader.
>>> >
>>> >
>>> >
>>> > On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery <
>>> mest...@noironetworks.com>
>>> > wrote:
>>> >>
>>> >> Thanks Edgar, I think this is the appropriate place to continue this
>>> >> discussion.
>>> >>
>>> >>
>>> >> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana 
>>> wrote:
>>> >>>
>>> >>> Nader,
>>> >>>
>>> >>> I would encourage you to first discuss the possible extension with
>>> the
>>> >>> ML2 team. Rober and Kyle are leading this effort and they have a IRC
>>> meeting
>>> >>> every week:
>>> >>>
>>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>>> >>>
>>> >>> Bring your concerns on this meeting and get the right feedback.
>>> >>>
>>> >>> Thanks,
>>> >>>
>>> >>> Edgar
>>> >>>
>>> >>> From: Nader Lahouti 
>>> >>> Reply-To: OpenStack Lis

[openstack-dev] [MagnetoDB] Weekly meeting summary

2014-03-13 Thread Ilya Sviridov
Hello openstackers,

You can find MagnetoDB team weekly meeting notes below

Meeting summary

   1. *General project status overview* (isviridov, 13:02:15)
   2. *MagnetoDB API Draft status* (isviridov, 13:08:37)
      1. https://wiki.openstack.org/wiki/MagnetoDB/api (isviridov, 13:09:28)
      2. ACTION: achudnovets start ML thread with API discussion
         (isviridov, 13:13:19)
      3. https://launchpad.net/magnetodb/+milestone/2.0.1 (isviridov, 13:14:24)
   3. *Third party CI status* (isviridov, 13:14:41)
      1. https://blueprints.launchpad.net/magnetodb/+spec/third-party-ci
         (isviridov, 13:16:39)
      2. ACTION: achuprin discuss with infra the best way for our CI
         (isviridov, 13:27:36)
      3. ACTION: achuprin create wiki page with CI description
         (isviridov, 13:28:01)
   4. *Support of other database backends except Cassandra. Support of HBase*
      (isviridov, 13:29:24)
      1. ACTION: isviridov ikhudoshyn start mail thread about evaluation of
         other databases as backend for MagnetoDB (isviridov, 13:38:16)
   5. *Devstack integration status* (isviridov, 13:38:35)
      1. https://blueprints.launchpad.net/magnetodb/+spec/devstack-integration
         (isviridov, 13:39:07)
      2. https://github.com/pcmanus/ccm (vnaboichenko, 13:40:13)
      3. ACTION: vnaboichenko devstack integration guide in OpenStack wiki
         (isviridov, 13:42:15)
   6. *Weekly meeting time slot* (isviridov, 13:42:33)
      1. ACTION: isviridov find better time slot for meeting (isviridov, 13:44:47)
      2. ACTION: isviridov start ML voting meeting time (isviridov, 13:45:05)
   7. *Open discussion* (isviridov, 13:45:31)


For more details, please follow the links

Minutes:
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.txt
Log:
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.log.html


Have a nice day,
Ilya Sviridov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Dan Smith

> Because of where we are in the freeze, I think this should wait 
> until Juno opens to fix. Icehouse will only be compatible with
> SQLA 0.8, which I think is fine. I expect the rest of the issues
> can be addressed during Juno 1.

Agreed. I think we have some other things to check before we make this
move, like how we currently check to see if something is loaded in a
SQLA object. ISTR it changed between 0.8 and 0.9 and so likely tests
would not fail, but we'd lazy load a bunch of stuff that we didn't
intend to.

Even without that, I think it's really way too late to make such a switch.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Thomas Goirand
On 03/12/2014 07:07 PM, Sean Dague wrote:
> Because of where we are in the freeze, I think this should wait until
> Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
> I think is fine. I expect the rest of the issues can be addressed during
> Juno 1.
> 
>   -Sean

Sean,

No, it's not fine for me. I'd like things to be fixed so we can move
forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
will be released SQLA 0.9 and with Icehouse, not Juno.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread Marco Fargetta
Thanks Donagh,

I will take a look at the container-to-container synchronization to understand
if it fits with my scenario.

Cheers,
Marco

On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
> Marco,
> 
> The replication *inside* Swift is not intended to move data between two 
> different Swift instances -- it's an internal data repair and rebalance 
> mechanism.
> 
> However, there is a different mechanism, called container-to-container 
> synchronization that might be what you are looking for. It will sync two 
> containers in different swift instances. The swift instances may be in 
> different Keystone administrative domains -- the authentication is not based 
> on Keystone. It does require that each swift instance be configured to 
> "recognise" each other. However, this is only usable for low update rates.
> 
> Regards,
> Donagh
> 
> -Original Message-
> From: Fargetta Marco [mailto:marco.farge...@ct.infn.it] 
> Sent: 13 March 2014 11:24
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [swift] Replication multi cloud
> 
> Hi all,
> 
> we would use the replication mechanism in swift to replicate the data in two 
> swift instances deployed in different clouds with different keystones and 
> administrative domains.
> 
> Is this possible with the current replication facilities or should they stay
> in the same cloud sharing the keystone?
> 
> Cheers,
> Marco
> 
> 
> 
> --
> 
> Eng. Marco Fargetta, PhD
> 
> Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy
> 
> EMail: marco.farge...@ct.infn.it
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Eng. Marco Fargetta, PhD
 
Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-13 Thread James Slagle
On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
 wrote:
> So we already have pretty high requirements - it's basically a 16G
> workstation as a minimum.
>
> Specifically to test the full story:
>  - a seed VM
>  - an undercloud VM (bm deploy infra)
>  - 1 overcloud control VM
>  - 2 overcloud hypervisor VMs
> 
>5 VMs with 2+G RAM each.
>
> To test the overcloud alone against the seed we save 1 VM, to skip the
> overcloud we save 3.
>
> However, as HA matures we're about to add 4 more VMs: we need a HA
> control plane for both the under and overclouds:
>  - a seed VM
>  - 3 undercloud VMs (HA bm deploy infra)
>  - 3 overcloud control VMs (HA)
>  - 2 overcloud hypervisor VMs
> 
>9 VMs with 2+G RAM each == 18GB
>
> What should we do about this?
>
> A few thoughts to kick start discussion:
>  - use Ironic to test across multiple machines (involves tunnelling
> brbm across machines, fairly easy)
>  - shrink the VM sizes (causes thrashing)
>  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
>  - make the default configuration inline the hypervisors on the
> overcloud with the control plane:
>- a seed VM
>- 3 undercloud VMs (HA bm deploy infra)
>- 3 overcloud all-in-one VMs (HA)
>   
>  7 VMs with 2+G RAM each == 14GB
>
>
> I think its important that we exercise features like HA and live
> migration regularly by developers, so I'm quite keen to have a fairly
> solid systematic answer that will let us catch things like bad
> firewall rules on the control node preventing network tunnelling
> etc... e.g. we benefit the more things are split out like scale
> deployments are. OTOH testing the micro-cloud that folk may start with
> is also a really good idea


The idea I was thinking was to make a testenv host available to
tripleo atc's. Or, perhaps make it a bit more locked down and only
available to a new group of tripleo folk, existing somewhere between
the privileges of tripleo atc's and tripleo-cd-admins.  We could
document how you use the cloud (Red Hat's or HP's) rack to start up an
instance to run devtest on one of the compute hosts, request and lock
yourself a testenv environment on one of the testenv hosts, etc.
Basically, how our CI works. Although I think we'd want different
testenv hosts for development vs what runs the CI, and would need to
make sure everything was locked down appropriately security-wise.

Some other ideas:

- Allow an option to get rid of the seed VM, or make it so that you
can shut it down after the Undercloud is up. This only really gets rid
of 1 VM though, so it doesn't buy you much nor solve any long term
problem.

- Make it easier to see how you'd use virsh against any libvirt host
you might have lying around.  We already have the setting exposed, but
make it a bit more public and call it out more in the docs. I've
actually never tried it myself, but have been meaning to.

- I'm really reaching now, and this may be entirely unrealistic :),
but somehow use the fake baremetal driver and expose a mechanism to
let the developer specify the already setup undercloud/overcloud
environment ahead of time.
For example:
* Build your undercloud images with the vm element since you won't be
PXE booting it
* Upload your images to a public cloud, and boot instances for them.
* Use this new mechanism when you run devtest (presumably running from
another instance in the same cloud)  to say "I'm using the fake
baremetal driver, and here are the  IP's of the undercloud instances".
* Repeat steps for the overcloud (e.g., configure undercloud to use
fake baremetal driver, etc).
* Maybe it's not the fake baremetal driver, and instead a new driver
that is a noop for the pxe stuff, and the power_on implementation
powers on the cloud instances.
* Obviously if your aim is to test the pxe and disk deploy process
itself, this wouldn't work for you.
* Presumably said public cloud is OpenStack, so we've also achieved
another layer of "On OpenStack".


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Monty Taylor

I agree.

Solly - in addition to potentially 'adopting' noVNC - or as a parallel 
train of thought ...


As we started working on storyboard in infra, we've started using the 
bower tool for html/javascript packaging - and we have some ability to 
cache the output of that pretty easily. Would you accept patches to 
noVNC to add bower config things and/or publication of tarballs of 
releases via it? Since noVNC isn't likely to be participating in the 
integrated gate in either case, we could potentially split the question 
of "how do we get copies of it in a way that doesn't depend on OS 
distros" (which is why we use pip for our python depends) and "does 
noVNC want to have its git repo exist in OpenStack Infra systems?"


Monty

On 03/13/2014 07:44 AM, Sean Dague wrote:

I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in language specific package manager).

-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:

@bnemec: I don't think that's been considered.  I'm actually one of the 
upstream maintainers for noVNC.  The only concern that I'd have with OpenStack 
adopting noVNC (there are other maintainers, as well as the author, so I'd have 
to check with them as well) is that there are a few other projects that use 
noVNC, so we'd need to make sure that no OpenStack-specific code gets merged 
into noVNC if we adopt it.  Other than that, though, adopting noVNC doesn't 
sound like a horrible idea.

Best Regards,
Solly Ross

- Original Message -
From: "Ben Nemec" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: openstack-in...@lists.openstack.org
Sent: Wednesday, March 12, 2014 3:38:19 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning
noVNC from github.com/kanaka



On 2014-03-11 20:34, Joshua Harlow wrote:


https://status.github.com/messages
* 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
mitigations we have in place are proving effective in protecting us and we're 
hopeful that we've got this one resolved.'
If you were cloning from github.org and not http://git.openstack.org then you 
were likely seeing some of the DDoS attack in action.
Unfortunately I don't think novnc is in git.openstack.org because it's not an 
OpenStack project. I wonder if we should investigate adopting it (if the 
author(s) are amenable to that) since we're using the git version of it. Maybe 
that's already been considered and I just don't know about it. :-)
-Ben



From: Sukhdev Kapur < sukhdevka...@gmail.com >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" < 
openstack-dev@lists.openstack.org >
Date: Tuesday, March 11, 2014 at 4:08 PM
To: "Dane Leblanc (leblancd)" < lebla...@cisco.com >
Cc: "OpenStack Development Mailing List (not for usage questions)" < 
openstack-dev@lists.openstack.org >, " openstack-in...@lists.openstack.org " < 
openstack-in...@lists.openstack.org >
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka



I have noticed that even clone of devstack has failed few times within last 
couple of hours - it was running fairly smooth so far.
-Sukhdev


On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < sukhdevka...@gmail.com > wrote:



[adding openstack-dev list as well ]
I have noticed that this has stated hitting my builds within last few hours. I 
have noticed exact same failures on almost 10 builds.
Looks like something has happened within last few hours - perhaps the load?
-Sukhdev


On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) < lebla...@cisco.com > 
wrote:





Apologies if this is the wrong audience for this question...



I'm seeing intermittent failures running stack.sh whereby 'git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
errors. Below are 2 examples.



Is this a known issue? Are there any localrc settings which might help here?



Example 1:



2014-03-11 15:00:33.779 | + is_service_enabled n-novnc

2014-03-11 15:00:33.780 | + return 0

2014-03-11 15:00:33.781 | ++ trueorfalse False

2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False

2014-03-11 15:00:33.783 | + '[' False = True ']'

2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC

2014-03-11 15:00:33.785 | + git_c

[openstack-dev] [Horizon] Regarding bug/bp https://bugs.launchpad.net/horizon/+bug/1285298

2014-03-13 Thread Abishek Subramanian (absubram)
Hi all, Akihiro, David,

This is regarding the review for - https://review.openstack.org/#/c/76653/

Akihiro - Thanks for the review as always; as I mentioned in the review
comment, I completely agree with you. This is a small featurette.

However, it is small in that it adds, to a ChoiceField (an existing forms.py
attribute), an option that I had left out and which neutron supports.
In addition, I also had to add some code to my clean routine and, yes,
update the string in the create description to include this new option.
I have more test code, really, than actual code.
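
To give a feel for the size of the change, it is roughly of this shape (the
form, field, and option names below are made up, not the actual patch):

# Illustrative only -- names are hypothetical, not the real form/field.
from django import forms

class CreateExampleForm(forms.Form):
    example_mode = forms.ChoiceField(
        label="Example mode",
        choices=[('default', 'Default'),
                 ('new-option', 'New option that neutron supports')],
        required=False)

    def clean(self):
        data = super(CreateExampleForm, self).clean()
        # extra validation for the newly added choice goes here
        return data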

It was small enough, and hence I made the request that this be treated as
a bug and not a bp, and only then did I proceed to open the bug.

I will respect what the community decides on this. Please let me know
how we wish to proceed.


Thanks and regards,
Abishek


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy types

2014-03-13 Thread Tim Hinrichs
Hi Prabhakar,

I'm not sure the functionality is split between 'policy' and 'server' as 
cleanly as you describe.

The 'policy' directory contains the Policy Engine.  At its core, the policy 
engine has a generic Datalog implementation that could feasibly be used by 
other OS components.  (I don't want to think about pulling it out into Oslo 
though.  There are just too many other things going on and no demand yet.)  But 
there are also Congress-specific things in that directory, e.g. the class 
Runtime in policy/runtime.py will be the one that we hook up external API calls 
to.

The 'server' directory contains the code for the API web server that calls into 
the Runtime class.

So if you're digging through code, I'd suggest focusing on the 'policy' 
directory and looking at compile.py (responsible for converting Datalog rules 
written as strings into an internal representation) and runtime.py (responsible 
for everything else).  The docs I mentioned in the IRC should have a decent 
explanation of the functions in Runtime that the API web server will hook into. 
 

Be warned though that unless someone raises some serious objections to the 
proposal that started this thread, we'll be removing some of the more 
complicated functions from Runtime.  The compile.py code won't change (much).  
All of the 3 new theories will be instances of MaterializedViewTheory.  That's 
also the code that must change to add in the Python functions we talked about 
(more specifically see MaterializedViewTheory::propagate_rule(), which calls 
TopDownTheory::top_down_evaluation(), which is what will need modification).
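
To make the "Python functions as conditions" idea concrete, here is a purely
illustrative sketch; how builtins would be registered is exactly the part
that still needs a proposal, so treat all the names as assumptions:

# Illustrative only: a Python function acting as a builtin condition that a
# Datalog rule could reference; registration details are not decided yet.
def gt(x, y):
    return x > y

# The kind of rule string compile.py would parse; during top-down evaluation
# the 'gt' literal would be checked by calling the Python function above.
rule = 'warning(vm) :- nova:instance(vm, mem), gt(mem, 4096)'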

Tim
 



- Original Message -
| From: "prabhakar Kudva" 
| To: "OpenStack Development Mailing List (not for usage questions)" 

| Sent: Wednesday, March 12, 2014 1:38:55 PM
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| 
| 
| 
| Hi Tim,
| 
| Thanks for your comments.
| Would be happy to contribute to the propsal and code.
| 
| The existing code already reflects the thoughts below, and got me
| in the line of ideas. Please orrect me if I am wrong as I am
| learning with these discussions:
| 
| One part (reflected by code in "policy" directory is the generic
| "condition-> action engine" which could take logic primitives and
| (in the future) python functions, evaluate the conditions and
| execute the action. This portable core engine be used for any kind of
| policy enforcement
| (as by other OS projects), such as for data center monitoring and
| repair,
| service level enforcement, compliance policies, optimization (energy,
| performance) etc... at any level of the stack. This core engine seems
| possibly
| a combination of logic reasoning/unification and python function
| evaluation, and python code actions.
| 
| Second part (reflected by code in "server") are the applications
| for various purposes. These could be project specific, task specific.
| We could add a diverse set of examples. The example I have worked
| with seems closer to compliance (as in net owner, vm owner check),
| and we will add more.
| 
| Prabhakar
| 
| 
| 
| Date: Wed, 12 Mar 2014 12:33:35 -0700
| From: thinri...@vmware.com
| To: openstack-dev@lists.openstack.org
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| 
| 
| Hi Prabhakar,
| 
| 
| Thanks for the feedback. I'd be interested to hear what other policy
| types you have in mind.
| 
| 
| To answer your questions...
| 
| 
| We're planning on extending our policy language in such a way that
| you can use Python functions as conditions ("" in the grammar)
| in rules. That's on my todo-list but didn't mention it yesterday as
| we were short on time. There will be some syntactic restrictions so
| that we can properly execute those Python functions (i.e. we need to
| always be able to compute the inputs to the function). I had thought
| it was just an implementation detail I hadn't gotten around to (all
| Datalog implementations I've seen have such things), but it sounds
| like it's worth writing up a proposal and sending it around before
| implementing. If that's a pressing concern for you, let me know and
| I'll bump it up the stack (a little). If you'd like, feel free to
| draft a proposal (or remind me to do it once in a while).
| 
| 
| As for actions, I typically think of them as API calls to other OS
| components like Nova. But they could just as easily be Python
| functions. But I would want to avoid an action that changes
| Congress's internal data structures directly (e.g. adding a new
| policy statement). Such actions have caused trouble in the past for
| policy languages (though for declarative programming languages like
| Prolog they are less problematic). I don't think there's anyway we
| can stop people from creating such actions, but I think we should
| advocate against them.
| 
| 
| Tim
| 
| 
| 
| From: "prabhakar Kudva" 
| To: "OpenStack Development Mailing List (not for usage questions)"
| 
| Sent: Wednesday, March 12, 2014 11:34:04 AM
| Subject

Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Michael Factor
Bruce,

Nice list of use cases; thank you for sharing.  One thought:

Bruce Montague  wrote on 13/03/2014 04:34:59 
PM:


> > * (2) [Core tenant/project infrastructure VMs]
> >
> > Twenty VMs power the core infrastructure of a group using a 
> private cloud (OpenStack in their own datacenter). Not all VMs run 
> Windows with VSS, some run Linux with some equivalent mechanism, 
> such as qemu-ga, driving fsfreeze and signal scripts. These VMs are 
> replicated to a remote OpenStack deployment, in a fashion similar to
> (1). Orchestration occurring at the remote site on failover is more 
> complex (correct VM boot order is orchestrated, DHCP service is 
> configured as expected, all IPs are made available and verified). An
> equivalent virtual network topology consisting of multiple networks 
> or subnets might be pre-created or dynamically created at failover time.
> >
> >   a. Storage for all volumes of all VMs might be on a single 
> storage backend (logically a single large volume containing many 
> smaller sub-volumes, examples being a VMware datastore or Hyper-V 
> CSV). This entire large volume might be replicated between similar 
> storage backends at the primary and secondary site. A single 
> replicated large volume thus replicates all the tenant VM's volumes.
> The DR system must trigger quiesce of all volumes to application-
> consistent state.

A variant of having logically a single volume on a single storage backend 
is having all the volumes allocated from storage that provides consistency 
groups.  This may also be related to cross VM consistent 
backups/snapshots.  Of course a question would be whether, and if so, how 
to surface this.

-- Michael

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Brian Haley
Aaron,

I thought the l3-agent already did this if doing a "full sync"?

_sync_routers_task()->_process_routers()->spawn_n(self.process_router, ri)

So each router gets processed in a greenthread.

It seems like the other calls - sudo/rootwrap, /sbin/ip, etc are now the
limiting factor, at least on network nodes with large numbers of namespaces.

-Brian

On 03/13/2014 10:48 AM, Aaron Rosen wrote:
> The easiest/quickest thing to do for ice house would probably be to run the
> initial sync in parallel like the dhcp-agent does for this exact reason. See:
> https://review.openstack.org/#/c/28914/ which did this for thr dhcp-agent.
> 
> Best,
> 
> Aaron
> 
> On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo  > wrote:
> 
> Yuri, could you elaborate your idea in detail? , I'm lost at some
> points with your unix domain / token authentication.
> 
> Where does the token come from?,
> 
> Who starts rootwrap the first time?
> 
> If you could write a full interaction sequence, on the etherpad, from
> rootwrap daemon start ,to a simple call to system happening, I think 
> that'd
> help my understanding.
> 
> 
> Here it is: https://etherpad.openstack.org/p/rootwrap-agent
> Please take a look.
> 
> -- 
> 
> Kind regards, Yuriy.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] All LDAP users returned using keystone v3/users API

2014-03-13 Thread Anna A Sortland
[A] The current keystone LDAP community driver returns all users that 
exist in LDAP via the API call v3/users, instead of returning just users 
that have role grants (similar processing is true for groups). This could 
potentially be a very large number of users. We have seen large companies 
with LDAP servers containing hundreds and thousands of users. We are aware 
of the filters available in keystone.conf ([ldap].user_filter and 
[ldap].query_scope) to cut down on the number of results, but they do not 
provide sufficient filtering (for example, it is not possible to set 
user_filter to members of certain known groups for OpenLDAP without 
creating a memberOf overlay on the LDAP server). 

[Nathan Kinder] What attributes would you filter on?  It seems to me that 
LDAP would need to have knowledge of the roles to be able to filter based 
on the roles.  This is not necessarily the case, as identity and 
assignment can be split in Keystone such that identity is in LDAP and role 
assignment is in SQL.  I believe it was designed this way to deal with 
deployments
where LDAP already exists and there is no need (or possibility) of adding 
role info into LDAP. 

[A] That's our main use case. The users and groups are in LDAP and role 
assignments are in SQL. 
You would filter on role grants and this information is in SQL backend. So 
new API would need to query both identity and assignment drivers. 

[Nathan Kinder] Without filtering based on a role attribute in LDAP, I 
don't think that there is a good solution if you have OpenStack and 
non-OpenStack users mixed in the same container in LDAP.
If you want to first find all of the users that have a role assigned to 
them in the assignments backend, then pull their information from LDAP, I 
think that you will end up with one LDAP search operation per user. This 
also isn't a very scalable solution.

[A] What was the reason the LDAP driver was written this way, instead of 
returning just the users that have OpenStack-known roles? Was the creation 
of a separate API for this function considered? 
Are other exploiters of OpenStack (or users of Horizon) experiencing this 
issue? If so, what was their approach to overcome this issue? We have been 
prototyping a keystone extension that provides an API that provides this 
filtering capability, but it seems like a function that should be 
generally available in keystone.

[Nathan Kinder] I'm curious to know how your prototype is looking to 
handle this. 

[A] The prototype basically first calls assignment API 
list_role_assignments() to get a list of users and groups with role 
grants. It then iterates the retrieved list and calls identity API 
list_users_in_group() to get the list of users in these groups with grants 
and get_user() to get users that have role grants but do not belong to the 
groups with role grants (a call for each user). Both calls ignore groups 
and users that are not found in the LDAP registry but exist in SQL (this 
could be the result of a user or group being removed from LDAP, but the 
corresponding role grant was not revoked). Then the code removes 
duplicates if any and returns the combined list. 
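
In condensed form, the prototype flow is roughly the following (method names
as described above, error handling simplified):

# Condensed sketch of the prototype; details simplified.
def list_users_with_role_grants(assignment_api, identity_api):
    users = {}
    for assignment in assignment_api.list_role_assignments():
        try:
            if 'group_id' in assignment:
                for user in identity_api.list_users_in_group(
                        assignment['group_id']):
                    users[user['id']] = user
            elif 'user_id' in assignment:
                user = identity_api.get_user(assignment['user_id'])
                users[user['id']] = user
        except Exception:
            # grant still exists in SQL but the user/group is gone from
            # LDAP; skip it, as described above
            continue
    return list(users.values())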

The new extension API is /v3/my_new_extension/users. Maybe the better 
naming would be v3/roles/users (list users with any role) - compare to 
existing v3/roles/{role_id}/users (list users with a specified role). 


Another alternative that we've tried is just a new identity driver that 
inherits from keystone.identity.backends.ldap.LDAPIdentity and overrides 
just the list_users() function. That's probably not the best approach from 
OpenStack standards point of view but I would like to get community's 
feedback on whether this is acceptable. 


I've posted this question to openstack-security last week but could not 
get any feedback after Nathan's first reply. Reposting to openstack-dev..



Anna Sortland
Cloud Systems Software Development
IBM Rochester, MN
annas...@us.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Ben Nemec

On 2014-03-13 09:44, Sean Dague wrote:
I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in language specific package manager).


IIRC, when I looked into using the distro-packaged noVNC it broke all 
kinds of things because for some reason noVNC has a dependency on 
nova-common (now python-nova it looks like), so we end up pulling in all 
kinds of distro nova stuff that conflicts with the devstack installed 
pieces.  It doesn't seem like a correct dep to me, but maybe Solly can 
comment on whether it's necessary or not.


-Ben



-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:
@bnemec: I don't think that's been considered.  I'm actually one of 
the upstream maintainers for noVNC.  The only concern that I'd have 
with OpenStack adopting noVNC (there are other maintainers, as well as 
the author, so I'd have to check with them as well) is that there are 
a few other projects that use noVNC, so we'd need to make sure that no 
OpenStack-specific code gets merged into noVNC if we adopt it.  Other 
than that, though, adopting noVNC doesn't sound like a horrible idea.


Best Regards,
Solly Ross

- Original Message -
From: "Ben Nemec" 
To: "OpenStack Development Mailing List (not for usage questions)" 


Cc: openstack-in...@lists.openstack.org
Sent: Wednesday, March 12, 2014 3:38:19 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
cloning	noVNC from github.com/kanaka




On 2014-03-11 20:34, Joshua Harlow wrote:


https://status.github.com/messages
* 'GitHub.com is operating normally, despite an ongoing DDoS attack. 
The mitigations we have in place are proving effective in protecting 
us and we're hopeful that we've got this one resolved.'
If you were cloning from github.org and not http://git.openstack.org 
then you were likely seeing some of the DDoS attack in action.
Unfortunately I don't think novnc is in git.openstack.org because it's 
not an OpenStack project. I wonder if we should investigate adopting 
it (if the author(s) are amenable to that) since we're using the git 
version of it. Maybe that's already been considered and I just don't 
know about it. :-)

-Ben



From: Sukhdev Kapur < sukhdevka...@gmail.com >
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" < openstack-dev@lists.openstack.org >

Date: Tuesday, March 11, 2014 at 4:08 PM
To: "Dane Leblanc (leblancd)" < lebla...@cisco.com >
Cc: "OpenStack Development Mailing List (not for usage questions)" < 
openstack-dev@lists.openstack.org >, " 
openstack-in...@lists.openstack.org " < 
openstack-in...@lists.openstack.org >
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures 
cloning noVNC from github.com/kanaka




I have noticed that even clone of devstack has failed few times within 
last couple of hours - it was running fairly smooth so far.

-Sukhdev


On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < 
sukhdevka...@gmail.com > wrote:




[adding openstack-dev list as well ]
I have noticed that this has stated hitting my builds within last few 
hours. I have noticed exact same failures on almost 10 builds.
Looks like something has happened within last few hours - perhaps the 
load?

-Sukhdev


On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) < 
lebla...@cisco.com > wrote:






Apologies if this is the wrong audience for this question...



I'm seeing intermittent failures running stack.sh whereby 'git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning 
various errors. Below are 2 examples.




Is this a known issue? Are there any localrc settings which might help 
here?




Example 1:



2014-03-11 15:00:33.779 | + is_service_enabled n-novnc

2014-03-11 15:00:33.780 | + return 0

2014-03-11 15:00:33.781 | ++ trueorfalse False

2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False

2014-03-11 15:00:33.783 | + '[' False = True ']'

2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC

2014-03-11 15:00:33.785 | + git_clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC master


2014-03-11 15:00:33.786 | + GIT_REMOTE= 
https://github.com/kanaka/noVNC.git


2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC

2014-03-11 15:00:33.789 | + GIT_REF=master

2014-03-11 15:00:33.790 | ++ trueorfalse False False

2014-03-11 15:00:33.

Re: [openstack-dev] Replication multi cloud

2014-03-13 Thread McCabe, Donagh
Marco,

The replication *inside* Swift is not intended to move data between two 
different Swift instances -- it's an internal data repair and rebalance 
mechanism.

However, there is a different mechanism, called container-to-container 
synchronization that might be what you are looking for. It will sync two 
containers in different swift instances. The swift instances may be in 
different Keystone administrative domains -- the authentication is not based on 
Keystone. It does require that each swift instance be configured to "recognise" 
each other. However, this is only usable for low update rates.
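
As a rough illustration, pointing a container at its peer with
python-swiftclient uses the container sync headers; the URLs and key below
are placeholders, and both clusters still have to be configured to allow
each other as sync targets:

# Placeholder values throughout.
from swiftclient import client as swift_client

conn = swift_client.Connection(authurl='https://cluster-a.example.com:5000/v2.0',
                               user='tenant:user', key='password',
                               auth_version='2')
conn.post_container('mycontainer', headers={
    'X-Container-Sync-To': 'https://cluster-b.example.com/v1/AUTH_acct/mycontainer',
    'X-Container-Sync-Key': 'shared-secret',
})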

Regards,
Donagh

-Original Message-
From: Fargetta Marco [mailto:marco.farge...@ct.infn.it] 
Sent: 13 March 2014 11:24
To: OpenStack Development Mailing List
Subject: [openstack-dev] [swift] Replication multi cloud

Hi all,

we would use the replication mechanism in swift to replicate the data in two 
swift instances deployed in different clouds with different keystones and 
administrative domains.

Is this possible with the current replication facilities or should they stay in
the same cloud sharing the keystone?

Cheers,
Marco



--

Eng. Marco Fargetta, PhD

Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy

EMail: marco.farge...@ct.infn.it


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS design proposals

2014-03-13 Thread Brandon Logan
This is the object model proposals:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

From: Prashanth Hari [hvpr...@gmail.com]
Sent: Thursday, March 13, 2014 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] LBaaS design proposals

Hi,

I am a late comer in this discussion.
Can someone please point me to the design proposal documentations in addition 
to the object model ?


Thanks,
Prashanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [Infra] pep8 issues in tempest gate / testscenarios lib

2014-03-13 Thread Koderer, Marc
Hi folks,

I can't make it to the QA meeting today, so I wanted to summarize the issue
that we have with the pep8 gate in tempest. An example of the issue can be
found here:
  https://review.openstack.org/#/c/79256/ 
  
http://logs.openstack.org/56/79256/1/gate/gate-tempest-pep8/088cc12/console.html

The pep8 check shows an error, but the check itself is marked as a success.

For me this shows two issues. First, flake8 should return an exit code != 0.
I will have a closer look into hacking to see what went wrong here.

The second issue is the current implementation of the negative testing framework:
we are using the testscenarios lib with the "load_tests" variable interpreted
by the test runner. This forces us to build the scenarios at import time, and if
we want to have tempest configuration options for this (like those introduced in
https://review.openstack.org/#/c/73982/) the laziness of the config doesn't
work.
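
A minimal illustration of the import-time behaviour (the test class and
scenario values are made up):

# Minimal illustration; the test class and scenarios are made up.
import testscenarios
import testtools

load_tests = testscenarios.load_tests_apply_scenarios

class TestFlavorsNegative(testtools.TestCase):
    # evaluated when the module is imported, i.e. before any lazy tempest
    # config handling could run
    scenarios = [
        ('missing_name', {'body': {'ram': 512}}),
        ('invalid_ram', {'body': {'name': 'x', 'ram': -1}}),
    ]

    def test_create_flavor_fails(self):
        self.assertIsInstance(self.body, dict)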

Although it seems that if I remove the xml class's inheritance from the
json class
(https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_flavors_negative_xml.py#L24)
the error no longer appears, I see a general problem with
the use of "import-time" code, and we may want to think about a better solution in
general.

I'll try to address the missing pieces tomorrow.
Bug: https://bugs.launchpad.net/tempest/+bug/1291826

Regards,
Marc

DEUTSCHE TELEKOM AG
Digital Business Unit, Cloud Services (P&I)
Marc Koderer
Cloud Technology Software Developer
T-Online-Allee 1, 64211 Darmstadt
E-Mail: m.kode...@telekom.de
www.telekom.com   

LIFE IS FOR SHARING. 

DEUTSCHE TELEKOM AG
Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
Board of Management: René Obermann (Chairman),
Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges,
Dr. Thomas Kremer, Claudia Nemat, Prof. Dr. Marion Schick
Commercial register: Amtsgericht Bonn HRB 6794
Registered office: Bonn

BIG CHANGES START SMALL – CONSERVE RESOURCES BY NOT PRINTING EVERY E-MAIL.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] any recommendations for live debugging of openstack services?

2014-03-13 Thread Solly Ross
Well, for a non-interactive view of things, you can use the 
openstack.common.report functionality.  It's currently integrated into Nova, 
and I believe that the other projects are working to get it integrated as well. 
 To use it, you just send a SIGUSR1 to any Nova process, and a report of the 
current stack traces of threads and green threads, as well as the current 
configuration options, will be dumped.
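
For example (the PID below is just a placeholder for whatever nova process you
care about):

# Placeholder PID; in practice you'd look it up with ps/pgrep.
import os
import signal

nova_pid = 12345  # e.g. a running nova-compute
os.kill(nova_pid, signal.SIGUSR1)
# the report is then dumped by that process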

It doesn't look like exactly what you want, but I figured it might be useful to 
you anyway.

Best Regards,
Solly Ross

- Original Message -
From: "Chris Friesen" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, March 12, 2014 12:47:32 PM
Subject: [openstack-dev] any recommendations for live debugging of openstack
services?


Are there any tools that people can recommend for live debugging of 
openstack services?

I'm looking for a mechanism where I could take a running system that 
isn't behaving the way I expect and somehow poke around inside the 
program while it keeps running.  (Sort of like tracepoints in gdb.)

I've seen mention of things like twisted.manhole and 
eventlet.backdoor... has anyone used this sort of thing with OpenStack? 
Are there better options?
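
(For reference, a minimal standalone sketch of the eventlet.backdoor approach,
not OpenStack code: the service opens a local port you can telnet into for a
live Python prompt inside the running process.)

import eventlet
from eventlet import backdoor


def main():
    # Expose an interactive interpreter on localhost:3000 inside this process.
    eventlet.spawn(backdoor.backdoor_server,
                   eventlet.listen(('127.0.0.1', 3000)),
                   locals={'note': 'inspect live objects from here'})
    # ... the real service work would continue here ...
    while True:
        eventlet.sleep(1)


if __name__ == '__main__':
    main()

# While the process runs:  telnet 127.0.0.1 3000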

Also, has anyone ever seen an implementation of watchpoints for Python? 
By that I mean the ability to set a breakpoint when the value of a 
variable changes.  I found 
"https://sourceforge.net/blog/watchpoints-in-python/" but it looks 
pretty hacky.
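
(As a very rough illustration of what such a watchpoint could look like -- a
slow, hacky sketch built on sys.settrace, not an existing library:)

import sys


def watch(varname, callback):
    # Install a global tracer that reports changes to `varname` in any frame.
    last = {}

    def local_tracer(frame, event, arg):
        if event == 'line' and varname in frame.f_locals:
            value = frame.f_locals[varname]
            key = id(frame)
            if key in last and last[key] != value:
                callback(frame, last[key], value)
            last[key] = value
        return local_tracer

    def global_tracer(frame, event, arg):
        # Returning the local tracer enables per-line tracing in each new frame.
        return local_tracer

    sys.settrace(global_tracer)


def on_change(frame, old, new):
    print("%s:%d: watched variable changed %r -> %r"
          % (frame.f_code.co_filename, frame.f_lineno, old, new))


def worker():
    counter = 0
    for _ in range(3):
        counter += 1  # each change triggers on_change


watch('counter', on_change)
worker()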

Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-13 Thread Luke Gorrie
Oh, and in my haste I forgot to say: thank you so much to everybody
who's been giving me pointers on IRC, and especially to Jay for the blog
walkthrough!


On 13 March 2014 15:30, Luke Gorrie  wrote:

> Howdy!
>
> I have some tech questions I'd love some pointers on from people who've
> succeeded in setting up CI for Neutron based on the upstream devstack-gate.
>
> Here are the parts where I'm blocked now:
>
> 1. I need to enable an ML2 mech driver. How can I do this? I have been
> trying to create a localrc with a "Q_ML2_PLUGIN_MECHANISM_DRIVERS=..."
> line, but it appears that the KEEP_LOCALRC option in devstack-gate is
> broken (confirmed on #openstack-infra).
>
> 2. How do I streamline which tests are run? I tried adding "export
> DEVSTACK_GATE_TEMPEST_REGEX=network" in the Jenkins job configuration but I
> don't see any effect. (word on #openstack-infra is this option is not used
> by them so status unknown.)
>
> 3. How do I have Jenkins copy the log files into a directory on the
> Jenkins master node (that I can serve up with Apache)? This is left as an
> exercise to the reader in the blog tutorial but I would love a cheat, since
> I am getting plenty of exercise already :-).
>
> I also have the meta-question: How can I test changes/fixes to
> devstack-gate? I've attempted many times to modify how scripts work, but I
> don't have a global understanding of the whole openstack-infra setup, and
> somehow my changes always end up being clobbered by a fresh checkout from
> the upstream repo on Github. That's crazy frustrating when it takes 10+
> minutes to fire up a test via Jenkins even when I'm only e.g. trying to add
> an "echo" to a shell script somewhere to see what's in an environment
> variable at a certain point in a script. I'd love a faster edit-compile-run
> loop, especially one that doesn't involve needing to get changes merged
> upstream into the official openstack-infra repo.
>
> I also have an issue that worries me. I once started seeing tempest tests
> failing due to a resource leak where the kernel ran out of loopback devices
> and that broke tempest. Here is what I saw:
>
> root@egg-slave:~# losetup -a
> /dev/loop0: [fc00]:5248399
> (/opt/stack/data/swift/drives/images/swift.img)
> /dev/loop1: [fc00]:5248409 (/opt/stack/data/stack-volumes-backing-file)
> /dev/loop2: [fc00]:5248467
> (/opt/stack/data/swift/drives/images/swift.img)
> /dev/loop3: [fc00]:5248496
> (/opt/stack/data/swift/drives/images/swift.img)
> /dev/loop4: [fc00]:5248702
> (/opt/stack/data/swift/drives/images/swift.img)
> /dev/loop5: [fc00]:5248735
> (/opt/stack/data/swift/drives/images/swift.img)
> /dev/loop6: [fc00]:5248814
> (/opt/stack/data/swift/drives/images/swift.img)
> /dev/loop7: [fc00]:5248825
> (/opt/stack/data/swift/drives/images/swift.img)
>
> and trying to remove this with 'losetup -d ...' had no effect. I rebooted.
> (I'm on Ubuntu 13.10.)
>
> This kind of spurious error has the potential to cause my CI to start
> casting negative votes (again) and upsetting everybody's workflows, not
> because my tests have actually found a problem but just because it's a
> non-trivial problem for me to keep a devstack-gate continuously
> operational. I hope that doesn't happen, but with this level of
> infrastructure complexity it does feel a little like playing russian
> roulette that the next glitch in
> devstack/devstack-gate/Jenkins/Gerrit/Zuul/Gearman/... will manifest itself
> in the copy that's running on my server. 
>
> Cheers,
> -Luke
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Solly Ross
@sdague: We (the upstream noVNC/websockify maintainers) are attempting to get 
back on the bandwagon WRT releases.  Unfortunately, until a few months ago the 
main developer had taken a break from noVNC work, so there isn't a recent 
release.  We just pushed a new release of Websockify the week before last, and 
I'll start the discussion about a new release of noVNC.

Best Regards,
Solly Ross

- Original Message -
From: "Sean Dague" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, openst...@nemebean.com
Cc: openstack-in...@lists.openstack.org
Sent: Thursday, March 13, 2014 10:44:09 AM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka

I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in language specific package manager).

-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:
> @bnemec: I don't think that's been considered.  I'm actually one of the 
> upstream maintainers for noVNC.  The only concern that I'd have with 
> OpenStack adopting noVNC (there are other maintainers, as well as the author, 
> so I'd have to check with them as well) is that there are a few other 
> projects that use noVNC, so we'd need to make sure that no OpenStack-specific 
> code gets merged into noVNC if we adopt it.  Other than that, though, 
> adopting noVNC doesn't sound like a horrible idea.
> 
> Best Regards,
> Solly Ross
> 
> - Original Message -
> From: "Ben Nemec" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: openstack-in...@lists.openstack.org
> Sent: Wednesday, March 12, 2014 3:38:19 PM
> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning  
> noVNC from github.com/kanaka
> 
> 
> 
> On 2014-03-11 20:34, Joshua Harlow wrote: 
> 
> 
> https://status.github.com/messages 
> * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
> mitigations we have in place are proving effective in protecting us and we're 
> hopeful that we've got this one resolved.' 
> If you were cloning from github.com and not http://git.openstack.org then you 
> were likely seeing some of the DDoS attack in action. 
> Unfortunately I don't think novnc is in git.openstack.org because it's not an 
> OpenStack project. I wonder if we should investigate adopting it (if the 
> author(s) are amenable to that) since we're using the git version of it. 
> Maybe that's already been considered and I just don't know about it. :-) 
> -Ben 
> 
> 
> 
> From: Sukhdev Kapur < sukhdevka...@gmail.com > 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < 
> openstack-dev@lists.openstack.org > 
> Date: Tuesday, March 11, 2014 at 4:08 PM 
> To: "Dane Leblanc (leblancd)" < lebla...@cisco.com > 
> Cc: "OpenStack Development Mailing List (not for usage questions)" < 
> openstack-dev@lists.openstack.org >, " openstack-in...@lists.openstack.org " 
> < openstack-in...@lists.openstack.org > 
> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
> noVNC from github.com/kanaka 
> 
> 
> 
> I have noticed that even a clone of devstack has failed a few times within the last 
> couple of hours - it had been running fairly smoothly until now. 
> -Sukhdev 
> 
> 
> On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < sukhdevka...@gmail.com > 
> wrote: 
> 
> 
> 
> [adding openstack-dev list as well ] 
> I have noticed that this has started hitting my builds within the last few hours. 
> I have noticed the exact same failures on almost 10 builds. 
> Looks like something has happened within the last few hours - perhaps the load? 
> -Sukhdev 
> 
> 
> On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) < lebla...@cisco.com 
> > wrote: 
> 
> 
> 
> 
> 
> Apologies if this is the wrong audience for this question... 
> 
> 
> 
> I'm seeing intermittent failures running stack.sh whereby 'git clone 
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
> errors. Below are 2 examples. 
> 
> 
> 
> Is this a known issue? Are there any localrc settings which might help here? 
> 
> 
> 
> Example 1: 
> 
> 
> 
> 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc 
> 
> 2014-03-11 15:00:33.780 | + return 0 
> 
> 2014-03-11 15:00:33.781 | ++ trueorfalse False 
> 
> 2014-03-11 15:00:33.782 |

[openstack-dev] [Neutron][LBaaS] LBaaS design proposals

2014-03-13 Thread Prashanth Hari
Hi,

I am a latecomer to this discussion.
Can someone please point me to the design proposal documentation, in
addition to the object model?


Thanks,
Prashanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-13 Thread Joe Hakim Rahme
On 10 Mar 2014, at 22:54, David Kranz  wrote:

> There are a number of patches up for review that make various changes to use 
> "six" apis instead of Python 2 constructs. While I understand the desire to 
> get a head start on getting Tempest to run in Python 3, I'm not sure it makes 
> sense to do this work piecemeal until we are near ready to introduce a py3 
> gate job. Many contributors will not be aware of what all the differences are 
> and py2-isms will creep back in resulting in more overall time spent making 
> these changes and reviewing. Also, the core review team is busy trying to do 
> stuff important to the icehouse release which is barely more than 5 weeks 
> away. IMO we should hold off on various kinds of "cleanup" patches for now.

+1 I agree with you David.

However, what’s the best way we can go about making sure this becomes a
goal for the next release cycle?

---
Joe H. Rahme
IRC: rahmu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Aaron Rosen
The easiest/quickest thing to do for Icehouse would probably be to run the
initial sync in parallel, like the dhcp-agent does, for this exact reason.
See https://review.openstack.org/#/c/28914/ which did this for the
dhcp-agent.
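
(As a generic illustration of the parallel-sync idea -- not the actual agent
code, and the names below are made up:)

import eventlet
eventlet.monkey_patch()


def sync_router(router_id):
    # Stand-in for the per-resource sync work (RPC calls, namespace setup, ...).
    eventlet.sleep(0.1)
    return router_id


def initial_sync(router_ids, concurrency=16):
    pool = eventlet.GreenPool(concurrency)
    # imap keeps at most `concurrency` syncs in flight at once.
    for synced in pool.imap(sync_router, router_ids):
        print('synced %s' % synced)


if __name__ == '__main__':
    initial_sync(['router-%d' % i for i in range(50)])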

Best,

Aaron
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo wrote:
>
> Yuri, could you elaborate your idea in detail? I'm lost at some
> points with your unix domain / token authentication.
>
> Where does the token come from?
>
> Who starts rootwrap the first time?
>
> If you could write a full interaction sequence, on the etherpad, from
> rootwrap daemon start to a simple call to the system happening, I think that'd
> help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.
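
(Purely as an illustration of the general shape under discussion -- a
long-lived root daemon on a unix socket that only runs commands for callers
presenting a shared token -- and not the actual design on the etherpad; a real
implementation would also keep rootwrap's command filters:)

import binascii
import os
import socket
import subprocess

SOCK_PATH = '/tmp/rootwrap-demo.sock'     # illustrative path
TOKEN = binascii.hexlify(os.urandom(16))  # handed to the agent out of band


def serve():
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK_PATH)
    server.listen(1)
    while True:
        conn, _addr = server.accept()
        data = conn.recv(4096)
        token, _, command = data.partition(b' ')
        if token != TOKEN:
            conn.sendall(b'denied')  # reject callers without the token
        else:
            # A real daemon would match `command` against the rootwrap filters
            # before executing anything.
            conn.sendall(subprocess.check_output(command.decode().split()))
        conn.close()


if __name__ == '__main__':
    serve()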

-- 

Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-13 Thread Russell Bryant
On 03/12/2014 12:14 PM, Sylvain Bauza wrote:
> Hi Russell,
> Thanks for replying,
> 
> 
> 2014-03-12 16:46 GMT+01:00 Russell Bryant  >:
> The biggest concern seemed to be that we weren't sure whether Climate
> makes sense as an independent project or not.  We think it may make more
> sense to integrate what Climate does today into Nova directly.  More
> generally, we think reservations of resources may best belong in the
> APIs responsible for managing those resources, similar to how quota
> management for resources lives in the resource APIs.
> 
> There is some expectation that this type of functionality will extend
> beyond Nova, but for that we could look at creating a shared library of
> code to ease implementing this sort of thing in each API that needs it.
> 
> 
> 
> That's really a good question, so maybe I could give some feedback on
> how we deal with the existing use-cases.
> About the possible integration with Nova, that's already something we
> did for the virtual instances use-case, thanks to an API extension
> responsible for checking whether a scheduler hint called 'reservation' was
> set, and if so, making use of the python-climateclient package to send a
> request to Climate.
> 
> I truly agree that users should probably not use a separate API for
> reserving resources, and that this would rather be the duty of the project
> itself (Nova, Cinder or even Heat). That said, we think there is a need for
> a global scheduler managing resources rather than siloing them per project.
> Hence we still think there is a need for a Climate Manager.

What we need to dig into is *why* you feel it needs to be global.

I'm trying to understand what you're saying here ... do you mean that
since we're trying to get to where there's a global scheduler, that it
makes sense there should be a central point for this, even if the API is
through the existing compute/networking/storage APIs?

If so, I think that makes sense.  However, until we actually have
something for scheduling, I think we should look at implementing all of
this in the services, and perhaps share some code with a Python library.
 So, I'm thinking along the lines of ...

1) Take what Climate does today and work to integrate it into Nova,
using as much of the existing Climate code as makes sense.  Be careful
about coupling in Nova so that we can easily split out the right code
into a library once we're ready to work on integration in another project.

2) In parallel, continue working on decoupling nova-scheduler from the
rest of Nova, so that we can split it out into its own project.

3) Once the scheduler is split out, re-visit what part of reservations
functionality belongs in the new scheduling project and what parts
should remain in each of the projects responsible for managing resources.

> That said, there are different ways to plug in to the Manager; our
> proposal is to deliver a REST API and a Python client so that there
> could still be some operator access for managing the resources if
> needed. The other way would be to only expose an RPC interface, like the
> scheduler does at the moment, but as the move to Pecan/WSME is already
> close to done (reviews currently in progress), that's still a good
> opportunity for leveraging the existing bits of code.

Yes, I would want to use as much of the existing code as possible.

As I said above, I just think it's premature to make this its own
project on its own, unless we're able to look at scheduling more broadly
as its own project.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Sean Dague
I think a bigger question is why are we using a git version of something
outside of OpenStack.

Where is a noVNC release we can point to and use?

In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.

Currently we have 4 repositories that fit this bill:

SWIFT3_REPO=${SWIFT3_REPO:-http://github.com/fujita/swift3.git}
NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}

I think all of these probably need to be removed from devstack. We
should be using release versions (preferably in distros, though allowed
to be in language specific package manager).

-Sean

On 03/13/2014 10:26 AM, Solly Ross wrote:
> @bnemec: I don't think that's been considered.  I'm actually one of the 
> upstream maintainers for noVNC.  The only concern that I'd have with 
> OpenStack adopting noVNC (there are other maintainers, as well as the author, 
> so I'd have to check with them as well) is that there are a few other 
> projects that use noVNC, so we'd need to make sure that no OpenStack-specific 
> code gets merged into noVNC if we adopt it.  Other than that, though, 
> adopting noVNC doesn't sound like a horrible idea.
> 
> Best Regards,
> Solly Ross
> 
> - Original Message -
> From: "Ben Nemec" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: openstack-in...@lists.openstack.org
> Sent: Wednesday, March 12, 2014 3:38:19 PM
> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning  
> noVNC from github.com/kanaka
> 
> 
> 
> On 2014-03-11 20:34, Joshua Harlow wrote: 
> 
> 
> https://status.github.com/messages 
> * 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
> mitigations we have in place are proving effective in protecting us and we're 
> hopeful that we've got this one resolved.' 
> If you were cloning from github.com and not http://git.openstack.org then you 
> were likely seeing some of the DDoS attack in action. 
> Unfortunately I don't think novnc is in git.openstack.org because it's not an 
> OpenStack project. I wonder if we should investigate adopting it (if the 
> author(s) are amenable to that) since we're using the git version of it. 
> Maybe that's already been considered and I just don't know about it. :-) 
> -Ben 
> 
> 
> 
> From: Sukhdev Kapur < sukhdevka...@gmail.com > 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < 
> openstack-dev@lists.openstack.org > 
> Date: Tuesday, March 11, 2014 at 4:08 PM 
> To: "Dane Leblanc (leblancd)" < lebla...@cisco.com > 
> Cc: "OpenStack Development Mailing List (not for usage questions)" < 
> openstack-dev@lists.openstack.org >, " openstack-in...@lists.openstack.org " 
> < openstack-in...@lists.openstack.org > 
> Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
> noVNC from github.com/kanaka 
> 
> 
> 
> I have noticed that even a clone of devstack has failed a few times within the last 
> couple of hours - it had been running fairly smoothly until now. 
> -Sukhdev 
> 
> 
> On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < sukhdevka...@gmail.com > 
> wrote: 
> 
> 
> 
> [adding openstack-dev list as well ] 
> I have noticed that this has started hitting my builds within the last few hours. 
> I have noticed the exact same failures on almost 10 builds. 
> Looks like something has happened within the last few hours - perhaps the load? 
> -Sukhdev 
> 
> 
> On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) < lebla...@cisco.com 
> > wrote: 
> 
> 
> 
> 
> 
> Apologies if this is the wrong audience for this question... 
> 
> 
> 
> I'm seeing intermittent failures running stack.sh whereby 'git clone 
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
> errors. Below are 2 examples. 
> 
> 
> 
> Is this a known issue? Are there any localrc settings which might help here? 
> 
> 
> 
> Example 1: 
> 
> 
> 
> 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc 
> 
> 2014-03-11 15:00:33.780 | + return 0 
> 
> 2014-03-11 15:00:33.781 | ++ trueorfalse False 
> 
> 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False 
> 
> 2014-03-11 15:00:33.783 | + '[' False = True ']' 
> 
> 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC 
> 
> 2014-03-11 15:00:33.785 | + git_clone https://github.com/kanaka/noVNC.git 
> /opt/stack/noVNC master 
> 
> 2014-03-11 15:00:33.786 | + GIT_REMOTE= https://github.com/kanaka/noVNC.git 
> 
> 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC 
> 
> 2014-03-11 15:00:33.789 | + GIT_REF=master 
> 
> 2014-03-11 15:00:33.790 | ++ trueorfalse False False 
> 
> 2014-03-11 15:00:33.791 | + RECLONE=False 
> 
> 2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]] 
> 
> 2014-03-11 15:00:33.793 | + echo master 
> 
> 2014-03-11 15:00:33.794 | + egrep -q '^refs' 
> 
> 2014-03-11 15:0

Re: [openstack-dev] Disaster Recovery for OpenStack - call for stakeholder

2014-03-13 Thread Bruce Montague
Hi, about OpenStack and VSS: does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a within-guest 
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work 
with KVM? Does qemu-ga work with libvirt (can a VSS quiesce be triggered via 
libvirt)? I think there was an effort for qemu-ga to use fsfreeze as an 
equivalent to VSS on Linux systems; was that done?  If so, could an OpenStack 
API provide a generic quiesce request that would then get passed to libvirt? 
(Also, the XenServer VSS support seems different from qemu/KVM's, is this true? 
Can it also be accessed through libvirt?)
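
For what it's worth, the guest agent's freeze/thaw commands can at least be
reached through libvirt's qemu agent command passthrough. A rough sketch
(assuming libvirt-python with the libvirt_qemu module, a running qemu-ga inside
the guest, and an illustrative domain name):

import libvirt
import libvirt_qemu

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-instance')  # made-up domain name

# Freeze guest filesystems (roughly the Linux analogue of a VSS quiesce).
libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-fsfreeze-freeze"}', 30, 0)
try:
    pass  # take the application-consistent volume snapshot here
finally:
    libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-fsfreeze-thaw"}', 30, 0)

conn.close()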

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important for enterprise requirements, but 
there's an important missing piece in the current OpenStack APIs: support for 
application-consistent backups via Volume Shadow Copy (or other solutions) at 
the instance level, including differential / incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with the 
free Hyper-V Server), with e.g. vSphere and XenServer supporting it as well 
(quiescing), and with the option for third-party vendors to add drivers for their 
solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the OpenStack 
community wants to support those use cases or not. Cinder backup/restore 
support [1] and volume replication [2] are surely a great starting point in 
this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


> On 12/mar/2014, at 20:45, "Bruce Montague"  
> wrote:
>
>
> Hi, regarding the call to create a list of disaster recovery (DR) use cases ( 
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html ), 
> the following list sketches some speculative OpenStack DR use cases. These 
> use cases do not reflect any specific product behavior and span a wide 
> spectrum. This list is not a proposal, it is intended primarily to solicit 
> additional discussion. The first basic use case, (1), is described in a bit 
> more detail than the others; many of the others are elaborations on this 
> basic theme.
>
>
>
> * (1) [Single VM]
>
> A single Windows VM with 4 volumes and VSS (Microsoft's Volume Shadowcopy 
> Services) installed runs a key application and integral database. VSS can 
> quiesce the app, database, filesystem, and I/O on demand and can be invoked 
> external to the guest.
>
>   a. The VM's volumes, including the boot volume, are replicated to a remote 
> DR site (another OpenStack deployment).
>
>   b. Some form of replicated VM or VM metadata exists at the remote site. 
> This VM/description includes the replicated volumes. Some systems might use 
> cold migration or some form of wide-area live VM migration to establish this 
> remote site VM/description.
>
>   c. When specified by an SLA or policy, VSS is invoked, putting the VM's 
> volumes in an application-consistent state. This state is flushed all the way 
> through to the remote volumes. As each remote volume reaches its 
> application-consistent state, this is recognized in some fashion, perhaps by 
> an in-band signal, and a snapshot of the volume is made at the remote site. 
> Volume replication is re-enabled immediately following the snapshot. A backup 
> is then made of the snapshot on the remote site. At the completion of this 
> cycle, application-consistent volume snapshots and backups exist on the 
> remote site.
>
>   d.  When a disaster or firedrill happens, the replication network
> connection is cut. The remote site VM pre-created or defined so as to use the 
> replicated volumes is then booted, using the latest application-consistent 
> state of the replicated volumes. The entire VM environment (management 
> accounts, networking, external firewalling, console access, etc..), similar 
> to that of the primary, either needs to pre-exist in some fashion on the 
> secondary or be created dynamically by the DR system. The booting VM either 
> needs to attach to a virtual network environment similar to at the primary 
> site or the VM needs to have boot code that can alter its network 
> personality. Networking configuration may occur in conjunction with an update 
> to DNS and other networking infrastructure. It is necessary for all required 
> networking configuration  to be pre-specified or done automatically. No 
> manual admin activity should be required. Environment requirements may be 
> stored in a DR configuration or database associated with the replication.
>
>   e. In a firedrill or test, the virtual network environment at the remote 
> site

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-13 Thread Duncan Thomas
On 12 March 2014 17:35, Tim Bell  wrote:

> And if the same mistake is done for a cinder volume or a trove database ?

Deferred deletion for cinder has been proposed, and there have been
few objections to it... nobody has put forward code yet, but anybody
is welcome to do so.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-13 Thread Luke Gorrie
Howdy!

I have some tech questions I'd love some pointers on from people who've
succeeded in setting up CI for Neutron based on the upstream devstack-gate.

Here are the parts where I'm blocked now:

1. I need to enable an ML2 mech driver. How can I do this? I have been
trying to create a localrc with a "Q_ML2_PLUGIN_MECHANISM_DRIVERS=..."
line, but it appears that the KEEP_LOCALRC option in devstack-gate is
broken (confirmed on #openstack-infra).

2. How do I streamline which tests are run? I tried adding "export
DEVSTACK_GATE_TEMPEST_REGEX=network" in the Jenkins job configuration but I
don't see any effect. (word on #openstack-infra is this option is not used
by them so status unknown.)

3. How do I have Jenkins copy the log files into a directory on the Jenkins
master node (that I can serve up with Apache)? This is left as an exercise
to the reader in the blog tutorial but I would love a cheat, since I am
getting plenty of exercise already :-).

I also have the meta-question: How can I test changes/fixes to
devstack-gate? I've attempted many times to modify how scripts work, but I
don't have a global understanding of the whole openstack-infra setup, and
somehow my changes always end up being clobbered by a fresh checkout from
the upstream repo on Github. That's crazy frustrating when it takes 10+
minutes to fire up a test via Jenkins even when I'm only e.g. trying to add
an "echo" to a shell script somewhere to see what's in an environment
variable at a certain point in a script. I'd love a faster edit-compile-run
loop, especially one that doesn't involve needing to get changes merged
upstream into the official openstack-infra repo.

I also have an issue that worries me. I once started seeing tempest tests
failing due to a resource leak where the kernel ran out of loopback devices
and that broke tempest. Here is what I saw:

root@egg-slave:~# losetup -a
/dev/loop0: [fc00]:5248399
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop1: [fc00]:5248409 (/opt/stack/data/stack-volumes-backing-file)
/dev/loop2: [fc00]:5248467
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop3: [fc00]:5248496
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop4: [fc00]:5248702
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop5: [fc00]:5248735
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop6: [fc00]:5248814
(/opt/stack/data/swift/drives/images/swift.img)
/dev/loop7: [fc00]:5248825
(/opt/stack/data/swift/drives/images/swift.img)

and trying to remove this with 'losetup -d ...' had no effect. I rebooted.
(I'm on Ubuntu 13.10.)

This kind of spurious error has the potential to cause my CI to start
casting negative votes (again) and upsetting everybody's workflows, not
because my tests have actually found a problem but just because it's a
non-trivial problem for me to keep a devstack-gate continuously
operational. I hope that doesn't happen, but with this level of
infrastructure complexity it does feel a little like playing russian
roulette that the next glitch in
devstack/devstack-gate/Jenkins/Gerrit/Zuul/Gearman/... will manifest itself
in the copy that's running on my server. 

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-13 Thread Solly Ross
@bnemec: I don't think that's been considered.  I'm actually one of the 
upstream maintainers for noVNC.  The only concern that I'd have with OpenStack 
adopting noVNC (there are other maintainers, as well as the author, so I'd have 
to check with them as well) is that there are a few other projects that use 
noVNC, so we'd need to make sure that no OpenStack-specific code gets merged 
into noVNC if we adopt it.  Other than that, though, adopting noVNC doesn't 
sound like a horrible idea.

Best Regards,
Solly Ross

- Original Message -
From: "Ben Nemec" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: openstack-in...@lists.openstack.org
Sent: Wednesday, March 12, 2014 3:38:19 PM
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning
noVNC from github.com/kanaka



On 2014-03-11 20:34, Joshua Harlow wrote: 


https://status.github.com/messages 
* 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
mitigations we have in place are proving effective in protecting us and we're 
hopeful that we've got this one resolved.' 
If you were cloning from github.com and not http://git.openstack.org then you 
were likely seeing some of the DDoS attack in action. 
Unfortunately I don't think novnc is in git.openstack.org because it's not an 
OpenStack project. I wonder if we should investigate adopting it (if the 
author(s) are amenable to that) since we're using the git version of it. Maybe 
that's already been considered and I just don't know about it. :-) 
-Ben 



From: Sukhdev Kapur < sukhdevka...@gmail.com > 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" < 
openstack-dev@lists.openstack.org > 
Date: Tuesday, March 11, 2014 at 4:08 PM 
To: "Dane Leblanc (leblancd)" < lebla...@cisco.com > 
Cc: "OpenStack Development Mailing List (not for usage questions)" < 
openstack-dev@lists.openstack.org >, " openstack-in...@lists.openstack.org " < 
openstack-in...@lists.openstack.org > 
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka 



I have noticed that even a clone of devstack has failed a few times within the last 
couple of hours - it had been running fairly smoothly until now. 
-Sukhdev 


On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur < sukhdevka...@gmail.com > 
wrote: 



[adding openstack-dev list as well ] 
I have noticed that this has started hitting my builds within the last few hours. I 
have noticed the exact same failures on almost 10 builds. 
Looks like something has happened within the last few hours - perhaps the load? 
-Sukhdev 


On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) < lebla...@cisco.com > 
wrote: 





Apologies if this is the wrong audience for this question... 



I'm seeing intermittent failures running stack.sh whereby 'git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC' is returning various 
errors. Below are 2 examples. 



Is this a known issue? Are there any localrc settings which might help here? 



Example 1: 



2014-03-11 15:00:33.779 | + is_service_enabled n-novnc 

2014-03-11 15:00:33.780 | + return 0 

2014-03-11 15:00:33.781 | ++ trueorfalse False 

2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False 

2014-03-11 15:00:33.783 | + '[' False = True ']' 

2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC 

2014-03-11 15:00:33.785 | + git_clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC master 

2014-03-11 15:00:33.786 | + GIT_REMOTE= https://github.com/kanaka/noVNC.git 

2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC 

2014-03-11 15:00:33.789 | + GIT_REF=master 

2014-03-11 15:00:33.790 | ++ trueorfalse False False 

2014-03-11 15:00:33.791 | + RECLONE=False 

2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]] 

2014-03-11 15:00:33.793 | + echo master 

2014-03-11 15:00:33.794 | + egrep -q '^refs' 

2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]] 

2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]] 

2014-03-11 15:00:33.797 | + git_timed clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC 

2014-03-11 15:00:33.798 | + local count=0 

2014-03-11 15:00:33.799 | + local timeout=0 

2014-03-11 15:00:33.801 | + [[ -n 0 ]] 

2014-03-11 15:00:33.802 | + timeout=0 

2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC 

2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'... 

2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200 

2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly 

2014-03-11 15:03:13.697 | fatal: early EOF 

2014-03-11 15:03:13.698 | fatal: index-pack failed 

2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]] 

2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone' 
https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]' 

2014-03-11 15:03:13.701 | + local exitcode=0 

2014-03-11 15:03:13.702 | [Call Trace] 

2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova 

2014-0

Re: [openstack-dev] [nova] [neutron] Top Gate Race - Bug 1248757 - test_snapshot_pattern fails with paramiko ssh EOFError

2014-03-13 Thread Dan Smith
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

> Here is the latest marked fail - 
> http://logs.openstack.org/28/79628/4/check/check-tempest-dsvm-neutron/11f8293/

So, looking at this a little bit, you can see from the n-cpu log that
it is getting failures when talking to neutron. Specifically, from
neutronclient:

throwing ConnectionFailed : timed out _cs_request
ConnectionFailed: Connection to neutron failed: Maximum attempts reached

From a brief look at neutronclient, it looks like it tries several
times to send a request, before it falls back to the above error.
Given the debug "timed out" log there, I would assume that neutron's
API isn't accepting the connection in time.

Later in the log, it successfully reaches neutron again, and then
falls over again in the same way. This is a parallel job, so load is
high, which makes me suspect just a load issue.

From talking to salv-orlando on IRC just now, it sounds like this
might just be some lock contention on the Neutron side, which is
slowing it down enough to cause failures occasionally.

--Dan
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTIbxoAAoJEBeZxaMESjNVB4IH/0wzaRhW/xkuUbFxNsSbRRt5
8EJdBkDJHfFQW6VQM6GqmvyZOVFkTLOhdMGF1dgWLBTTkGhmOVRiwdkim059sPd4
3EwUH3ZhSQg8n/rSAoS0rb1nFKaCt6D76DNJR5LXBCd89k6d/0q8SAkOgwNg7H82
oS17CjnLYvUfvF0JqSmKNt4ter1zMSXMZXNe8z09mKqZBTC4vNWIskv2yLgUbecv
Sb6NVc+HFkCk3t5MlKlM8fnLIoF2b4F0w0rCSJPV9txXWL2ijiaFIncyTYSFSuOp
NE1kdEAuZOIUnnZW3udEyb4QQS3HhRbVvRHJbnTAOVLGw5ijp+1V5FoipizA3v0=
=lVal
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >