Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-01 Thread Mike Spreitzer
Zane Bitter  wrote on 07/01/2014 06:58:47 PM:

> On 01/07/14 15:47, Mike Spreitzer wrote:
> > In AWS, an autoscaling group includes health maintenance functionality
> > --- both an ability to detect basic forms of failures and an ability to
> > react properly to failures detected by itself or by a load balancer.
> > What is the thinking about how to get this functionality in OpenStack?
> > Since OpenStack's OS::Heat::AutoScalingGroup has a more general member
> > type, what is the thinking about what failure detection means (and how
> > it would be accomplished, communicated)?
> >
> > I have not found design discussion of this; have I missed something?
> 
> Yes :)
> 
> https://review.openstack.org/#/c/95907/
> 
> The idea is that Convergence will provide health maintenance for _all_ 
> forms of resources in Heat. Once this is implemented, autoscaling gets 
> it for free by virtue of that fact that it manages resources using Heat 
> stacks.

Ah, right.  My reading of that design is not quite so simple.  Note that 
in the User Stories section it calls for different treatment of Compute 
instances depending on whether they are in a scaling group.  That's why I 
was thinking of this from a scaling group perspective.  But perhaps the 
more natural approach is to take the pervasive perspective and figure out 
how to suppress convergence for the Compute instances to which it should 
not apply.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][DevStack] Failed to install OpenStack with DevStack

2014-07-01 Thread Ken'ichi Ohmichi
Hi Jay,

I faced the same problem and got past it by adding the following line
to localrc:

LOGFILE=/opt/stack/logs/stack.sh.log
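For reference, the worlddump.py failure in the quoted traceback is just argparse rejecting `-d` with no value (which is what it receives when no log directory is configured). A minimal sketch mirroring the argument parsing shown in the usage line, with an illustrative directory path:

```python
import argparse

# Mirrors the usage line from the traceback: worlddump.py [-h] [-d DIR]
parser = argparse.ArgumentParser(prog="worlddump.py")
parser.add_argument("-d", "--dir", help="directory to dump debug output into")

try:
    parser.parse_args(["-d"])  # "-d" with no value, as in the failing run
except SystemExit:
    print("argparse rejected bare -d")

args = parser.parse_args(["-d", "/opt/stack/logs"])  # path is illustrative
print(args.dir)
```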

Thanks
Ken'ichi Ohmichi

---
2014-07-02 14:58 GMT+09:00 Jay Lau :
> Hi,
>
> Has anyone encountered this error when installing devstack? How did you resolve
> this issue?
>
> + [[ 1 -ne 0 ]]
> + echo 'Error on exit'
> Error on exit
> + ./tools/worlddump.py -d
> usage: worlddump.py [-h] [-d DIR]
> worlddump.py: error: argument -d/--dir: expected one argument
> 317.292u 180.092s 14:40.93 56.4%0+0k 195042+2987608io 1003pf+0w
>
> BTW: I was using ubuntu 12.04
>
>  gy...@mesos014.eng.platformlab.ibm.com-84: cat  /etc/*release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=12.04
> DISTRIB_CODENAME=precise
> DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"
>
> --
> Thanks,
>
> Jay
>



Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Osanai, Hisashi

On Wednesday, July 02, 2014 1:06 PM, Pete Zaitcev  wrote:

Thanks for the detailed explanation.

Let me clarify the behavior of swift:

(1) Use ext4 on the devices.
(2) The data on (1)'s filesystem gets corrupted.
(3) ext4's fsck moves the corrupt files to lost+found without a trace.
(4) Swift's auditors cannot detect (3), so hashes.pkl is not updated.

Is the above sequence correct?
If it is, I understand we'd be better off using xfs.
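A toy illustration (not Swift's actual auditor code) of why the ext4 case goes unnoticed: the auditor only invalidates an object's hash when it actually sees a corrupt file and quarantines it; a file that fsck has already moved to lost+found is simply absent, so nothing is invalidated and replication never repairs it.

```python
import os
import tempfile

def is_corrupt(path):
    # Stand-in for the auditor's checksum verification.
    with open(path, "rb") as f:
        return f.read() != b"good"

def quarantine(path):
    os.rename(path, path + ".quarantined")

def audit(partition_dir, hashes):
    """Return True if any object's hash entry was invalidated."""
    invalidated = False
    for obj in list(hashes):
        path = os.path.join(partition_dir, obj)
        if os.path.exists(path) and is_corrupt(path):
            quarantine(path)
            hashes.pop(obj)  # invalidated entry -> replication repairs it
            invalidated = True
        # A file that fsck already moved to lost+found simply isn't
        # there any more: neither branch fires, the stale hash entry
        # survives, and replication is never triggered.
    return invalidated

d = tempfile.mkdtemp()
with open(os.path.join(d, "obj1"), "wb") as f:
    f.write(b"bad data")                   # XFS-style: corrupt file still present
hashes = {"obj1": "h1", "obj2": "h2"}      # obj2 vanished (the ext4 fsck case)
print(audit(d, hashes))                    # True: obj1 was caught and quarantined
print("obj2" in hashes)                    # True: obj2's stale entry survives
```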

Thanks in advance,
Hisashi Osanai

> -Original Message-
> From: Pete Zaitcev [mailto:zait...@redhat.com]
> Sent: Wednesday, July 02, 2014 1:06 PM
> To: Osanai, Hisashi/小山内 尚
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Swift: reason for using xfs on devices
> 
> On Wed, 2 Jul 2014 00:16:42 +
> "Osanai, Hisashi"  wrote:
> 
> > So I think if performance of swift is more important than its
> > scalability, it is a good idea to use ext4.
> 
> The real problem is what happens when your drives corrupt the data.
> Both ext4 and XFS demonstrated good resilience, but XFS leaves empty
> files in directories where corrupt files were, while ext4's fsck moves
> them to lost+found without a trace. When that happens, Swift's auditors
> cannot know that something was amiss and the replication is not
> triggered (because hash lists are only updated by auditors).
> 
> Mr. You Yamagata worked on a patch to address this problem, but did
> not complete it. See here:
>  https://review.openstack.org/11452
> 
> -- Pete



[openstack-dev] [OpenStack][DevStack] Failed to install OpenStack with DevStack

2014-07-01 Thread Jay Lau
Hi,

Has anyone encountered this error when installing devstack? How did you
resolve this issue?

+ [[ 1 -ne 0 ]]
+ echo 'Error on exit'
Error on exit
+ ./tools/worlddump.py -d
usage: worlddump.py [-h] [-d DIR]
worlddump.py: error: argument -d/--dir: expected one argument
317.292u 180.092s 14:40.93 56.4%0+0k 195042+2987608io 1003pf+0w

BTW: I was using ubuntu 12.04

 gy...@mesos014.eng.platformlab.ibm.com-84: cat  /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"

-- 
Thanks,

Jay


[openstack-dev] [trove] guestagent config for overriding managers

2014-07-01 Thread Craig Vyvial
If you want to override the trove guestagent managers, it looks really
nasty to have EVERY manager on a single line here.

datastore_registry_ext = mysql:my.guestagent.datastore.mysql.manager.Manager,percona:my.guestagent.datastore.mysql.manager.Manager,...

This needs to be tidied up and split out some way.
Ideally each of these would be on its own line.

datastore_registry_ext = mysql:my.guestagent.datastore.mysql.manager.Manager
datastore_registry_ext = percona:my.guestagent.datastore.mysql.manager.Manager

or maybe...

datastores = mysql,percona
[mysql]
manager = my.guestagent.datastore.mysql.manager.Manager
[percona]
manager = my.guestagent.datastore.percona.manager.Manager

After typing out the second idea, I don't like it as much as something like
the first way.
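For what it's worth, a rough sketch of how the sectioned layout (the second idea) could be consumed. Section and option names here are illustrative, not Trove's actual option names:

```python
import configparser

# Hypothetical config following the per-datastore-section proposal.
SAMPLE = """
[DEFAULT]
datastores = mysql,percona

[mysql]
manager = my.guestagent.datastore.mysql.manager.Manager

[percona]
manager = my.guestagent.datastore.percona.manager.Manager
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# Build the datastore -> manager-class registry from the sections.
registry = {
    name.strip(): cfg[name.strip()]["manager"]
    for name in cfg["DEFAULT"]["datastores"].split(",")
}
print(registry["percona"])  # my.guestagent.datastore.percona.manager.Manager
```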

Thoughts?

Thanks,
- Craig Vyvial


Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Pete Zaitcev
On Wed, 2 Jul 2014 00:16:42 +
"Osanai, Hisashi"  wrote:

> So I think if performance of swift is more important than its scalability,
> it is a good idea to use ext4.

The real problem is what happens when your drives corrupt the data.
Both ext4 and XFS demonstrated good resilience, but XFS leaves empty
files in directories where corrupt files were, while ext4's fsck moves
them to lost+found without a trace. When that happens, Swift's auditors
cannot know that something was amiss and the replication is not
triggered (because hash lists are only updated by auditors).

Mr. You Yamagata worked on a patch to address this problem, but did
not complete it. See here:
 https://review.openstack.org/11452

-- Pete



Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-01 Thread Carl Baldwin
It could be that this behavior has merely become more noticeable
since Jenkins is now reverifying patch sets when new comments show up
and it sees that the patch set hasn't been verified recently.

Carl

On Tue, Jul 1, 2014 at 5:00 PM, Jeremy Stanley  wrote:
> On 2014-07-01 10:05:45 -0700 (-0700), Kevin Benton wrote:
> [...]
>> As I understand it, this behavior for the main OpenStack CI check
>> queue changed to the latter some time over the past few months.
> [...]
>
> I'm not sure what you think changed, but we've (upstream OpenStack
> CI) been testing proposed patches merged to their target branches
> for years...
> --
> Jeremy Stanley
>



Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-01 Thread Nathan Kinder


On 07/01/2014 07:48 PM, Robert Collins wrote:
> Wearing my HTTP fanatic hat - I think this is actually an important
> change to do. Skew like this can cause all sorts of odd behaviours in
> client libraries.

+1.  The current behavior of inconsistent response codes between the two
recommended methods of deploying Keystone should definitely be considered
a bug, IMHO.  Consistency in responses is important regardless of how
Keystone is deployed, and it seems obvious that we should modify the
responses that are out of spec to achieve consistency.

-NGK
> 
> -Rob
> 
> On 2 July 2014 13:13, Morgan Fainberg  wrote:
>> In the endeavor to move from the default deployment of Keystone being 
>> eventlet (in devstack) to Apache + mod_wsgi, I noticed that there was an odd 
>> mis-match on a single set of tempest tests relating to trusts. Under 
>> eventlet a HTTP 204 No Content was being returned, but under mod_wsgi an 
>> HTTP 200 OK was being returned. After some investigation it turned out that 
>> in some cases mod_wsgi will rewrite HEAD requests to GET requests under the 
>> hood; this is to ensure that the response from Apache is properly built when 
>> dealing with filtering. A number of wsgi applications just return nothing on 
>> a HEAD request, which is incorrect, so mod_wsgi tries to compensate.
>>
>> The HTTP spec states: "The HEAD method is identical to GET except that the 
>> server must not return any Entity-Body in the response. The metainformation 
>> contained in the HTTP headers in response to a HEAD request should be 
>> identical to the information sent in response to a GET request. This method 
>> can be used for obtaining metainformation about the resource identified by 
>> the Request-URI without transferring the Entity-Body itself. This method is 
>> often used for testing hypertext links for validity, accessibility, and 
>> recent modification.”
>>
>> Keystone has 3 Routes where HEAD will result in a 204 and GET will result in 
>> a 200.
>>
>> * /v3/auth/tokens
>> * /v2.0/tokens/{token_id}
>> * /OS-TRUST/trusts/{trust_id}/roles/{role_id} <--- This is the only one 
>> tested by Tempest.
>>
>> The easiest solution is to correct the case where we are out of line with 
>> the HTTP spec and ensure these cases return the same status code for GET and 
>> HEAD methods. This however changes the response status of a public REST API. 
>> Before we do this, I wanted to ensure the community, developers, and TC did 
>> not have an issue with this correction. We are not changing the class of 
>> status (e.g. 2xx to 4xx or vice-versa). This would simply be returning the 
>> same response between GET and HEAD requests. The fix for this would be to 
>> modify the 3 tempest tests in question to expect HTTP 200 instead of 204.
>>
>> There are a couple of other cases where Keystone registers a HEAD route but 
>> no GET route (these would be corrected at the same time, to ensure 
>> compatibility). The final correction is to enforce that Keystone would not 
>> send any data on HEAD requests (it is possible to do so, but we have not had 
>> it happen).
>>
>> You can see a proof-of-concept review that shows the tempest failures here: 
>> https://review.openstack.org/#/c/104026
>>
>> If this change (even though it is in violation of 
>> https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable) 
>> is acceptable, it will unblock the last of a very few things to have 
>> Keystone default deploy via devstack under Apache (and gate upon it). Please 
>> let me know if anyone has significant issues with this change / concerns as 
>> I would like to finish up this road to mod_wsgi based Keystone as early in 
>> the Juno cycle as possible.
>>
>> Cheers,
>> Morgan Fainberg
>>
>>
>> —
>> Morgan Fainberg
>>
>>
>>
> 
> 
> 



[openstack-dev] [Horizon] Custom Gerrit Dashboard for Juno-2

2014-07-01 Thread Tzu-Mainn Chen
Heya,

I was trying to help with the Horizon Juno-2 reviews, and found the launchpad link
(https://launchpad.net/horizon/+milestone/juno-2) a bit unhelpful in figuring out
which items actually need reviews, as some of the items marked 'Needs Code Review'
are works in progress, or already reviewed and waiting for approval, or abandoned.

So I've created a custom Gerrit dashboard for Horizon Juno-2 blueprints and bugs:

http://goo.gl/hmhMXW (the full URL is excessively long)

It is limited to reviews that are related to Juno-2 blueprints and bugs, and
relies on the topic branch being set correctly.  The top section lists changes
that only need a +A.  The bottom section lists changes that are waiting for a +2
review, and includes changes with -1s for informational purposes (it excludes
changes with a -2, though).

The URL was created using Sean Dague's gerrit-dash-creator with a script-generated
dashboard definition that takes in a milestone argument and scrapes the appropriate
Horizon launchpad milestone page.  That means that the above URL won't update if
and when items are added to or removed from Juno-2.  However, it can easily be
regenerated, and can be run for Juno-3 and future milestones.

Hope this is helpful!


Thanks,
Tzu-Mainn Chen



Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-01 Thread Robert Collins
Wearing my HTTP fanatic hat - I think this is actually an important
change to do. Skew like this can cause all sorts of odd behaviours in
client libraries.
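To make the skew concrete, here is a hedged, self-contained illustration (not Keystone's actual code) of a WSGI app with the GET/HEAD mismatch, plus a wrapper that does roughly what mod_wsgi does under the hood: answer HEAD by running the GET handler and discarding the body, so status and headers always match:

```python
from wsgiref.util import setup_testing_defaults

def skewed_app(environ, start_response):
    # Returns 204 for HEAD but 200 for GET -- the skew described above.
    if environ["REQUEST_METHOD"] == "HEAD":
        start_response("204 No Content", [])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"role assignment exists"]

def head_as_get(app):
    # Roughly mod_wsgi's compensation: run the GET handler for HEAD
    # requests and drop the entity body, keeping status/headers identical.
    def wrapper(environ, start_response):
        if environ["REQUEST_METHOD"] == "HEAD":
            environ["REQUEST_METHOD"] = "GET"
            app(environ, start_response)
            return [b""]  # same status and headers as GET, no body
        return app(environ, start_response)
    return wrapper

def status_of(app, method):
    environ = {}
    setup_testing_defaults(environ)
    environ["REQUEST_METHOD"] = method
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    app(environ, start_response)
    return captured["status"]

print(status_of(skewed_app, "GET"))                # 200 OK
print(status_of(skewed_app, "HEAD"))               # 204 No Content
print(status_of(head_as_get(skewed_app), "HEAD"))  # 200 OK
```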

-Rob

On 2 July 2014 13:13, Morgan Fainberg  wrote:
> In the endeavor to move from the default deployment of Keystone being 
> eventlet (in devstack) to Apache + mod_wsgi, I noticed that there was an odd 
> mis-match on a single set of tempest tests relating to trusts. Under eventlet 
> a HTTP 204 No Content was being returned, but under mod_wsgi an HTTP 200 OK 
> was being returned. After some investigation it turned out that in some cases 
> mod_wsgi will rewrite HEAD requests to GET requests under the hood; this is 
> to ensure that the response from Apache is properly built when dealing with 
> filtering. A number of wsgi applications just return nothing on a HEAD 
> request, which is incorrect, so mod_wsgi tries to compensate.
>
> The HTTP spec states: "The HEAD method is identical to GET except that the 
> server must not return any Entity-Body in the response. The metainformation 
> contained in the HTTP headers in response to a HEAD request should be 
> identical to the information sent in response to a GET request. This method 
> can be used for obtaining metainformation about the resource identified by 
> the Request-URI without transferring the Entity-Body itself. This method is 
> often used for testing hypertext links for validity, accessibility, and 
> recent modification.”
>
> Keystone has 3 Routes where HEAD will result in a 204 and GET will result in 
> a 200.
>
> * /v3/auth/tokens
> * /v2.0/tokens/{token_id}
> * /OS-TRUST/trusts/{trust_id}/roles/{role_id} <--- This is the only one 
> tested by Tempest.
>
> The easiest solution is to correct the case where we are out of line with the 
> HTTP spec and ensure these cases return the same status code for GET and HEAD 
> methods. This however changes the response status of a public REST API. 
> Before we do this, I wanted to ensure the community, developers, and TC did 
> not have an issue with this correction. We are not changing the class of 
> status (e.g. 2xx to 4xx or vice-versa). This would simply be returning the 
> same response between GET and HEAD requests. The fix for this would be to 
> modify the 3 tempest tests in question to expect HTTP 200 instead of 204.
>
> There are a couple of other cases where Keystone registers a HEAD route but 
> no GET route (these would be corrected at the same time, to ensure 
> compatibility). The final correction is to enforce that Keystone would not 
> send any data on HEAD requests (it is possible to do so, but we have not had 
> it happen).
>
> You can see a proof-of-concept review that shows the tempest failures here: 
> https://review.openstack.org/#/c/104026
>
> If this change (even though it is in violation of 
> https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable) 
> is acceptable, it will unblock the last of a very few things to have Keystone 
> default deploy via devstack under Apache (and gate upon it). Please let me 
> know if anyone has significant issues with this change / concerns as I would 
> like to finish up this road to mod_wsgi based Keystone as early in the Juno 
> cycle as possible.
>
> Cheers,
> Morgan Fainberg
>
>
> —
> Morgan Fainberg
>
>
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [neutron] Timeline for the remainder of Juno

2014-07-01 Thread Kyle Mestery
All:

I've updated the Neutron Juno Project Plan [1] page with a timeline
for the remainder of Juno. Similar to Nova, we're going to participate
in both Spec Proposal Deadline (SPD) and Spec Approval Deadline (SAD).
The dates are listed on the wiki, but since these are new, I'm calling
them out here:

Neutron SPD: July 10
Neutron SAD: July 20

This means if you don't have a spec proposed in neutron-specs by July
10, it's unlikely to be approved for Juno. If your spec doesn't merge
by July 20, same situation.

We already have 32 specs for Juno-2, and a handful in Juno-3 already.
I expect some things to slip from Juno-2 into Juno-3. On top of this,
as of this morning, our spec backlog is 102. I expect to spend some
time next week beginning to clean this out by placing -2's on specs
which won't make Juno and should be targeted for the "K" release.

If you have questions, please reply here or find me in #openstack-neutron.

Thanks!
Kyle

[1] 
https://wiki.openstack.org/wiki/NeutronJunoProjectPlan#Neutron_Release_Timelines



Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-01 Thread Adrian Otto
Zane,

If you happen to have a link to this blueprint, could you reply with it? I
took a look, but did not find it.

I’d like to suggest that the implementation allow apps to call
unauthenticated (signed) webhook URLs in order to trigger a scale up/down
event within a scaling group. This is about the simplest possible API to
allow any application to control its own elasticity.
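A sketch of what "unauthenticated (signed) webhook URLs" could mean in practice: the scaling service embeds an HMAC of the group and action in the URL, so any application holding the URL can trigger the event without Keystone credentials. The host, path, and secret below are illustrative, not an actual Heat API:

```python
import hashlib
import hmac

SECRET = b"per-webhook-secret"  # held by the scaling service, never sent

def make_webhook(group_id, action):
    msg = f"{group_id}:{action}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    # Hypothetical endpoint; possession of the signed URL is the credential.
    return f"https://heat.example.com/v1/signal/{group_id}/{action}?signature={sig}"

def verify(group_id, action, signature):
    msg = f"{group_id}:{action}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

url = make_webhook("asg-123", "scale_up")
signature = url.rsplit("=", 1)[1]
print(verify("asg-123", "scale_up", signature))    # True
print(verify("asg-123", "scale_down", signature))  # False: wrong action
```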

Thanks,

Adrian

On Jul 1, 2014, at 6:09 PM, Mike Spreitzer <mspre...@us.ibm.com> wrote:

Zane Bitter <zbit...@redhat.com> wrote on 07/01/2014 07:05:15 PM:

> On 01/07/14 16:30, Mike Spreitzer wrote:
> > Thinking about my favorite use case for lifecycle plug points for cloud
> > providers (i.e., giving something a chance to make a holistic placement
> > decision), it occurs to me that one more is needed: a scale-down plug
> > point.  A plugin for this point has a distinctive job: to decide which
> > group member(s) to remove from a scaling group (i.e.,
> > OS::Heat::AutoScalingGroup or OS::Heat::InstanceGroup or
> > OS::Heat::ResourceGroup or AWS::AutoScaling::AutoScalingGroup).  The
> > plugin's signature could be something like this: given a list of group
> > members and a number to remove, return the list of members to remove
> > (or, equivalently, return the list of members to keep).  What do you think?
>
> I think you're not thinking big enough ;)

I agree, I was taking only a small bite in hopes of a quick success.
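The plugin signature proposed above could be sketched roughly like this (illustrative names, not Heat's actual API): given the group members and a count, return which members to remove.

```python
import abc

class ScaleDownPolicy(abc.ABC):
    @abc.abstractmethod
    def select_victims(self, members, count):
        """Given the group's members and how many to remove,
        return the members to remove."""

class OldestFirst(ScaleDownPolicy):
    # A trivial built-in policy; a provider plugin could instead consult
    # a holistic placement service before answering.
    def select_victims(self, members, count):
        # members: list of (member_id, created_at) pairs
        ordered = sorted(members, key=lambda m: m[1])
        return [member_id for member_id, _ in ordered[:count]]

members = [("i-1", 100), ("i-2", 50), ("i-3", 75)]
print(OldestFirst().select_victims(members, 2))  # ['i-2', 'i-3']
```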

> There exist a whole class of applications that would benefit from
> autoscaling but which are not quite stateless. (For example, a PaaS.) So
> it's not enough to have plugins that place the choice of which node to
> scale down under operator control; in fact it needs to be under
> _application_ control.

Exactly.  There are two different roles that want such control; in general, 
neither is happy if only the other gets it.  Now the question becomes, how do 
we get them to play nice together?  In the case of TripleO there may be an 
exceptionally easy out: the owner of an application deployed on the undercloud 
may well be the same as the provider of the undercloud (i.e., the operator 
whose end goal is to provide the overcloud(s) ).

> This is on the roadmap, and TripleO really needs it, so hopefully it
> will happen in Juno.

I assume you mean giving this control to the application, which I presume 
amounts to giving it to the template author.  Is this written up somewhere?

Thanks,
Mike



Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-07-01 Thread Zhenguo Niu
Thank you everyone, I'll do my best!

Sent from my iPhone

> On Jul 1, 2014, at 22:37, "Lyle, David" wrote:
> 
> Welcome Zhenguo and Ana to Horizon core.
> 
> David
> 
> 
>> On 6/20/14, 3:17 PM, "Lyle, David"  wrote:
>> 
>> I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.
>> 
>> Zhenguo has been a prolific reviewer for the past two releases providing
>> high quality reviews. And providing a significant number of patches over
>> the past three releases.
>> 
>> Ana has been a significant reviewer in the Icehouse and Juno release
>> cycles. She has also contributed several patches in this timeframe to both
>> Horizon and tuskar-ui.
>> 
>> Please feel free to respond in public or private your support or any
>> concerns.
>> 
>> Thanks,
>> David
>> 
>> 
> 
> 



Re: [openstack-dev] [neutron]Performance of security group

2014-07-01 Thread shihanzhang


hi Miguel Angel Ajo Pelayo!
I agree with you and will modify my spec, but I will also optimize the RPC
from the security group agent to the neutron server.
Now the model is 'port[rule1,rule2...], port...'; I will change it to
'port[sg1, sg2..]', which can reduce the size of the RPC response message
from the neutron server to the security group agent.
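The 'port[sg1, sg2..]' idea could look roughly like the sketch below: the server ships each security group's rules and member IPs once, plus a port-to-group mapping, and the agent expands the combinations locally. Field names are illustrative, not Neutron's actual RPC schema:

```python
# Hypothetical compact payload shipped once per sync.
security_groups = {
    "SG1": {"ips": ["10.0.0.2", "10.0.0.3"],
            "rules": [{"proto": "tcp", "port": 22}]},
    "SG2": {"ips": ["10.0.0.4"],
            "rules": [{"proto": "icmp"}]},
}
ports = {"port1": ["SG1", "SG2"], "port2": ["SG1"]}

def rules_for_port(port_id):
    # Agent-side expansion replaces the per-port rule lists the server
    # would otherwise repeat for every port in the response.
    out = []
    for sg in ports[port_id]:
        out.extend(security_groups[sg]["rules"])
    return out

print(len(rules_for_port("port1")))  # 2
print(len(rules_for_port("port2")))  # 1
```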
At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"  wrote:
>
>
>Ok, I was talking with Édouard @ IRC, and as I have time to work
>on this problem, I could file a specific spec for the security
>group RPC optimization, a masterplan in two steps:
>
>1) Refactor the current RPC communication for security_groups_for_devices,
>   which could be used for full syncs, etc..
>
>2) Benchmark && make use of a fanout queue per security group to make
>   sure only the hosts with instances on a certain security group get
>   the updates as they happen.
>
>@shihanzhang do you find it reasonable?
>
>
>
>- Original Message -
>> - Original Message -
>> > @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
>> > 
>> > Another idea:
>> > What about creating a RPC topic per security group (quid of the RPC topic
>> > scalability) on which an agent subscribes if one of its ports is associated
>> > to the security group?
>> > 
>> > Regards,
>> > Édouard.
>> > 
>> > 
>> 
>> 
>> Hmm, Interesting,
>> 
>> @Nachi, I'm not sure I fully understood:
>> 
>> 
>> SG_LIST [ SG1, SG2]
>> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
>> port[SG_ID1, SG_ID2], port2 , port3
>> 
>> 
>> Probably we may need to include also the
>> SG_IP_LIST = [SG_IP1, SG_IP2] ...
>> 
>> 
>> and let the agent do all the combination work.
>> 
>> Something like this could make sense?
>> 
>> Security_Groups = {SG1:{IPs:[],RULES:[],
>>SG2:{IPs:[],RULES:[]}
>>   }
>> 
>> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
>> 
>> 
>> @Edouard, actually I like the idea of having the agent subscribed
>> to security groups they have ports on... That would remove the need to
>> include
>> all the security groups information on every call...
>> 
>> But would need another call to get the full information of a set of security
>> groups
>> at start/resync if we don't already have any.
>> 
>> 
>> > 
>> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
>> > wrote:
>> > 
>> > 
>> > 
>> > hi Miguel Ángel,
>> > I am very agree with you about the following point:
>> > >  * physical implementation on the hosts (ipsets, nftables, ... )
>> > --this can reduce the load of compute node.
>> > >  * rpc communication mechanisms.
>> > -- this can reduce the load of neutron server
>> > can you help me to review my BP specs?
>> > 
>> > 
>> > 
>> > 
>> > 
>> > 
>> > 
>> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com >
>> > wrote:
>> > >
>> > >  Hi it's a very interesting topic, I was getting ready to raise
>> > >the same concerns about our security groups implementation, shihanzhang
>> > >thank you for starting this topic.
>> > >
>> > >  Not only at low level where (with our default security group
>> > >rules -allow all incoming from 'default' sg- the iptable rules
>> > >will grow in ~X^2 for a tenant, and, the
>> > >"security_group_rules_for_devices"
>> > >rpc call from ovs-agent to neutron-server grows to message sizes of
>> > >>100MB,
>> > >generating serious scalability issues or timeouts/retries that
>> > >totally break neutron service.
>> > >
>> > >   (example trace of that RPC call with a few instances
>> > > http://www.fpaste.org/104401/14008522/ )
>> > >
>> > >  I believe that we also need to review the RPC calling mechanism
>> > >for the OVS agent here, there are several possible approaches to breaking
>> > >down (or/and CIDR compressing) the information we return via this api
>> > >call.
>> > >
>> > >
>> > >   So we have to look at two things here:
>> > >
>> > >  * physical implementation on the hosts (ipsets, nftables, ... )
>> > >  * rpc communication mechanisms.
>> > >
>> > >   Best regards,
>> > >Miguel Ángel.
>> > >
>> > >- Mensaje original -
>> > >
>> > >> Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
>> > >> It also based on the rule set mechanism.
>> > >> The issue in that proposition, it's only stable since the begin of the
>> > >> year
>> > >> and on Linux kernel 3.13.
>> > >> But there lot of pros I don't list here (leverage iptables limitation,
>> > >> efficient update rule, rule set, standardization of netfilter
>> > >> commands...).
>> > >
>> > >> Édouard.
>> > >
>> > >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com >
>> > >> wrote:
>> > >
>> > >> > we have done some tests, but have different result: the performance is
>> > >> > nearly
>> > >> > the same for empty and 5k rules in iptable, but huge gap between
>> > >> > enable/disable iptable hook on linux bridge
>> > >> 
>> > >
>> > >> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzh...@126.com
>> > >

[openstack-dev] [cinder][tempest]how to get cinder coverage using tempest?

2014-07-01 Thread iKhan
I have manually deployed cinder and it is up and running. Now I want to get
live code coverage for cinder using tempest. I tried running each cinder
service as "coverage run /usr/bin/cinder-api"; cinder-api started, but when
I tried the same with cinder-volume (with volume types configured) it failed.

Am I doing something wrong when volume types are configured?

-- 
Thanks,
IK


Re: [openstack-dev] DVR and FWaaS integration

2014-07-01 Thread Wuhongning


From: Carl Baldwin [c...@ecbaldwin.net]
Sent: Monday, June 30, 2014 3:43 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] DVR and FWaaS integration

On Mon, Jun 30, 2014 at 3:43 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:

In line...

On Jun 25, 2014 2:02 PM, "Yi Sun" <beyo...@gmail.com> wrote:
>
> All,
> During last summit, we were talking about the integration issues between DVR 
> and FWaaS. After the summit, I had one IRC meeting with DVR team. But after 
> that meeting I was tied up with my work and did not get time to continue to 
> follow up the issue. To not slow down the discussion, I'm forwarding out the 
> email that I sent out as the follow up to the IRC meeting here, so that 
> whoever may be interested on the topic can continue to discuss about it.
>
> First some background about the issue:
> In the normal case, FW and router are running together inside the same box so 
> that FW can get route and NAT information from the router component. And in 
> order to have FW to function correctly, FW needs to see the both directions 
> of the traffic.
> DVR is designed in an asymmetric way that each DVR only sees one leg of the 
> traffic. If we build FW on top of DVR, then FW functionality will be broken. 
> We need to find a good method to have FW to work with DVR.
>
> ---forwarding email---
>  During the IRC meeting, we think that we could force the traffic to the FW 
> before DVR. Vivek had more detail; He thinks that since the br-int knowns 
> whether a packet is routed or switched, it is possible for the br-int to 
> forward traffic to FW before it forwards to DVR. The whole forwarding process 
> can be operated as part of service-chain operation. And there could be a 
> FWaaS driver that understands the DVR configuration to setup OVS flows on the 
> br-int.

I'm not sure what this solution would look like.  I'll have to get the details 
from Vivek.  It seems like this would effectively centralize the traffic that 
we worked so hard to decentralize.

It did cause me to wonder about something: would it be possible to restore
symmetry to the traffic by directing any response traffic back to the DVR
component which handled the request traffic? I guess this would require
running conntrack on the target side to track and identify return traffic.
I'm not sure how this would be inserted into the data path yet. This is a
half-baked idea here.

> The concern is that normally firewall and router are integrated together so 
> that firewall can make right decision based on the routing result. But what 
> we are suggesting is to split the firewall and router into two separated 
> components, hence there could be issues. For example, FW will not be able to 
> get enough information to setup zone. Normally Zone contains a group of 
> interfaces that can be used in the firewall policy to enforce the direction 
> of the policy. If we forward traffic to firewall before DVR, then we can only 
> create policy based on subnets not the interface.
> Also, I’m not sure if we have ever planed to support SNAT on the DVR, but if 
> we do, then it depends on at which point we forward traffic to the FW, the 
> subnet may not even work for us anymore (even DNAT could have problem too).

I agree that splitting the firewall from routing presents some problems that 
may be difficult to overcome.  I don't know how it would be done while 
maintaining the benefits of DVR.

Another half-baked idea:  could multi-primary state replication be used between 
DVR components to enable firewall operation?  Maybe work on the HA router 
blueprint -- which is long overdue to be merged, BTW -- could be leveraged.  The 
number of DVR "pieces" could easily far exceed that of active firewall 
components normally used in such a configuration so there could be a major 
scaling problem.  I'm really just thinking out loud here.

Maybe you (or others) have other ideas?

I think an OVS-based firewall may resolve the conflict between "distributed" and 
"stateful". Redirecting response traffic from OVS to iptables will bring a lot of 
challenges, such as how to restore the source MAC when traffic returns to br-int 
from iptables.
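For illustration only: OVS later gained conntrack integration (the `ct()` action and `ct_state` match, available from OVS 2.5 onward, i.e. after this thread), which makes a stateful firewall entirely inside br-int possible without a detour through iptables. A hypothetical flow sketch for an "allow outbound, permit only return traffic inbound" policy — table numbers and priorities are arbitrary:

```
# Untracked IP packets: run them through conntrack, resubmit to table 1.
table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)
# New connections allowed by policy: commit the connection state and forward.
table=1,priority=100,ip,ct_state=+trk+new,actions=ct(commit),NORMAL
# Established return traffic: forward without re-evaluating policy.
table=1,priority=90,ip,ct_state=+trk+est,actions=NORMAL
```

Such a flows file could be loaded with `ovs-ofctl add-flows br-int <file>` on a conntrack-capable OVS.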


In the future, security groups and FWaaS should be merged, presenting a unified 
firewall API to the user.


>
> --- end of forwarding 
>
> YI
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Juno priorities and spec review timeline

2014-07-01 Thread Wan-yen Hsu
Hi Devananda,

  I noticed that firmware update is not on the priority list.  I thought
there was strong interest in this capability.  The design spec of
out-of-band firmware update has been submitted
https://review.openstack.org/#/c/100842/.  We will address the review
comments and uplaod a new version soon.  Is it possible to add this item to
the Juno list?  Thanks!

Regards,
iron1

>> From: Devananda van der Veen
>> Date: Tue, Jul 1, 2014 at 3:42 AM
>> Subject: [openstack-dev] [Ironic] Juno priorities and spec review timeline
>> To: OpenStack Development Mailing List
>>
>> Hi all!
>>
>> We're roughly at the midway point between summit and release, and I
>> feel that's a good time to take a look at our progress compared to the
>> goals we set out at the design summit. To that end, I re-opened my
>> summit notes about what features we had prioritized in Atlanta, and
>> engaged many of the core reviewers in a discussion last Friday to
>> estimate what we'll have time to review and land in the remainder of
>> this cycle. Based on that, I've created this spreadsheet to represent
>> those expectations and our current progress towards what we think we
>> can achieve this cycle:
>>
>> https://docs.google.com/spreadsheets/d/1Hxyfy60hN_Fit0b-plsPzK6yW3ePQC5IfwuzJwltlbo
>>
>> Aside from several cleanup- and test-related tasks, these goals
>> correlate to spec reviews that have already been proposed. I've
>> crossed off ones which we discussed at the summit, but for which no
>> proposal has yet been submitted. The spec-review team and I will be
>> referring to this to help us prioritize spec reviews. While I am not
>> yet formally blocking proposals which do not fit within this list of
>> priorities, the review team is working with a large back-log and
>> probably won't have time to review anything else this cycle. If you're
>> concerned that you won't be able to land your favorite feature in
>> Juno, the best thing you can do is to participate in reviewing other
>> people's code, join the core team, and help us accelerate the
>> development process of "K".
>>
>> Borrowing a little from Nova's timeline, I have proposed the following
>> timeline for Ironic. Note that dates listed are Thursdays, and numbers
>> in parentheses are weeks until feature freeze.
>>
>> You may also note that I'll be offline for two weeks immediately prior
>> to the Juno-3 milestone, which is another reason why I'd like the core
>> review team to have a solid plan (read: approved specs) in place by
>> Aug 14.
>>
>> July 3 (-9): spec review day on Wednesday (July 2)
>>  focus on landing specs for our priorities:
>>  https://docs.google.com/spreadsheets/d/1Hxyfy60hN_Fit0b-plsPzK6yW3ePQC5IfwuzJwltlbo
>>
>> Jul 24 (-6): Juno-2 milestone tagged
>>  new spec proposal freeze
>>
>> Jul 31 (-5): midcycle meetup (July 27-30)
>>  https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
>>
>> Aug 14 (-3): last spec review day on Wednesday (Aug 13)
>>
>> Aug 21 (-2): PTL offline all week
>>
>> Aug 28 (-1): PTL offline all week
>>
>> Sep  4 ( 0): Juno-3 milestone tagged
>>  Feature freeze
>>  K opens for spec proposals
>>  Unmerged J spec proposals must rebase on K
>>  Merged J specs with no code proposed are deleted and may be re-proposed for K
>>  Merged J specs with code proposed need to be reviewed for feature-freeze-exception
>>
>> Sep 25 (+3): RC 1 build expected
>>  K spec reviews start
>>
>> Oct 16 (+6): Release!
>>
>> Oct 30 (+8): K summit spec proposal freeze
>>  K summit sessions should have corresponding spec proposal
>>
>> Nov  6 (+9): K design summit
>>
>> Thanks!
>> Devananda
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-01 Thread Morgan Fainberg
In the endeavor to move from the default deployment of Keystone being eventlet 
(in devstack) to Apache + mod_wsgi, I noticed that there was an odd mis-match 
on a single set of tempest tests relating to trusts. Under eventlet a HTTP 204 
No Content was being returned, but under mod_wsgi an HTTP 200 OK was being 
returned. After some investigation it turned out that in some cases mod_wsgi 
will rewrite HEAD requests to GET requests under the hood; this is to ensure 
that the response from Apache is properly built when dealing with filtering. A 
number of wsgi applications just return nothing on a HEAD request, which is 
incorrect, so mod_wsgi tries to compensate.

The HTTP spec states: "The HEAD method is identical to GET except that the 
server must not return any Entity-Body in the response. The metainformation 
contained in the HTTP headers in response to a HEAD request should be identical 
to the information sent in response to a GET request. This method can be used 
for obtaining metainformation about the resource identified by the Request-URI 
without transferring the Entity-Body itself. This method is often used for 
testing hypertext links for validity, accessibility, and recent modification."

Keystone has 3 Routes where HEAD will result in a 204 and GET will result in a 
200.

* /v3/auth/tokens
* /v2.0/tokens/{token_id}
* /OS-TRUST/trusts/{trust_id}/roles/{role_id} <--- This is the only one tested 
by Tempest.

The easiest solution is to correct the case where we are out of line with the 
HTTP spec and ensure these cases return the same status code for GET and HEAD 
methods. This however changes the response status of a public REST API. Before 
we do this, I wanted to ensure the community, developers, and TC did not have 
an issue with this correction. We are not changing the class of status (e.g. 
2xx to 4xx or vice-versa). This would simply be returning the same response 
between GET and HEAD requests. The fix for this would be to modify the 3 
tempest tests in question to expect HTTP 200 instead of 204.
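As a minimal illustration of the fix (a hypothetical handler, not Keystone's actual code): let HEAD share the GET code path and status, stripping only the entity body:

```python
def get_trust_role(trust_id, role_id):
    # Hypothetical lookup standing in for Keystone's real trust/role logic;
    # it would raise (-> 4xx) identically for GET and HEAD.
    return {"trust_id": trust_id, "role_id": role_id}

def handle(method, trust_id, role_id):
    """Return (status, body); HEAD mirrors GET's status with an empty body."""
    data = get_trust_role(trust_id, role_id)
    if method == "HEAD":
        return 200, b""  # identical status to GET, no entity body, per the HTTP spec
    return 200, repr(data).encode()

assert handle("HEAD", "t1", "r1")[0] == handle("GET", "t1", "r1")[0] == 200
```

With this shape there is no route where HEAD can drift to a 204 while GET returns a 200.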

There are a couple of other cases where Keystone registers a HEAD route but no 
GET route (these would be corrected at the same time, to ensure compatibility). 
The final correction is to enforce that Keystone would not send any data on 
HEAD requests (it is possible to do so, but we have not had it happen).

You can see a proof-of-concept review that shows the tempest failures here: 
https://review.openstack.org/#/c/104026

If this change (even though it is in violation of 
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable) is 
acceptable, it will unblock the last of a very few things to have Keystone 
default deploy via devstack under Apache (and gate upon it). Please let me know 
if anyone has significant issues with this change / concerns as I would like to 
finish up this road to mod_wsgi based Keystone as early in the Juno cycle as 
possible.

Cheers,
Morgan Fainberg





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-01 Thread Mike Spreitzer
Zane Bitter  wrote on 07/01/2014 07:05:15 PM:

> On 01/07/14 16:30, Mike Spreitzer wrote:
> > Thinking about my favorite use case for lifecycle plug points for cloud
> > providers (i.e., giving something a chance to make a holistic placement
> > decision), it occurs to me that one more is needed: a scale-down plug
> > point.  A plugin for this point has a distinctive job: to decide which
> > group member(s) to remove from a scaling group (i.e.,
> > OS::Heat::AutoScalingGroup or OS::Heat::InstanceGroup or
> > OS::Heat::ResourceGroup or AWS::AutoScaling::AutoScalingGroup).  The
> > plugin's signature could be something like this: given a list of group
> > members and a number to remove, return the list of members to remove
> > (or, equivalently, return the list of members to keep).  What do you think?
> 
> I think you're not thinking big enough ;)

I agree, I was taking only a small bite in hopes of a quick success.

> There exist a whole class of applications that would benefit from 
> autoscaling but which are not quite stateless. (For example, a PaaS.) So 
> it's not enough to have plugins that place the choice of which node to 
> scale down under operator control; in fact it needs to be under 
> _application_ control.

Exactly.  There are two different roles that want such control; in 
general, neither is happy if only the other gets it.  Now the question 
becomes, how do we get them to play nice together?  In the case of TripleO 
there may be an exceptionally easy out: the owner of an application 
deployed on the undercloud may well be the same as the provider of the 
undercloud (i.e., the operator whose end goal is to provide the 
overcloud(s) ).

> This is on the roadmap, and TripleO really needs it, so hopefully it 
> will happen in Juno.

I assume you mean giving this control to the application, which I presume 
amounts to giving it to the template author.  Is this written up 
somewhere?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to unit test scripts outside of nova/nova?

2014-07-01 Thread Matt Riedemann



On 7/1/2014 4:03 PM, Matthew Treinish wrote:

On Tue, Jul 01, 2014 at 03:21:06PM -0500, Matt Riedemann wrote:

As part of the enforce-unique-instance-uuid-in-db blueprint [1] I'm writing
a script to scan the database and find any NULL instance_uuid records that
will cause the new database migration to fail so that operators can run this
before they run the migration, otherwise the migration blocks if these types
of records are found.

I have the script written [2], but wanted to also write unit tests for it. I
guess I assumed the script would go under nova/tools/db like the
schema_diff.py script, but I'm not sure how to unit test anything outside of
the nova/nova tree.

Nova's testr configuration is only discovering tests within nova/tests [3].
But I don't think I can put the unit tests under nova/tests and then import
the module from nova/tools.


So we hit a similar issue in tempest when we wanted to unit test some utility
scripts in tempest/tools. Changing the discovery path to find tests outside of
nova/tests is actually a pretty easy change[4], but I don't think that will solve
the use case with tox. What happened when we tried to do this in the tempest use
case was that when the project was getting installed the tools dir wasn't
included, so when we ran with tox it couldn't find the files we were trying to
test. The solution we came up with there was to put the script under the tempest
namespace and add unit tests in tempest/tests. (We also added an entry point for
the script to expose it as a command when tempest was installed.)



So I'm a bit stuck.  I could take the easy way out and just throw the script
under nova/db/sqlalchemy/migrate_repo and put my unit tests under
nova/tests/db/, and I'd also get pep8 checking with that, but that doesn't
seem right - but I'm also possibly over-thinking this.

Anyone else have any ideas?


I think it really comes down to how you want to present the utility to the end
users. To enable unit testing it, it's just easier to put it in the nova
namespace. I couldn't come up with a good way to get around the
install/namespace issue. (maybe someone else who is more knowledgeable here has
a good way to get around this) So then you can symlink it to the tools dir or
add an entry point (or bake it into nova-manage) to make it easy to find. I
think the issue with putting it in nova/db/sqlalchemy/migrate_repo is that it's
hard to find.



[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db
[2] https://review.openstack.org/#/c/97946/
[3] http://git.openstack.org/cgit/openstack/nova/tree/.testr.conf#n5

[4] 
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/test_discover.py

-Matt Treinish



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Matt,

Thanks for the help, I completely forgot about making the new script an 
entry point in setup.cfg; that's a good idea.
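For anyone following along, a setup.cfg entry point for such a tool would look roughly like this (the command and module names here are hypothetical, not the ones in the patch):

```ini
# Hypothetical console_scripts entry point for the scan tool.
[entry_points]
console_scripts =
    nova-null-instance-uuid-scan = nova.cmd.null_instance_uuid_scan:main
```

After `pip install`, the named command is generated on $PATH and calls the module's `main()`.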


Before I saw this I did move the script under 
nova/db/sqlalchemy/migrate_repo and moved the tests under nova/tests/db 
and have that all working now, so will probably just move forward with 
that rather than try to do some black magic with test discovery and 
getting the module imported.
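The core of such a scan is just a query for non-deleted rows with a NULL instance_uuid; sketched here against an in-memory SQLite schema rather than Nova's real models:

```python
import sqlite3

def find_null_instance_uuid_records(conn):
    # Rows like these would block the unique-instance-uuid migration.
    cur = conn.execute(
        "SELECT id FROM instances WHERE instance_uuid IS NULL AND deleted = 0")
    return [row[0] for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances "
             "(id INTEGER PRIMARY KEY, instance_uuid TEXT, deleted INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO instances (instance_uuid, deleted) VALUES (?, ?)",
                 [("uuid-1", 0), (None, 0), (None, 1)])  # ids 1, 2, 3

print(find_null_instance_uuid_records(conn))  # → [2]  (NULL but soft-deleted rows are skipped)
```

Whether soft-deleted (deleted != 0) rows should also be reported is a judgment call for the real script.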


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Osanai, Hisashi

On Tuesday, July 01, 2014 9:44 PM, Anne Gentle  wrote:

Thank you for the quick response.

> The install guide only recommends a single path, not many options, to ensure 
> success.

I understand the point for writing the document.

> There's a little bit of discussion in the developer docs:
> http://docs.openstack.org/developer/swift/deployment_guide.html#filesystem-considerations
> I think that packstack gives the option of using xfs or ext4, so there must 
> be sufficient testing for ext4.

Thank you for this info.
In the discussion, there is the following sentence:
 "After thorough testing with our use cases and hardware configurations, XFS 
was the best all-around choice."

I would like to know what kind of testing I should do from a filesystem point 
of view.

Background of this question:
I read the following performance comparison of ext4 and xfs. There are some 
benchmark results, and it seems that the performance of ext4 is better than 
that of xfs (Eric Whitney's FFSB testing). So I think that if the performance 
of Swift is more important than its scalability, it is a good idea to use ext4.

http://www.linuxtag.org/2013/fileadmin/www.linuxtag.org/slides/Heinz_Mauelshagen_-_Which_filesystem_should_I_use_.e204.pdf
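One way to run such a filesystem-level comparison yourself (a sketch only; the path, sizes, and job parameters below are placeholders, not recommendations) is a fio job approximating Swift's many-small-random-writes pattern, run once on an XFS mount and once on ext4:

```ini
; Hypothetical fio job file: run against each candidate filesystem and compare.
[swift-like-writes]
directory=/srv/node/sdb1   ; placeholder mount point for the filesystem under test
rw=randwrite
bs=64k
size=1g
numjobs=4
fsync=32                   ; fsync periodically, as durability-minded object servers do
group_reporting
```

Invoked as `fio jobfile.ini`, this reports aggregate IOPS and latency for each run.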

Best Regards,
Hisashi Osanai

From: Anne Gentle [mailto:a...@openstack.org] 
Sent: Tuesday, July 01, 2014 9:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Swift: reason for using xfs on devices


On Tue, Jul 1, 2014 at 6:21 AM, Osanai, Hisashi  
wrote:

Hi,

In the following document, there is a setup up procedure for storage and
it seems that swift recommends to use xfs.

http://docs.openstack.org/icehouse/install-guide/install/yum/content/installing-and-configuring-storage-nodes.html
===
2. For each device on the node that you want to use for storage, set up the
XFS volume (/dev/sdb is used as an example). Use a single partition per drive.
For example, in a server with 12 disks you may use one or two disks for the
 operating system which should not be touched in this step. The other 10 or 11
disks should be partitioned with a single partition, then formatted in XFS.
===

I would like to know the reason why swift recommends xfs rather than ext4?

The install guide only recommends a single path, not many options, to ensure 
success.

There's a little bit of discussion in the developer docs:
http://docs.openstack.org/developer/swift/deployment_guide.html#filesystem-considerations

I think that packstack gives the option of using xfs or ext4, so there must be 
sufficient testing for ext4. 

Anne
 

I think ext4 has reasonable performance and can support 1EiB from design point 
of view.
# The max file system size of ext4 is not enough???

Thanks in advance,
Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Spec Review Day July 9th

2014-07-01 Thread Ken'ichi Ohmichi
2014-07-02 2:30 GMT+09:00 Matthew Treinish :
> Hi Everyone,
>
> During the last qa meeting we were discussing ways to try an increase 
> throughput
> on the qa-specs repo. We decided to have a dedicated review day for the specs
> repo. Right now specs approval is a relatively slow process, reviews seem to
> take too long (an average wait time of ~12 days) and responses are often 
> taking
> an equally long time.  So to try and stimulate increased velocity for the 
> specs
> process having a day which is mostly dedicated to reviewing specs should help.
> At the very least we should work through most of the backlog.
>
> I think having the review day scheduled next Wednesday, July 9th, should work
> well for most people. I feel that having the review day occur before the
> mid-cycle meet-up in a couple weeks would be best. With the US holiday this
> Friday, and giving more than a couple of days notice I figured having it next
> week was best.
>
> So if if everyone could spend that day concentrating on reviewing qa-specs
> proposals I think we'll start to work through the backlog. Of course we'll 
> also
> need everyone that submitted specs to be around and active if we want to clear
> out the backlog.
>
> I'll send out another reminder the day before the review day.

+1 for qa-spec review day.
I've written it on my schedule.

Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova networking API and CLI are poorly documented and buggy

2014-07-01 Thread Vishvananda Ishaya
On Jun 14, 2014, at 9:12 AM, Mike Spreitzer  wrote:

> I am not even sure what is the intent, but some of the behavior looks like it 
> is clearly unintended and not useful (a more precise formulation of "buggy" 
> that is not defeated by the lack of documentation). 
> 
> IMHO, the API and CLI documentation should explain these calls/commands in 
> enough detail that the reader can tell the difference.  And the difference 
> should be useful in at least some networking configurations.  It seems to me 
> that in some configurations an administrative user may want THREE varieties 
> of the network listing call/command: one that shows networks assigned to his 
> tenant, one that also shows networks available to be assigned, and one that 
> shows all networks.  And in no configurations should a non-administrative 
> user be blind to all categories of networks. 
> 
> In the API, there are the calls on /v2/{tenant_id}/os-networks and they are 
> documented at 
> http://docs.openstack.org/api/openstack-compute/2/content/ext-os-networks.html.
>   There are also calls on /v2/{tenant_id}/os-tenant-networks --- but I can 
> not find documentation for them. 
> 
> http://docs.openstack.org/api/openstack-compute/2/content/ext-os-networks.html
>  does not describe the meaning of the calls in much detail.  For example, 
> about "GET /v2/{tenant_id}/os-networks" that doc says only "Lists networks 
> that are available to the tenant".  In some networking configurations, there 
> are two levels of availability: a network might be assigned to a tenant, or a 
> network might be available for assignment.  In other networking 
> configurations there are NOT two levels of availability.  For example, in 
> Flat DHCP nova networking (which is the default in DevStack), a network CAN 
> NOT be assigned to a tenant.

I think it should be returning the networks which a tenant will get for their 
instance when they launch it. This is unfortunately a bit confusing in vlan 
mode if a network has not been autoassigned, but that is generally a temporary 
case. So the bug fix below would lead to the correct behavior.

> 
> You might think that the "to the tenant" qualification implies filtering by 
> the invoker's tenant.  But you would be wrong in the case of an 
> administrative user; see the model_query method in nova/db/sqlalchemy/api.py 
> 
> In the CLI, we have two sets of similar-seeming commands.  For example, 
> 
> $ nova help net-list 
> usage: nova net-list 
> 
> List networks 
> 
> $ nova help network-list 
> usage: nova network-list 
> 
> Print a list of available networks. 

IMO net-list / os-tenant-networks should be deprecated because it really isn’t 
adding any features to the original extension.
> 
> Those remarks are even briefer than the one description in the API doc, 
> omitting the qualification "to the tenant". 
> 
> Experimentation shows that, in the case of flat DHCP nova networking, both of 
> those commands show zero networks to a non-administrative user (and remember 
> that networks can not be assigned to tenants in that configuration) and all 
> the networks to an administrative user.  At the API the GET calls behave the 
> same way.  The fact that a non-administrative user sees zero networks looks 
> unintended and not useful. 
> 
> See https://bugs.launchpad.net/openstack-manuals/+bug/1152862 and 
> https://bugs.launchpad.net/nova/+bug/1327406 
> 
> Can anyone tell me why there are both /os-networks and /os-tenant-networks 
> calls and what their intended semantics are? 

The os-networks extension (nova network-list, network-create, etc.) were 
originally designed to pull features from nova-manage network commands to allow 
administration of networks through the api instead of directly talking to the 
database. The os-tenant-networks extension (nova net-list) was initially 
created as a replacement for the above, but it changed the semantics slightly 
so got turned into its own extension. Since then some work has been proposed 
to improve the original extension to add some functionality to os-networks and 
improve error handling[1]. The original extension not showing networks to 
tenants is a bug which you have already identified.

[1] https://review.openstack.org/#/c/93759/
> 
> Thanks,
> Mike___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-01 Thread Zane Bitter

On 01/07/14 16:30, Mike Spreitzer wrote:

Thinking about my favorite use case for lifecycle plug points for cloud
providers (i.e., giving something a chance to make a holistic placement
decision), it occurs to me that one more is needed: a scale-down plug
point.  A plugin for this point has a distinctive job: to decide which
group member(s) to remove from a scaling group (i.e.,
OS::Heat::AutoScalingGroup or OS::Heat::InstanceGroup or
OS::Heat::ResourceGroup or AWS::AutoScaling::AutoScalingGroup).  The
plugin's signature could be something like this: given a list of group
members and a number to remove, return the list of members to remove
(or, equivalently, return the list of members to keep).  What do you think?


I think you're not thinking big enough ;)

There exist a whole class of applications that would benefit from 
autoscaling but which are not quite stateless. (For example, a PaaS.) So 
it's not enough to have plugins that place the choice of which node to 
scale down under operator control; in fact it needs to be under 
_application_ control.


This is on the roadmap, and TripleO really needs it, so hopefully it 
will happen in Juno.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-01 Thread Jeremy Stanley
On 2014-07-01 10:05:45 -0700 (-0700), Kevin Benton wrote:
[...]
> As I understand it, this behavior for the main OpenStack CI check
> queue changed to the latter some time over the past few months.
[...]

I'm not sure what you think changed, but we've (upstream OpenStack
CI) been testing proposed patches merged to their target branches
for years...
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-01 Thread Zane Bitter

On 01/07/14 15:47, Mike Spreitzer wrote:

In AWS, an autoscaling group includes health maintenance functionality
--- both an ability to detect basic forms of failures and an ability to
react properly to failures detected by itself or by a load balancer.
  What is the thinking about how to get this functionality in OpenStack?
  Since OpenStack's OS::Heat::AutoScalingGroup has a more general member
type, what is the thinking about what failure detection means (and how
it would be accomplished, communicated)?

I have not found design discussion of this; have I missed something?


Yes :)

https://review.openstack.org/#/c/95907/

The idea is that Convergence will provide health maintenance for _all_ 
forms of resources in Heat. Once this is implemented, autoscaling gets 
it for free by virtue of that fact that it manages resources using Heat 
stacks.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] autoscaling across regions and availability zones

2014-07-01 Thread Zane Bitter

On 01/07/14 16:23, Mike Spreitzer wrote:

An AWS autoscaling group can span multiple availability zones in one
region.  What is the thinking about how to get analogous functionality
in OpenStack?


Correct, you specify a list of availability zones (instead of just one), 
and AWS distributes servers across them in some sort of round-robin 
fashion. We should implement this.
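The distribution itself is simple; a sketch of the round-robin idea (not Heat code) that the nested-stack generator could follow:

```python
from itertools import cycle

def distribute(num_members, zones):
    """Assign scaling-group members to availability zones round-robin,
    so per-zone member counts differ by at most one."""
    zone_iter = cycle(zones)
    return [next(zone_iter) for _ in range(num_members)]

print(distribute(5, ["az1", "az2", "az3"]))  # → ['az1', 'az2', 'az3', 'az1', 'az2']
```

Rebalancing on scale-down (which member to remove so zones stay even) is the harder part, and ties into the scale-down plug point discussed elsewhere in this thread.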



Warmup question: what is the thinking about how to get the levels of
isolation seen between AWS regions when using OpenStack?  What is the
thinking about how to get the level of isolation seen between AWS AZs in
the same AWS Region when using OpenStack?  Do we use OpenStack Region
and AZ, respectively?  Do we believe that OpenStack AZs can really be as
independent as we want them (note that this is phrased to not assume we
only want as much isolation as AWS provides --- they have had high
profile outages due to lack of isolation between AZs in a region)?


That seems like a question for individual operators, rather than for 
OpenStack. OpenStack allows you, as an operator, to create AZs and 
Regions... how good a job you do is up to you.



I am going to assume that the answer to the question about ASG spanning
involves spanning OpenStack regions and/or AZs.  In the case of spanning
AZs, Heat has already got one critical piece: the
OS::Heat::InstanceGroup and AWS::AutoScaling::AutoScalingGroup types of
resources take a list of AZs as an optional parameter.


That's technically true, but we don't read the list :(


Presumably all
four kinds of scaling group (i.e., also OS::Heat::AutoScalingGroup and
OS::Heat::ResourceGroup) should have such a parameter.  We would need to
change the code that generates the template for the nested stack that is
the group, so that it spreads the members across the AZs in a way that
is as balanced as is possible at the time.


+1


Currently, a stack does not have an AZ.  That makes the case of an
OS::Heat::AutoScalingGroup whose members are nested stacks interesting
--- how does one of those nested stacks get into the right AZ?  And what
does that mean, anyway?  The meaning would have to be left up to the
template author.  But he needs something he can write in his member
template to reference the desired AZ for the member stack.  I suppose we
could stipulate that if the member template has a parameter named
"availability_zone" and typed "string" then the scaling group takes care
of providing the right value to that parameter.


The concept of an availability zone for a stack is not meaningful. 
Servers have availability zones; stacks exist in one region. It is up to 
the *operator*, not the user, to deploy Heat in such a way that it 
remains highly-available assuming the Region is still up.


So yes, the tricky part is how to handle that when the scaling unit is 
not a server (or a provider template with the same interface as a server).


One solution would have been to require that the scaled unit was, 
indeed, either an OS::Nova::Server or a provider template with the same 
interface as (or a superset of) an OS::Nova::Server, but the consensus 
was against that. (Another odd consequence of this decision is that 
we'll potentially be overwriting an AZ specified in the "launch config" 
section with one from the list supplied to the scaling group itself.)


For provider templates, we could insert a pseudo-parameter containing 
the availability zone. I think that could be marginally better than 
taking over one of the user's parameters, but you're basically on the 
right track IMO.
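A sketch of what a member (provider) template could look like under either variant — shown here with an explicitly declared parameter, all names chosen purely for illustration:

```yaml
heat_template_version: 2013-05-23
parameters:
  availability_zone:        # supplied per member by the scaling group
    type: string
resources:
  member:
    type: OS::Nova::Server
    properties:
      image: my-image       # placeholder
      flavor: m1.small      # placeholder
      availability_zone: {get_param: availability_zone}
```

With the pseudo-parameter approach, the `parameters` entry would disappear and `get_param` would reference a group-injected name instead.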


Unfortunately, that is not the end of the story, because we still have 
to deal with other types of resources being scaled. I always advocated 
for an autoscaling resource where the scaled unit was either a provider 
stack (if you provided a template) or an OS::Nova::Server (if you 
didn't), but the implementation that landed followed the design of 
ResourceGroup by allowing (actually, requiring) you to specify an 
arbitrary resource type.


We could do something fancy here involving tagging the properties schema 
with metadata so that we could allow plugin authors to map the AZ list 
to an arbitrary property. However, I propose that we just raise a 
validation error if the AZ is specified for a resource that is not 
either an OS::Nova::Server or a provider template.



To spread across regions adds two things.  First, all four kinds of
scaling group would need the option to be given a list of regions
instead of a list of AZs.  More likely, a list of contexts as defined in
https://review.openstack.org/#/c/53313/ --- that would make this handle
multi-cloud as well as multi-region.  The other thing this adds is a
concern for context health.  It is not enough to ask Ceilometer to
monitor member health --- in multi-region or multi-cloud you also have
to worry about the possibility that Ceilometer itself goes away.  It
would have to be the scaling group's responsibility to monitor for
context health, and react properly to failure of a whole context.

[openstack-dev] Inter Cloud Resource Federation (Alliance)

2014-07-01 Thread Tiwari, Arvind
All,

I am working on a new service to address the problems of "Inter Cloud Resource 
Federation" use cases (e.g. multi region, cloud bursting, resource sharing 
across clouds, etc.).

The new service will integrate multiple OpenStack cloud to work in alliance to 
provide resource federation and resource sharing across clouds.

Please take a look at link below which explains use cases for resource 
federation and solution. This link also explains high level components of the 
new service.

https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation

Please share your thoughts and  comments.

Thanks,
Arvind
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]a problem about the implement of limit-volume-copy-bandwidth

2014-07-01 Thread Tomoki Sekiyama
Hi Zhou,

>Hi stackers,
>
>I found some problems about the current implement of
>limit-volume-copy-bandwidth (this patch has been merged in last week.)
>
>Firstly, assume that I configure volume_copy_bps_limit=10M. If
>the path is a block device, cgroup blkio can limit copy-bandwidth
>separately for every volume.
>But If the path is a regular file, according to the current implement,
>cgroup blkio have to limit total copy-bandwidth for all volume on the
>disk device which the file lies on.
>The reason is :
>In cinder/utils.py, the method get_blkdev_major_minor
>
>elif lookup_for_file:
># lookup the mounted disk which the file lies on
>out, _err = execute('df', path)
>devpath = out.split("\n")[1].split()[0]
>return get_blkdev_major_minor(devpath, False)
>
>If invoke the method copy_volume concurrently, copy-bandwidth for a
>volume is less than 10M. In this case, the meaning of param
>volume_copy_bps_limit in cinder.conf is different.

Thank you for pointing this out.
I think the goal of this feature is QoS to mitigate slowdown of instances
during volume copy. In order to assure the bandwidth for instances, the
total bandwidth used by every running volume copy should be limited to
less than volume_copy_bps_limit.

The current implementation satisfies this condition within each block
device, but is still insufficient for limiting total bandwidth of
concurrent volume copies among multiple block devices. From the viewpoint
of QoS, we may need to divide the value of volume_copy_bps_limit by the
number of running volume copies.

For example, when volume_copy_bps_limit is 100M and 2 copies are running,
  (1) copy an image on sda -> sdb
  (2) copy an image on sda -> sdc
to limit each copy bps to 50M (= 100M / 2 concurrent copies), we should
set the limits to:
  sda (read)  = 100M
  sdb (write) =  50M
  sdc (write) =  50M
And when copy (2) is finished before the end of (1), the limit of sdb
(write) is increased to 100M.
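The division scheme above can be sketched as a toy calculation (this is not the cinder code; device names and units are taken from the example):

```python
def per_device_limits(total_bps, copies):
    # copies: list of (source_device, destination_device) pairs for the
    # volume copies currently running.
    share = total_bps // len(copies)
    limits = {}
    for src, dst in copies:
        # A device serving several copies gets the sum of their shares.
        limits[(src, 'read')] = limits.get((src, 'read'), 0) + share
        limits[(dst, 'write')] = limits.get((dst, 'write'), 0) + share
    return limits

# Example from above: 100M total, two copies reading from sda.
limits = per_device_limits(100, [('sda', 'sdb'), ('sda', 'sdc')])
# {('sda', 'read'): 100, ('sdb', 'write'): 50, ('sdc', 'write'): 50}
```

Recomputing the limits when a copy finishes gives the behavior described for (2) completing before (1).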

I appreciate any opinions for/against this idea.

>   Secondly, In NFS, the result of cmd 'df' is like this:
>[root@yuzhou yuzhou]# df /mnt/111
>Filesystem 1K-blocks  Used Available Use% Mounted
>on
>186.100.8.144:/mnt/nfs_storage   51606528  14676992  34308096  30% /mnt
>I think the method get_blkdev_major_minor can not deal with the devpath
>'186.100.8.144:/mnt/nfs_storage'.
>i.e can not limit volume copy bandwidth in nfsdriver.
>
>So I think maybe we should modify the current implement to make sure
>copy-bandwidth for every volume meet the configuration requirement.
>I suggest we use loop device associated with the regular file(losetup
>/dev/loop0 /mnt/volumes/vm.qcow2), then limit the bps of loop device.(
>cgset -r blkio.throttle.write_bps_device="7:0 1000" test) After
>copying volume, detach loop device. (losetup --detach /dev/loop0)

Interesting. I tried this locally and confirmed it's feasible.
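For the record, the proposed sequence could look roughly like this (commands are only constructed here, never executed; the cgroup name and the 7:0 loop major:minor are illustrative, matching the cgset example above):

```python
def loop_throttle_plan(image_path, bps_limit, loop_dev='/dev/loop0',
                       cgroup='cinder-copy'):
    # Build the command sequence for throttling a regular file via a
    # loop device: attach, set the blkio limit, copy, then detach.
    return [
        ['losetup', loop_dev, image_path],
        ['cgset', '-r',
         'blkio.throttle.write_bps_device=7:0 %d' % bps_limit, cgroup],
        # ... perform the volume copy to loop_dev inside the cgroup ...
        ['losetup', '--detach', loop_dev],
    ]
```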

Thanks,
Tomoki Sekiyama


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers][Nova] Containers Mid-Cycle Meetup

2014-07-01 Thread Russell Bryant
On 07/01/2014 05:59 PM, Adrian Otto wrote:
> Team,
> 
> Please help us select dates for the Containers Team Midcycle Meetup:
> 
> http://doodle.com/2mebqhdxpksf763m

Why not just join the Nova meetup?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Containers][Nova] Containers Mid-Cycle Meetup

2014-07-01 Thread Adrian Otto
Team,

Please help us select dates for the Containers Team Midcycle Meetup:

http://doodle.com/2mebqhdxpksf763m

Thanks,

Adrian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [compute][tempest] Upgrading libvirt-lxc support status

2014-07-01 Thread Nels Nelson
Greetings list,-

Over the next few weeks I will be working on developing additional Tempest
gating unit and functional tests for the libvirt-lxc compute driver.

I am trying to figure out exactly what is required in order to accomplish
the goal of ensuring the continued inclusion (without deprecation) of the
libvirt-lxc compute driver in OpenStack.  My understanding is that this
requires the upgrading of the support status in the Hypervisor Support
Matrix document by developing the necessary Tempest tests.  To that end, I
am trying to determine what tests are necessary as precisely as possible.

I have some questions:

* Who maintains the Hypervisor Support Matrix document?

  https://wiki.openstack.org/wiki/HypervisorSupportMatrix

* Who is in charge of the governance over the Support Status process?  Is
there single person in charge of evaluating every driver?

* Regarding that process, how is the information in the Hypervisor
Support Matrix substantiated?  Is there further documentation in the wiki
for this?  Is an evaluation task simply performed on the functionality for
the given driver, and the results logged in the HSM?  Is this an automated
process?  Who is responsible for that evaluation?

* How many of the boxes in the HSM must be checked positively, in
order to move the driver into a higher supported group?  (From group C to
B, and from B to A.)

* Or, must they simply all be marked with a check or minus,
substantiated by a particular gating test which passes based on the
expected support?

* In other words, is it sufficient to provide enough automated testing
to simply be able to indicate supported/not supported on the support
matrix chart?  Else, is writing supporting documentation of an evaluation
of the hypervisor sufficient to substantiate those marks in the support
matrix?

* Do "unit tests that gate commits" specifically refer to tests
written to verify the functionality described by the annotation in the
support matrix? Or are the annotations substantiated by "functional
testing that gate commits"?

Thank you for your time and attention.

Best regards,
-Nels Nelson
Software Developer
Rackspace Hosting



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday July 1st at 19:00 UTC

2014-07-01 Thread Elizabeth K. Joseph
On Mon, Jun 30, 2014 at 11:18 AM, Elizabeth K. Joseph
 wrote:
> Hi everyone,
>
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday July 1st at 19:00 UTC in #openstack-meeting

Meeting minutes and log available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-07-01-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-07-01-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-07-01-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to unit test scripts outside of nova/nova?

2014-07-01 Thread Matthew Treinish
On Tue, Jul 01, 2014 at 03:21:06PM -0500, Matt Riedemann wrote:
> As part of the enforce-unique-instance-uuid-in-db blueprint [1] I'm writing
> a script to scan the database and find any NULL instance_uuid records that
> will cause the new database migration to fail so that operators can run this
> before they run the migration, otherwise the migration blocks if these types
> of records are found.
> 
> I have the script written [2], but wanted to also write unit tests for it. I
> guess I assumed the script would go under nova/tools/db like the
> schema_diff.py script, but I'm not sure how to unit test anything outside of
> the nova/nova tree.
> 
> Nova's testr configuration is only discovering tests within nova/tests [3].
> But I don't think I can put the unit tests under nova/tests and then import
> the module from nova/tools.

So we hit a similar issue in tempest when we wanted to unit test some utility
scripts in tempest/tools. Changing the discovery path to find tests outside of
nova/tests is actually a pretty easy change[4], but I don't think that will solve
the use case with tox. What happened when we tried to do this in tempest use
case was that when the project was getting installed the tools dir wasn't
included so when we ran with tox it couldn't find the files we were trying to
test. The solution we came up there was to put the script under the tempest
namespace and add unit tests in tempest/tests. (we also added an entry point for
the script to expose it as a command when tempest was installed)

> 
> So I'm a bit stuck.  I could take the easy way out and just throw the script
> under nova/db/sqlalchemy/migrate_repo and put my unit tests under
> nova/tests/db/, and I'd also get pep8 checking with that, but that doesn't
> seem right - but I'm also possibly over-thinking this.
> 
> Anyone else have any ideas?

I think it really comes down to how you want to present the utility to the end
users. To enable unit testing it, it's just easier to put it in the nova
namespace. I couldn't come up with a good way to get around the
install/namespace issue. (maybe someone else who is more knowledgeable here has
a good way to get around this) So then you can symlink it to the tools dir or
add an entry point (or bake it into nova-manage) to make it easy to find. I
think the issue with putting it in nova/db/sqlalchemy/migrate_repo is that it's
hard to find.

> 
> [1] 
> https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db
> [2] https://review.openstack.org/#/c/97946/
> [3] http://git.openstack.org/cgit/openstack/nova/tree/.testr.conf#n5
[4] 
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/test_discover.py

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-01 Thread Mike Spreitzer
Thinking about my favorite use case for lifecycle plug points for cloud 
providers (i.e., giving something a chance to make a holistic placement 
decision), it occurs to me that one more is needed: a scale-down plug 
point.  A plugin for this point has a distinctive job: to decide which 
group member(s) to remove from a scaling group (i.e., 
OS::Heat::AutoScalingGroup or OS::Heat::InstanceGroup or 
OS::Heat::ResourceGroup or AWS::AutoScaling::AutoScalingGroup).  The 
plugin's signature could be something like this: given a list of group 
members and a number to remove, return the list of members to remove (or, 
equivalently, return the list of members to keep).  What do you think?
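The interface could be as small as this (a hypothetical sketch; the class and method names are invented, and oldest-first is shown only as one possible plugin):

```python
import abc


class ScaleDownPolicy(abc.ABC):
    @abc.abstractmethod
    def select_victims(self, members, count):
        """Given the group members and the number to remove, return the
        members to remove; the group keeps the rest."""


class OldestFirst(ScaleDownPolicy):
    def select_victims(self, members, count):
        # members: (name, created_at) pairs in this sketch.
        return sorted(members, key=lambda m: m[1])[:count]
```

A holistic-placement plugin would implement the same signature but consult its own placement state instead of member age.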

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Feature to enable domain-related role validation

2014-07-01 Thread Rodrigo Duarte Sousa

Hi all,

We created a POC that enables domain-related role checking to components 
that do not support domains (such as Nova and Cinder). The code can be 
found here: https://github.com/rodrigods/keystone/tree/domain-check


The idea is to use the HttpCheck feature: 
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/policy.py#L849 
to check if a user has a given role in a domain. The changes were made 
exclusively into Keystone. The service willing to use the feature, just 
has to add the rule in its policy file.


Here is a list of the changes added to make it work:

1 - Create a new endpoint to handle the HttpCheck calls, for example:
/v3/projects/{project_id}/roles/{role_name}

2 - Add a method to handle this endpoint at Keystone:
https://github.com/rodrigods/keystone/blob/domain-check/keystone/assignment/controllers.py#L559

 * Get domain_id from target project (from given project_id)
 * Filter all role_assignments from logged user in target domain (from
   user_id in given credentials)
 * Check if role_assignments contains target role
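The three steps above amount to something like this (an illustrative data model, not Keystone's actual driver API):

```python
def user_has_role_in_domain(user_id, project_id, role_name,
                            projects, assignments):
    # 1. Get domain_id from the target project.
    domain_id = projects[project_id]['domain_id']
    # 2. Filter the logged user's role assignments in that domain.
    user_roles = {a['role'] for a in assignments
                  if a['user_id'] == user_id and a['domain_id'] == domain_id}
    # 3. Check whether the target role is among them.
    return role_name in user_roles
```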


To test it, we added the following rule into Nova's policy file:

 * "compute:create":"rule:domain_admin"
 * "domain_admin":"http://localhost:5000/v3/projects/%(project_id)
   s/roles/admin"

Once the request arrives into Keystone, it checks if the logged user 
has the admin role in project_id's domain.


So, what do you think? We would like your feedback before giving extra 
efforts such as creating the bp/spec.


--

Rodrigo Duarte Sousa
MSc candidate in Computer Science
Software Engineer at OpenStack Project HP/LSD-UFCG
Distributed Systems Laboratory
Federal University of Campina Grande
Campina Grande, PB - Brazil
http://lsd.ufcg.edu.br/~rodrigods
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] autoscaling across regions and availability zones

2014-07-01 Thread Mike Spreitzer
An AWS autoscaling group can span multiple availability zones in one 
region.  What is the thinking about how to get analogous functionality in 
OpenStack?

Warmup question: what is the thinking about how to get the levels of 
isolation seen between AWS regions when using OpenStack?  What is the 
thinking about how to get the level of isolation seen between AWS AZs in 
the same AWS Region when using OpenStack?  Do we use OpenStack Region and 
AZ, respectively?  Do we believe that OpenStack AZs can really be as 
independent as we want them (note that this is phrased to not assume we 
only want as much isolation as AWS provides --- they have had high profile 
outages due to lack of isolation between AZs in a region)?

I am going to assume that the answer to the question about ASG spanning 
involves spanning OpenStack regions and/or AZs.  In the case of spanning 
AZs, Heat has already got one critical piece: the OS::Heat::InstanceGroup 
and AWS::AutoScaling::AutoScalingGroup types of resources take a list of 
AZs as an optional parameter.  Presumably all four kinds of scaling group 
(i.e., also OS::Heat::AutoScalingGroup and OS::Heat::ResourceGroup) should 
have such a parameter.  We would need to change the code that generates 
the template for the nested stack that is the group, so that it spreads 
the members across the AZs in a way that is as balanced as is possible at 
the time.
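The simplest "as balanced as possible" spread is round-robin; a sketch (purely illustrative, not the Heat template-generation code):

```python
def assign_zones(member_names, zones):
    # Spread members across the AZ list as evenly as possible.
    return {name: zones[i % len(zones)]
            for i, name in enumerate(member_names)}
```

A real implementation would also have to account for members that already exist when the group is resized.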

Currently, a stack does not have an AZ.  That makes the case of an 
OS::Heat::AutoScalingGroup whose members are nested stacks interesting --- 
how does one of those nested stacks get into the right AZ?  And what does 
that mean, anyway?  The meaning would have to be left up to the template 
author.  But he needs something he can write in his member template to 
reference the desired AZ for the member stack.  I suppose we could 
stipulate that if the member template has a parameter named 
"availability_zone" and typed "string" then the scaling group takes care 
of providing the right value to that parameter.

To spread across regions adds two things.  First, all four kinds of 
scaling group would need the option to be given a list of regions instead 
of a list of AZs.  More likely, a list of contexts as defined in 
https://review.openstack.org/#/c/53313/ --- that would make this handle 
multi-cloud as well as multi-region.  The other thing this adds is a 
concern for context health.  It is not enough to ask Ceilometer to monitor 
member health --- in multi-region or multi-cloud you also have to worry 
about the possibility that Ceilometer itself goes away.  It would have to 
be the scaling group's responsibility to monitor for context health, and 
react properly to failure of a whole context.

Does this sound about right?  If so, I could draft a spec.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] how to unit test scripts outside of nova/nova?

2014-07-01 Thread Matt Riedemann
As part of the enforce-unique-instance-uuid-in-db blueprint [1] I'm 
writing a script to scan the database and find any NULL instance_uuid 
records that will cause the new database migration to fail so that 
operators can run this before they run the migration, otherwise the 
migration blocks if these types of records are found.
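The core of such a scan is a single query; a sketch against SQLite (the real script targets nova's database, and the table/column names here are illustrative):

```python
import sqlite3


def find_null_instance_uuid(conn, table='instances', column='uuid'):
    # Return primary keys of rows that would block the
    # unique-instance-uuid migration.
    cur = conn.execute(
        'SELECT id FROM %s WHERE %s IS NULL' % (table, column))
    return [row[0] for row in cur.fetchall()]
```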


I have the script written [2], but wanted to also write unit tests for 
it. I guess I assumed the script would go under nova/tools/db like the 
schema_diff.py script, but I'm not sure how to unit test anything 
outside of the nova/nova tree.


Nova's testr configuration is only discovering tests within nova/tests 
[3].  But I don't think I can put the unit tests under nova/tests and 
then import the module from nova/tools.


So I'm a bit stuck.  I could take the easy way out and just throw the 
script under nova/db/sqlalchemy/migrate_repo and put my unit tests under 
nova/tests/db/, and I'd also get pep8 checking with that, but that 
doesn't seem right - but I'm also possibly over-thinking this.


Anyone else have any ideas?

[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://review.openstack.org/#/c/97946/
[3] http://git.openstack.org/cgit/openstack/nova/tree/.testr.conf#n5

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-01 Thread Michael Still
We were talking about doing something with google+ for Chris Yeoh, but
haven't really progressed the plan. Does someone want to pick up the
ball with that or shall I?

Michael

On Tue, Jul 1, 2014 at 11:50 PM, Dugger, Donald D
 wrote:
> At minimum I can arrange for a phone bridge at the sprint (Intel `lives` on 
> phone conferences) so we can certainly do that.  Video might be more 
> problematic, I know we did something with Google Plus at the last sprint but 
> I don't know the details on that.
>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
> -Original Message-
> From: Sylvain Bauza [mailto:sba...@redhat.com]
> Sent: Tuesday, July 1, 2014 7:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?
>
> Hi,
>
> I won't be able to attend the mid-cycle sprint due to a good family reason (a 
> new baby 2.0 release expected to land by these dates), so I'm wondering if 
> it's possible to webcast some of the sessions so people who are not there can 
> still share their voices ?
>
> -Sylvain
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-01 Thread Mike Spreitzer
In AWS, an autoscaling group includes health maintenance functionality --- 
both an ability to detect basic forms of failures and an ability to react 
properly to failures detected by itself or by a load balancer.  What is 
the thinking about how to get this functionality in OpenStack?  Since 
OpenStack's OS::Heat::AutoScalingGroup has a more general member type, 
what is the thinking about what failure detection means (and how it would 
be accomplished, communicated)?

I have not found design discussion of this; have I missed something?

I suppose the natural answer for OpenStack would be centered around 
webhooks.  An OpenStack scaling group (OS SG = OS::Heat::AutoScalingGroup 
or AWS::AutoScaling::AutoScalingGroup or OS::Heat::ResourceGroup or 
OS::Heat::InstanceGroup) could generate a webhook per member, with the 
meaning of the webhook being that the member has been detected as dead and 
should be deleted and removed from the group --- and a replacement member 
created if needed to respect the group's minimum size.  When the member is 
a Compute instance and Ceilometer exists, the OS SG could define a 
Ceilometer alarm for each member (by including these alarms in the 
template generated for the nested stack that is the SG), programmed to hit 
the member's deletion webhook when death is detected (I imagine there are 
a few ways to write a Ceilometer condition that detects instance death). 
When the member is a nested stack and Ceilometer exists, it could be the 
member stack's responsibility to include a Ceilometer alarm that detects 
the member stack's death and hit the member stack's deletion webhook. 
There is a small matter of how the author of the template used to create 
the member stack writes some template snippet that creates a Ceilometer 
alarm that is specific to a member stack that does not exist yet.  I 
suppose we could stipulate that if the member template includes a 
parameter with name "member_name" and type "string" then the OS SG takes 
care of supplying the correct value of that parameter; as illustrated in 
the asg_of_stacks.yaml of https://review.openstack.org/#/c/97366/ , a 
member template can use a template parameter to tag Ceilometer data for 
querying.  The URL of the member stack's deletion webhook could be passed 
to the member template via the same sort of convention.  When Ceilometer 
does not exist, it is less obvious to me what could usefully be done.  Are 
there any useful SG member types besides Compute instances and nested 
stacks?  Note that a nested stack could also pass its member deletion 
webhook to a load balancer (that is willing to accept such a thing, of 
course), so we get a lot of unity of mechanism between the case of 
detection by infrastructure vs. application level detection.
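To make the mechanism concrete, the group's generated nested-stack template could include one alarm resource per member, along these lines (the property names are approximate and the member_name tagging convention is the one proposed above; treat this as a sketch, not a working Heat snippet):

```python
def member_alarm(member_name, deletion_webhook_url):
    # One Ceilometer alarm per member: when the member looks dead,
    # hit its deletion webhook.
    return {
        'type': 'OS::Ceilometer::Alarm',
        'properties': {
            'meter_name': 'instance',
            'statistic': 'count',
            'comparison_operator': 'lt',
            'threshold': 1,
            'matching_metadata': {'member_name': member_name},
            'alarm_actions': [deletion_webhook_url],
        },
    }
```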

I am not entirely happy with the idea of a webhook per member.  If I 
understand correctly, generating webhooks is a somewhat expensive and 
problematic process.  What would be the alternative?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Swift] Question re. keystone domains

2014-07-01 Thread Dolph Mathews
On Tue, Jul 1, 2014 at 11:20 AM, Coles, Alistair 
wrote:

>  We have a change [1] under review in Swift to make access control lists
> compatible with migration to keystone v3 domains. The change makes two
> assumptions that I’d like to double-check with keystone folks:
>
>
>
> 1.  That a project can never move from one domain to another.
>
We're moving in this direction, at least. In Grizzly and Havana, we made no
such restriction. In Icehouse, we introduced such a restriction by default,
but it can be disabled. So far, we haven't gotten any complaints about
adding the restriction, so maybe we should just add additional help text to
the option in our config about why you would never want to disable the
restriction, citing how it would break swift?

>  2.  That the underscore character cannot appear in a valid domain id
> – more specifically, that the string ‘_unknown’ cannot be confused with a
> domain id.
>
That's fairly sound. All of our domain ID's are system-assigned as UUIDs,
except for the "default" domain which has an explicit id='default'. We
don't do anything to validate the assumption, though.
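Under those two assumptions (system-assigned UUIDs plus the literal 'default'), a marker such as '_unknown' can be shown never to collide; a sketch of a defensive check (hypothetical, not Keystone code):

```python
import uuid


def is_possible_domain_id(value):
    # Domain ids are UUIDs, except the special 'default' domain.
    if value == 'default':
        return True
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False
```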

>
>
> Are those safe assumptions?
>
>
>
> Thanks,
>
> Alistair
>
>
>
> [1] https://review.openstack.org/86430
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-01 Thread Devananda van der Veen
On Tue, Jul 1, 2014 at 10:02 AM, Dolph Mathews  wrote:
> The argument has been made in the past that small features will require
> correspondingly small specs. If there's a counter-argument to this example
> (a "small" feature requiring a relatively large amount of spec effort), I'd
> love to have links to both the spec and the resulting implementation so we
> can discuss exactly why the spec was an unnecessary additional effort.
>
>
> On Tue, Jul 1, 2014 at 10:30 AM, Jason Dunsmore
>  wrote:
>>
>> On Mon, Jun 30 2014, Joshua Harlow wrote:
>>
>> > There is a balance here that needs to be worked out and I've seen
>> > specs start to turn into requirements for every single patch (even if
>> > the patch is pretty small). I hope we can rework the 'balance in the
>> > force' to avoid being so strict that every little thing requires a
>> > spec. This will not end well for us as a community.
>> >
>> > How have others thought the spec process has worked out so far? Too
>> > much overhead, too little…?
>> >
>> > I personally am of the opinion that specs should be used for large
>> > topics (defining large is of course arbitrary); and I hope we find the
>> > right balance to avoid scaring everyone away from working with
>> > openstack. Maybe all of this is part of openstack maturing, I'm not
>> > sure, but it'd be great if we could have some guidelines around when
>> > is a spec needed and when isn't it and take it into consideration when
>> > requesting a spec that the person you have requested may get
>> > frustrated and just leave the community (and we must not have this
>> > happen) if you ask for it without explaining why and how clearly.
>>
>> +1 I think specs are too much overhead for small features.  A set of
>> guidelines about when specs are needed would be sufficient.  Leave the
>> option about when to submit a design vs. when to submit code to the
>> contributor.
>>
>> Jason
>>

Yes, there needs to be balance, but as far as I have seen, folks are
finding the balance around when to require specs within each of the
project teams. I am curious if there are any specific examples where a
project's core team required a "large spec" for what they considered
to be a "small feature".

I also feel strongly that the spec process has been very helpful for
the projects that I'm involved in for fleshing out the implications of
changes which may at first glance seem small, by requiring both
proposers and reviewers to think about and discuss the wider
ramifications for changes in a way that simply reviewing code often
does not.

Just my 2c,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-01 Thread Aleksandr Didenko
Hi,

my 2 cents:

1) Fuel version (+1 to Dmitry)
2) Could you please clarify what exactly you mean by "our patches" / "our
first patch"?




On Tue, Jul 1, 2014 at 8:04 PM, Dmitry Borodaenko 
wrote:

> 1) Puppet manifests are part of Fuel so the version of Fuel should be
> used. It is possible to have more than one version of Fuel per
> OpenStack version, but not the other way around: if we upgrade
> OpenStack version we also increase version of Fuel.
>
> 2) Should be a combination of both: it should indicate which OpenStack
> version it is based on (2014.1.1), and version of Fuel it's included
> in (5.0.1), e.g. 2014.1.1-5.0.1. Between Fuel versions, we can have
> additional bugfix patches added to shipped OpenStack components.
>
> my 2c,
> -DmitryB
>
>
> On Tue, Jul 1, 2014 at 9:50 AM, Igor Kalnitsky 
> wrote:
> > Hi fuelers,
> >
> > I'm working on Patching for OpenStack and I have the following questions:
> >
> > 1/ We need to save new puppets and repos under some versioned folder:
> >
> > /etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.
> >
> > So the question is which version to use? Fuel or OpenStack?
> >
> > 2/ Which version do we have to use for our patches? We have an OpenStack
> 2014.1.
> > Should we use 2014.1.1 for our first patch? Or we have to use another
> > format?
> >
> > I need a quick reply since these questions have to be solved for 5.0.1
> too.
> >
> > Thanks,
> > Igor
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-01 Thread Dugger, Donald D
Much as I dislike the overhead and the extra latency involved (now you need to 
have a review cycle for the spec plus the review cycle for the patch itself) I 
agreed with the 'small features require small specs' idea.  The problem is that even 
a small change can have a big impact.  Forcing people to create a spec even for 
small features means that it’s very clear that the implications of the feature 
have been thought about and addressed.

Note that there is a similar issue with bugs.  I would expect that a patch to 
fix a bug would have to have a corresponding bug report.  Just accepting 
patches with no known justification seems like the wrong way to go.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Tuesday, July 1, 2014 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][specs] Please stop doing specs for any 
changes in projects

The argument has been made in the past that small features will require 
correspondingly small specs. If there's a counter-argument to this example (a 
"small" feature requiring a relatively large amount of spec effort), I'd love 
to have links to both the spec and the resulting implementation so we can 
discuss exactly why the spec was an unnecessary additional effort.
On Tue, Jul 1, 2014 at 10:30 AM, Jason Dunsmore
<jason.dunsm...@rackspace.com> wrote:
On Mon, Jun 30 2014, Joshua Harlow wrote:

> There is a balance here that needs to be worked out and I've seen
> specs start to turn into requirements for every single patch (even if
> the patch is pretty small). I hope we can rework the 'balance in the
> force' to avoid being so strict that every little thing requires a
> spec. This will not end well for us as a community.
>
> How have others thought the spec process has worked out so far? Too
> much overhead, too little…?
>
> I personally am of the opinion that specs should be used for large
> topics (defining large is of course arbitrary); and I hope we find the
> right balance to avoid scaring everyone away from working with
> openstack. Maybe all of this is part of openstack maturing, I'm not
> sure, but it'd be great if we could have some guidelines around when
> is a spec needed and when isn't it and take it into consideration when
> requesting a spec that the person you have requested may get
> frustrated and just leave the community (and we must not have this
> happen) if you ask for it without explaining why and how clearly.

+1 I think specs are too much overhead for small features.  A set of
guidelines about when specs are needed would be sufficient.  Leave the
option about when to submit a design vs. when to submit code to the
contributor.

Jason




[openstack-dev] [QA] Spec Review Day July 9th

2014-07-01 Thread Matthew Treinish
Hi Everyone,

During the last qa meeting we were discussing ways to try and increase
throughput on the qa-specs repo. We decided to have a dedicated review day for
the specs repo. Right now specs approval is a relatively slow process: reviews
seem to take too long (an average wait time of ~12 days) and responses often
take an equally long time. So to try to stimulate increased velocity for the
specs process, having a day which is mostly dedicated to reviewing specs should
help. At the very least we should work through most of the backlog.

I think having the review day scheduled next Wednesday, July 9th, should work
well for most people. I feel that having the review day occur before the
mid-cycle meet-up in a couple weeks would be best. With the US holiday this
Friday, and giving more than a couple of days notice I figured having it next
week was best.

So if everyone could spend that day concentrating on reviewing qa-specs
proposals, I think we'll start to work through the backlog. Of course we'll also
need everyone that submitted specs to be around and active if we want to clear
out the backlog.

I'll send out another reminder the day before the review day.

-Matt Treinish




Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-01 Thread Jason Dunsmore
I meant the administrative overhead of the contributor having to submit
a spec to Gerrit and then everyone having to deal with yet another
review, not the overhead of writing/reviewing the spec itself.

On Tue, Jul 01 2014, Dolph Mathews wrote:

> The argument has been made in the past that small features will require
> correspondingly small specs. If there's a counter-argument to this example
> (a "small" feature requiring a relatively large amount of spec effort), I'd
> love to have links to both the spec and the resulting implementation so we
> can discuss exactly why the spec was an unnecessary additional effort.
>
> On Tue, Jul 1, 2014 at 10:30 AM, Jason Dunsmore <
> jason.dunsm...@rackspace.com> wrote:
>
>> On Mon, Jun 30 2014, Joshua Harlow wrote:
>>
>> > There is a balance here that needs to be worked out and I've seen
>> > specs start to turn into requirements for every single patch (even if
>> > the patch is pretty small). I hope we can rework the 'balance in the
>> > force' to avoid being so strict that every little thing requires a
>> > spec. This will not end well for us as a community.
>> >
>> > How have others thought the spec process has worked out so far? Too
>> > much overhead, too little…?
>> >
>> > I personally am of the opinion that specs should be used for large
>> > topics (defining large is of course arbitrary); and I hope we find the
>> > right balance to avoid scaring everyone away from working with
>> > openstack. Maybe all of this is part of openstack maturing, I'm not
>> > sure, but it'd be great if we could have some guidelines around when
>> > is a spec needed and when isn't it and take it into consideration when
>> > requesting a spec that the person you have requested may get
>> > frustrated and just leave the community (and we must not have this
>> > happen) if you ask for it without explaining why and how clearly.
>>
>> +1 I think specs are too much overhead for small features.  A set of
>> guidelines about when specs are needed would be sufficient.  Leave the
>> option about when to submit a design vs. when to submit code to the
>> contributor.
>>
>> Jason
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [cinder] 3rd party ci names for use by official cinder mandated tests

2014-07-01 Thread Asselin, Ramy
3rd party CI names are currently becoming a bit controversial for what we're
trying to do in cinder: https://review.openstack.org/#/c/101013/
The motivation for the above change is to help developers understand what the
3rd party CI systems are testing, in order to avoid confusion.
The goal is to help developers reviewing cinder changes understand which 3rd
party CI systems are running official cinder-mandated tests and which are
running unofficial/proprietary tests.
Since the use of "cinder" is proposed to be "reserved" (per the change under
review above), I'd like to propose the following for Cinder third-party names,
under these conditions:
{Company-Name}-cinder-ci
* This CI account name is to be used strictly for official
cinder-defined dsvm-full-{driver} tests.
* No additional tests are allowed on this account.
  o A different account name will be used for unofficial / proprietary tests.
* The account will only post reviews to cinder patches.
  o A different account name will be used to post reviews in all other
projects.
* Format of comments will be (as jgriffith commented in that review):

{company name}-cinder-ci
   dsvm-full-{driver-name}   pass/fail
   dsvm-full-{other-driver-name} pass/fail
   dsvm-full-{yet-another-driver-name}   pass/fail


Thoughts?

Ramy


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-01 Thread Dolph Mathews
The argument has been made in the past that small features will require
correspondingly small specs. If there's a counter-argument to this example
(a "small" feature requiring a relatively large amount of spec effort), I'd
love to have links to both the spec and the resulting implementation so we
can discuss exactly why the spec was an unnecessary additional effort.

On Tue, Jul 1, 2014 at 10:30 AM, Jason Dunsmore <
jason.dunsm...@rackspace.com> wrote:

> On Mon, Jun 30 2014, Joshua Harlow wrote:
>
> > There is a balance here that needs to be worked out and I've seen
> > specs start to turn into requirements for every single patch (even if
> > the patch is pretty small). I hope we can rework the 'balance in the
> > force' to avoid being so strict that every little thing requires a
> > spec. This will not end well for us as a community.
> >
> > How have others thought the spec process has worked out so far? Too
> > much overhead, too little…?
> >
> > I personally am of the opinion that specs should be used for large
> > topics (defining large is of course arbitrary); and I hope we find the
> > right balance to avoid scaring everyone away from working with
> > openstack. Maybe all of this is part of openstack maturing, I'm not
> > sure, but it'd be great if we could have some guidelines around when
> > is a spec needed and when isn't it and take it into consideration when
> > requesting a spec that the person you have requested may get
> > frustrated and just leave the community (and we must not have this
> > happen) if you ask for it without explaining why and how clearly.
>
> +1 I think specs are too much overhead for small features.  A set of
> guidelines about when specs are needed would be sufficient.  Leave the
> option about when to submit a design vs. when to submit code to the
> contributor.
>
> Jason
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [third-party] - rebasing patches for CI

2014-07-01 Thread Kevin Benton
Hello,

What is the expected behavior of 3rd-party CI systems with regard to
checking out a patch? Should it be tested 'as-is', or should it be merged
into its target branch first and then tested?

As I understand it, this behavior for the main OpenStack CI check queue
changed to the latter some time over the past few months. Matching its
behavior makes the most sense, especially since the 3rd party CI isn't
running in the gate so it's the closest alternative.
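
For illustration, the merge-then-test behavior can be emulated with plain git
in a throwaway repo (a sketch with made-up commits; a real CI would fetch the
change's refs/changes/... ref from Gerrit instead):

```shell
#!/bin/sh
# Sketch: emulate "merge the patch into the tip of its target branch,
# then test" with a throwaway local repo (hypothetical commits).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email ci@example.com
git config user.name ci
echo base > base.txt && git add . && git commit -qm "initial commit"
git branch -M master
git checkout -qb change
echo fix > fix.txt && git add . && git commit -qm "proposed change"
git checkout -q master
echo newer > newer.txt && git add . && git commit -qm "master moved on"
# Test the change as the check queue would: merged onto the current tip,
# not "as-is" on the (now stale) parent it was written against.
git checkout -qb test-merge master
git merge -q -m "ci merge" change
ls    # the tested tree contains both the change and the newer master commit
```

The point is that the tested tree contains both the proposed change and
whatever landed on the target branch after the change was uploaded, which is
much closer to what will actually merge.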

Is this what other CIs are doing? I didn't see anything about it in the
Neutron testing wiki[1] so I figured I would bring this to the mailing list
to get feedback.


1. https://wiki.openstack.org/wiki/NeutronThirdPartyTesting

Cheers
-- 
Kevin Benton


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-07-01 Thread Luke Gorrie
Howdy!

I wrote a new version of shellci today and have it up and running and
voting on the sandbox.

It's described on the Github page: https://github.com/SnabbCo/shellci

Currently this is a set of simple shell scripts that receive
review.openstack.org gerrit events, run tests and determine results, then post
reviews. It does not yet run devstack/tempest, and I hope to reuse that part
from somebody else's efforts. (I know ODL have done reusable work here and I
plan to look into that. Anybody else?)

Ideas and encouragement welcome as always. Let's find out if this point in
the design space is practical or not.

Cheers,
-Luke


Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-01 Thread Dmitry Borodaenko
1) Puppet manifests are part of Fuel, so the version of Fuel should be
used. It is possible to have more than one version of Fuel per
OpenStack version, but not the other way around: if we upgrade the
OpenStack version, we also increase the version of Fuel.

2) Should be a combination of both: it should indicate which OpenStack
version it is based on (2014.1.1), and version of Fuel it's included
in (5.0.1), e.g. 2014.1.1-5.0.1. Between Fuel versions, we can have
additional bugfix patches added to shipped OpenStack components.
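
To make option 2 concrete, here is a sketch of how such a combined version
string could namespace the folders from the original question (relative paths
stand in for /etc/puppet/{version} and /var/www/nailgun/{version}; the values
are illustrative):

```shell
#!/bin/sh
# Sketch: a combined "{openstack}-{fuel}" version string used to
# namespace the puppet-manifest and repo folders.
set -e
OPENSTACK_VERSION=2014.1.1
FUEL_VERSION=5.0.1
PATCH_VERSION="${OPENSTACK_VERSION}-${FUEL_VERSION}"
cd "$(mktemp -d)"
mkdir -p "etc/puppet/${PATCH_VERSION}" \
         "var/www/nailgun/${PATCH_VERSION}/centos"
find etc var -type d | sort
```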

my 2c,
-DmitryB


On Tue, Jul 1, 2014 at 9:50 AM, Igor Kalnitsky  wrote:
> Hi fuelers,
>
> I'm working on Patching for OpenStack and I have the following questions:
>
> 1/ We need to save new puppets and repos under some versioned folder:
>
> /etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.
>
> So the question is which version to use? Fuel or OpenStack?
>
> 2/ Which version do we have to use for our patches? We have OpenStack
> 2014.1. Should we use 2014.1.1 for our first patch? Or do we have to use
> another format?
>
> I need a quick reply since these questions have to be solved for 5.0.1 too.
>
> Thanks,
> Igor
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Borodaenko



Re: [openstack-dev] [DevStack] neutron config not working

2014-07-01 Thread Kyle Mestery
Hi Rob:

Can you try adding the following config to your local.conf? I'd like
to see if this gets you going or not. It will force it to use gre
tunnels for tenant networks. By default it will not.

ENABLE_TENANT_TUNNELS=True
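
For reference, the pieces from this thread combined into one localrc sketch
(service names as quoted here; not verified against any particular devstack
release, so defaults may differ):

```shell
# localrc sketch for single-node neutron (combines settings from this thread)
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta neutron
Q_USE_DEBUG_COMMAND=True
# Force gre tunnels for tenant networks instead of the default:
ENABLE_TENANT_TUNNELS=True
```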

On Tue, Jul 1, 2014 at 10:53 AM, Rob Crittenden  wrote:
> Rob Crittenden wrote:
>> Mark Kirkwood wrote:
>>> On 25/06/14 10:59, Rob Crittenden wrote:
 Before I get punted onto the operators list, I post this here because
 this is the default config and I'd expect the defaults to just work.

 Running devstack inside a VM with a single NIC configured and this in
 localrc:

 disable_service n-net
 enable_service q-svc
 enable_service q-agt
 enable_service q-dhcp
 enable_service q-l3
 enable_service q-meta
 enable_service neutron
 Q_USE_DEBUG_COMMAND=True

 Results in a successful install but no DHCP address assigned to hosts I
 launch and other oddities like no CIDR in nova net-list output.

 Is this still the default way to set things up for single node? It is
 according to https://wiki.openstack.org/wiki/NeutronDevstack


>>>
>>> That does look ok: I have an essentially equivalent local.conf:
>>>
>>> ...
>>> ENABLED_SERVICES+=,-n-net
>>> ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
>>>
>>> I don't have 'neutron' specifically enabled... not sure if/why that
>>> might make any difference tho. However instance launching and ip address
>>> assignment seem to work ok.
>>>
>>> However I *have* seen the issue of instances not getting ip addresses in
>>> single host setups, and it is often due to use of virt io with bridges
>>> (which is the default, I think). Try:
>>>
>>> nova.conf:
>>> ...
>>> libvirt_use_virtio_for_bridges=False
>>
>> Thanks for the suggestion. At least in master this was replaced by a new
>> section, libvirt, but even setting it to False didn't do the trick for
>> me. I see the same behavior.
>
> OK, I've tested the havana and icehouse branches in F-20 and they don't
> seem to have a working neutron either. I see the same thing. I can
> launch a VM but it isn't getting a DHCP address.
>
> Maybe I'll try in some Ubuntu release to see if this is Fedora-specific.
>
> rob
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Neutron][ML2] Migration bug - Bug #1332564

2014-07-01 Thread Collins, Sean
Hi,

Anthony Veiga and I are currently working on migrating a lab environment
from Neutron Havana w/ OVS plugin to Icehouse w/ ML2 plugin and ran into
a bug[0].

Now the patch that adds the ml2 migration[1] mentions that it was tested
without any data, and that it is waiting on grenade support for neutron
to be merged. 

Firstly, who has strong mysql-fu to pair up and track down why the
foreign key statement is failing in 1332564, and secondly what can I do
to help add grenade support for the OVS/LB -> ML2 migration? This is a
critical bug and I can probably devote a great deal of my time to fixing
this.


[0]: https://bugs.launchpad.net/neutron/+bug/1332564

[1]: https://review.openstack.org/#/c/76533/10
-- 
Sean M. Collins


[openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-01 Thread Igor Kalnitsky
Hi fuelers,

I'm working on Patching for OpenStack and I have the following questions:

1/ We need to save new puppets and repos under some versioned folder:

/etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.

So the question is which version to use? Fuel or OpenStack?

2/ Which version do we have to use for our patches? We have OpenStack 2014.1.
Should we use 2014.1.1 for our first patch? Or do we have to use another
format?

I need a quick reply since these questions have to be solved for 5.0.1 too.

Thanks,
Igor


Re: [openstack-dev] [infra][oslo][neutron] Need help getting oslo.messaging 1.4.0.0a2 in global requirements

2014-07-01 Thread Paul Michali (pcm)
Thanks for the update!  Is there anything I need to do for my review 103536 for 
adding 1.4.0.0a2 to global requirements?

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 1, 2014, at 11:46 AM, Monty Taylor  wrote:

> On 06/30/2014 02:28 PM, Mark McLoughlin wrote:
>> On Mon, 2014-06-30 at 16:52 +, Paul Michali (pcm) wrote:
>>> I have out for review 103536 to add this version to global
>>> requirements, so that Neutron has an oslo fix (review 102909) for
>>> encoding failure, which affects some gate runs. This review for global
>>> requirements is failing requirements check
>>> (http://logs.openstack.org/36/103536/1/check/check-requirements-integration-dsvm/6d9581c/console.html#_2014-06-30_12_34_56_921).
>>>  I did a recheck bug 1334898, but see the same error, with the release not 
>>> found, even though it is in PyPI. Infra folks say this is a known issue 
>>> with pushing out pre-releases.
>>> 
>>> 
>>> Do we have a work-around?
>>> Any proposed solution to try?
>> 
>> That makes two oslo alpha releases which are failing
>> openstack/requirements checks:
>> 
>>  https://review.openstack.org/103256
>>  https://review.openstack.org/103536
>> 
>> and an issue with the py27 stable/icehouse test jobs seemingly pulling
>> in oslo.messaging 1.4.0.0a2:
>> 
>>  http://lists.openstack.org/pipermail/openstack-dev/2014-June/039021.html
>> 
>> and these comments on IRC:
>> 
>>  
>> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2014-06-30.log
>> 
>>  2014-06-30T15:27:33   hi. Need help with getting latest 
>> oslo.messaging release added to global requirements. Can someone advise on 
>> the issues I see.
>>  2014-06-30T15:28:06   pcm__: there are issues adding oslo 
>> pre-releases to the mirror right now - we're working on a solution ... so 
>> you're not alone at least :)
>>  2014-06-30T15:29:02   mordred: Jenkins failed saying that it could 
>> not find the release, but it is available.
>>  2014-06-30T15:29:31   pcm__: mordred: is the fix to remove the 
>> check for --no-use-wheel in the check-requirements-integration-dsvm ?
>>  2014-06-30T15:29:55   bknudson: nope. it's to completely change 
>> our mirroring infrastructure :)
>> 
>> Presumably there's more information somewhere on what solution infra are
>> working on, but that's all I got ...
>> 
>> We knew this pre-release-with-wheels stuff was going to be a little
>> rocky, so this isn't surprising. Hopefully it'll get sorted out soon.
> 
> We're spinning up a new full mirror using bandersnatch. The new mirror
> is live and we've reconfigured the slaves to use it.
> 
> At this point, all builds from all slaves should be using the new full
> mirror. We have one or two things we still need to do to fix the oslo
> pre-release thing (basically, we need to rework the requirements
> integration job).
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-01 Thread Asselin, Ramy
Anita,

This line [1] is effectively a sub-set of tempest-dsvm-full, and what we're
currently running manually now. As far as I understood, this is the current
minimum. The exact sub-set (or full set, or whether additional tests are
allowed) is still under discussion.

I created a WIP reference patch [2] for the cinder team that mimics the above
script to run these tests, based on the similar Jenkins job-template used by
Openstack-Jenkins "{pipeline}-tempest-dsvm-*"

Ramy

[1] 
https://github.com/openstack-dev/devstack/blob/master/driver_certs/cinder_driver_cert.sh#L97
[2] 
https://review.openstack.org/#/c/93141/1/modules/openstack_project/files/jenkins_job_builder/config/devstack-gate.yaml

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Tuesday, July 01, 2014 7:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

On 1 July 2014 14:44, Anita Kuno  wrote:

> On 07/01/2014 05:56 AM, Duncan Thomas wrote:
>> For the record, cinder gave a very clear definition of success in our 
>> 3rd party guidelines: Passes every test in tempest-dsvm-full. If that
>> needs documenting somewhere else, please let me know. It may of 
>> course change as we learn more about how 3rd party CI works out, so 
>> the fewer places it is duplicated the better, maybe?
>>
> Thanks Duncan, I wasn't aware of this. Can we start with a url for 
> those guidelines in your reply to this post and then go from there?

https://wiki.openstack.org/wiki/Cinder/certified-drivers should make it clear 
but doesn't, I'll get that cleared up.

https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
mentions it, and various weekly meeting minutes also mention it.


--
Duncan Thomas




[openstack-dev] [Keystone] [Swift] Question re. keystone domains

2014-07-01 Thread Coles, Alistair
We have a change [1] under review in Swift to make access control lists 
compatible with migration to keystone v3 domains. The change makes two 
assumptions that I'd like to double-check with keystone folks:


1.  That a project can never move from one domain to another.

2.  That the underscore character cannot appear in a valid domain id - more 
specifically, that the string '_unknown' cannot be confused with a domain id.

Are those safe assumptions?
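
On assumption 2, the ACL code effectively relies on a check like the following
— valid only under the assumption (which is exactly what needs confirming from
keystone folks) that real domain ids are auto-generated uuid hex, or the
literal 'default', neither of which contains an underscore:

```shell
#!/bin/sh
# Sketch: '_unknown' as a sentinel that can never collide with a real
# domain id, under the assumption stated above.
is_possible_domain_id() {
    case "$1" in
        *_*) return 1 ;;  # an underscore rules out uuid-hex and "default"
        *)   return 0 ;;
    esac
}
is_possible_domain_id "_unknown" || echo "sentinel, not a domain id"
is_possible_domain_id "77dd0ad97ecb41c09307acede0b49ffc" && echo "plausible domain id"
is_possible_domain_id "default" && echo "plausible domain id"
```

If keystone ever allowed user-supplied domain ids containing underscores, the
'_unknown' sentinel could collide with a real id, which is why the question is
worth asking.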

Thanks,
Alistair

[1] https://review.openstack.org/86430


Re: [openstack-dev] Small MongoDB World summary

2014-07-01 Thread Joshua Harlow
Did they change the license? ;)

Sent from my really tiny device...

> On Jul 1, 2014, at 7:17 AM, "Flavio Percoco"  wrote:
> 
> Hi,
> 
> I attended MongoDB World last week and I thought about giving y'all a
> heads up of what's coming next in mongodb-2.8.
> 
> - DB Lock will be pushed down to the document level. They demoed this
> and I gotta admit, it was quite mind-blowing.
> - Support for different storage engines (ala MySQL). They demoed an
> in-memory storage engine, the original mmap based one and one based
> on rocksdb.
> - Support for mongodb cluster deployment using MMS
> - Support for cluster upgrades using MMS
> 
> This is just a small summary. If you're interested in mongodb, I
> invite you to watch the keynotei[0]. If you're interested in spending
> some extra time listening to talks from the conference, I'd recommend
> you to take a look at:
> 
> - 
> http://www.mongodb.com/presentations/mongodb-world-2014-keynote-charity-majors
> - http://www.mongodb.com/presentations/replication-internals-life-write-0
> - 
> https://world.mongodb.com/mongodb-world/session/virtualizing-mongodb-cloud-ec2-openstack-vmsor-dedicated
> 
> There were other good talks too: https://world.mongodb.com/schedule
> 
> [0] 
> http://www.mongodb.com/presentations/mongodb-world-2014-keynote-eliot-horowitz
> 
> Flavio
> 
> -- 
> @flaper87
> Flavio Percoco
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [DevStack] neutron config not working

2014-07-01 Thread Rob Crittenden
Rob Crittenden wrote:
> Mark Kirkwood wrote:
>> On 25/06/14 10:59, Rob Crittenden wrote:
>>> Before I get punted onto the operators list, I post this here because
>>> this is the default config and I'd expect the defaults to just work.
>>>
>>> Running devstack inside a VM with a single NIC configured and this in
>>> localrc:
>>>
>>> disable_service n-net
>>> enable_service q-svc
>>> enable_service q-agt
>>> enable_service q-dhcp
>>> enable_service q-l3
>>> enable_service q-meta
>>> enable_service neutron
>>> Q_USE_DEBUG_COMMAND=True
>>>
>>> Results in a successful install but no DHCP address assigned to hosts I
>>> launch and other oddities like no CIDR in nova net-list output.
>>>
>>> Is this still the default way to set things up for single node? It is
>>> according to https://wiki.openstack.org/wiki/NeutronDevstack
>>>
>>>
>>
>> That does look ok: I have an essentially equivalent local.conf:
>>
>> ...
>> ENABLED_SERVICES+=,-n-net
>> ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
>>
>> I don't have 'neutron' specifically enabled... not sure if/why that
>> might make any difference tho. However instance launching and ip address
>> assignment seem to work ok.
>>
>> However I *have* seen the issue of instances not getting ip addresses in
>> single host setups, and it is often due to use of virt io with bridges
>> (which is the default, I think). Try:
>>
>> nova.conf:
>> ...
>> libvirt_use_virtio_for_bridges=False
> 
> Thanks for the suggestion. At least in master this was replaced by a new
> section, libvirt, but even setting it to False didn't do the trick for
> me. I see the same behavior.

OK, I've tested the havana and icehouse branches in F-20 and they don't
seem to have a working neutron either. I see the same thing. I can
launch a VM but it isn't getting a DHCP address.

Maybe I'll try in some Ubuntu release to see if this is Fedora-specific.

rob




Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-01 Thread Michael Kerrin
I propose making mysql an abstract element, so the user must choose either the
percona or the mariadb-rpm element. CI must be set up correctly.

Michael

On Monday 30 June 2014 12:02:09 Clint Byrum wrote:
> Excerpts from Michael Kerrin's message of 2014-06-30 02:16:07 -0700:
> > I am trying to finish off https://review.openstack.org/#/c/90134 - percona
> > xtradb cluster for debian based system.
> > 
> > I have read into this thread that I can error out on Redhat systems when
> > trying to install percona and tell them to use mariadb instead, percona
> > isn't support here. Is this correct?
> 
> Probably. But if CI for Fedora breaks as a result you'll need a solution
> first.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [infra][oslo][neutron] Need help getting oslo.messaging 1.4.0.0a2 in global requirements

2014-07-01 Thread Monty Taylor
On 06/30/2014 02:28 PM, Mark McLoughlin wrote:
> On Mon, 2014-06-30 at 16:52 +, Paul Michali (pcm) wrote:
>> I have out for review 103536 to add this version to global
>> requirements, so that Neutron has an oslo fix (review 102909) for
>> encoding failure, which affects some gate runs. This review for global
>> requirements is failing requirements check
>> (http://logs.openstack.org/36/103536/1/check/check-requirements-integration-dsvm/6d9581c/console.html#_2014-06-30_12_34_56_921).
>>  I did a recheck bug 1334898, but see the same error, with the release not 
>> found, even though it is in PyPI. Infra folks say this is a known issue with 
>> pushing out pre-releases.
>>
>>
>> Do we have a work-around?
>> Any proposed solution to try?
> 
> That makes two oslo alpha releases which are failing
> openstack/requirements checks:
> 
>   https://review.openstack.org/103256
>   https://review.openstack.org/103536
> 
> and an issue with the py27 stable/icehouse test jobs seemingly pulling
> in oslo.messaging 1.4.0.0a2:
> 
>   http://lists.openstack.org/pipermail/openstack-dev/2014-June/039021.html
> 
> and these comments on IRC:
> 
>   
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2014-06-30.log
> 
>   2014-06-30T15:27:33   hi. Need help with getting latest 
> oslo.messaging release added to global requirements. Can someone advise on 
> the issues I see.
>   2014-06-30T15:28:06   pcm__: there are issues adding oslo 
> pre-releases to the mirror right now - we're working on a solution ... so 
> you're not alone at least :)
>   2014-06-30T15:29:02   mordred: Jenkins failed saying that it could 
> not find the release, but it is available.
>   2014-06-30T15:29:31   pcm__: mordred: is the fix to remove the 
> check for --no-use-wheel in the check-requirements-integration-dsvm ?
>   2014-06-30T15:29:55   bknudson: nope. it's to completely change 
> our mirroring infrastructure :)
> 
> Presumably there's more information somewhere on what solution infra are
> working on, but that's all I got ...
> 
> We knew this pre-release-with-wheels stuff was going to be a little
> rocky, so this isn't surprising. Hopefully it'll get sorted out soon.

We're spinning up a new full mirror using bandersnatch. The new mirror
is live and we've reconfigured the slaves to use it.

At this point, all builds from all slaves should be using the new full
mirror. We have one or two things we still need to do to fix the oslo
pre-release thing (basically, we need to rework the requirements
integration job).




Re: [openstack-dev] [Neutron] request for review

2014-07-01 Thread Ben Nemec
Please don't send review requests to the list.  The preferred methods
are discussed here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks.

-Ben

On 07/01/2014 03:45 AM, Xurong Yang wrote:
> Hi folks,
> 
> Could anyone please review this spec for adding a new API to count
> resources in Neutron?
> https://review.openstack.org/#/c/102199/
> 
> Thanks
> Xurong Yang
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-01 Thread Jason Dunsmore
On Mon, Jun 30 2014, Joshua Harlow wrote:

> There is a balance here that needs to be worked out and I've seen
> specs start to turn into requirements for every single patch (even if
> the patch is pretty small). I hope we can rework the 'balance in the
> force' to avoid being so strict that every little thing requires a
> spec. This will not end well for us as a community.
>
> How have others thought the spec process has worked out so far? Too
> much overhead, too little…?
>
> I personally am of the opinion that specs should be used for large
> topics (defining large is of course arbitrary); and I hope we find the
> right balance to avoid scaring everyone away from working with
> openstack. Maybe all of this is part of openstack maturing, I'm not
> sure, but it'd be great if we could have some guidelines around when
> is a spec needed and when isn't it and take it into consideration when
> requesting a spec that the person you have requested may get
> frustrated and just leave the community (and we must not have this
> happen) if you ask for it without explaining why and how clearly.

+1 I think specs are too much overhead for small features.  A set of
guidelines about when specs are needed would be sufficient.  Leave the
option about when to submit a design vs. when to submit code to the
contributor.

Jason



Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-01 Thread Alexei Kornienko

Hi,

Please see some minor comments inline.
Do you think we can schedule some time to discuss this topic at one of 
the upcoming meetings?
We could come out with some kind of summary and an action plan to start 
working on.


Regards,

On 07/01/2014 05:52 PM, Ihar Hrachyshka wrote:


On 01/07/14 15:55, Alexei Kornienko wrote:

Hi,

Thanks for detailed answer. Please see my comments inline.

Regards,

On 07/01/2014 04:28 PM, Ihar Hrachyshka wrote: On 30/06/14 21:34,
Alexei Kornienko wrote:

Hello,


My understanding is that your analysis is mostly based on
running a profiler against the code. Network operations can
be bottlenecked in other places.

You compare 'simple script using kombu' with 'script using
oslo.messaging'. You don't compare script using
oslo.messaging before refactoring and 'after that. The latter
would show whether refactoring was worth the effort. Your
test shows that oslo.messaging performance sucks, but it's
not definite that hotspots you've revealed, once fixed, will
show huge boost.

My concern is that it may turn out that once all the effort
to refactor the code is done, we won't see major difference.
So we need base numbers, and performance tests would be a
great helper here.


It's really sad for me to see so little faith in what I'm
saying. The test I've done using plain kombu driver was
needed exactly to check that network is not the bottleneck
for messaging performance. If you don't believe in my
performance analysis we could ask someone else to do their
own research and provide results.

Technology is not about faith. :)

First, let me make it clear I'm *not* against refactoring or
anything that will improve performance. I'm just a bit skeptical,
but hopefully you'll be able to show everyone I'm wrong, and then
the change will occur. :)

To add more velocity to your effort, strong arguments should be
present. To facilitate that, I would start from adding performance
tests that would give us some basis for discussion of changes
proposed later.

Please see below for detailed answer about performance tests
implementation. It explains a bit why it's hard to present
arguments that would be strong enough for you. I may run
performance tests locally but it's not enough for community.

Yes, that's why shipping some tests ready to run with oslo.messaging
can help. Science is about reproducibility, right? ;)


And in addition I've provided some links to existing
implementation with places that IMHO cause bottlenecks. From my
point of view that code is doing obviously stupid things (like
closing/opening sockets for each message sent).

That indeed sounds bad.


That is enough for me to rewrite it even without additional
proofs that it's wrong.

[Full disclosure: I'm not as involved into oslo.messaging internals as
you probably are, so I may speak out dumb things.]

I wonder whether there are easier ways to fix that particular issue
without rewriting everything from scratch. Like, provide a pool of
connections and make send() functions use it instead of creating new
connections (?)
I've tried to find a way to fix that without big changes but 
unfortunately I've failed to do so.
The problem I see is that the connection pool is defined and used on one 
layer of the library while the problem is on another.
To fix these issues we need to change several layers of code, and it's 
shared between two drivers - rabbit and qpid.
Because of this it seems really hard to make logically finished, working 
patches that would allow us to move in the proper direction without a 
big refactoring of the drivers' structure.
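For illustration, the pooled-send fix being discussed can be sketched in a
few lines of Python. FakeConnection and ConnectionPool are hypothetical
stand-ins invented here, not oslo.messaging classes:

```python
import queue

class FakeConnection:
    """Stand-in for a broker connection; a real driver would hold a
    kombu connection here. (Hypothetical class, for illustration only.)"""
    opened = 0

    def __init__(self):
        FakeConnection.opened += 1

    def publish(self, message):
        return "sent:%s" % message

class ConnectionPool:
    """Reuse a fixed set of connections across send() calls instead of
    opening and closing one per message -- the fix sketched above."""
    def __init__(self, size=2):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(FakeConnection())

    def send(self, message):
        conn = self._pool.get()        # blocks if all connections are busy
        try:
            return conn.publish(message)
        finally:
            self._pool.put(conn)       # return the connection for reuse

pool = ConnectionPool(size=2)
results = [pool.send("msg-%d" % i) for i in range(100)]
print(FakeConnection.opened)           # 2 -- two connections serve 100 sends
```

A real driver would also need eviction and heartbeat handling, but the
shape of the change is the same: the pool lives at the layer doing the
sends, not several layers away.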



Then, describing proposed details in a spec will give more exposure
to your ideas. At the moment, I see general will to enhance the
library, but not enough details on how to achieve this.
Specification can make us think not about the burden of change that
obviously makes people skeptic about rewrite-all approach, but
about specific technical issues.

I agree that we should start with a spec. However instead of
having spec of needed changes I would prefer to have a spec
describing needed functionality of the library (it may differ
from existing functionality).

Meaning, breaking API, again?
It's not about breaking the API; it's about making it more logical and 
independent. Right now it's not clear to me which API classes are used 
and how they are used.
A lot of driver details leak outside the API, and that makes it hard to 
improve a driver without changing the API.
What I would like to see is a clear definition of what the library should 
provide and the API interface it should implement.
It may be a little bit Java-like, so the API should be defined and frozen, 
and anyone could propose their driver implementation using kombu/qpid/zeromq 
or pigeons and trained dolphins to deliver messages.


This would allow us to change drivers without touching the API and test 
their performance separately.
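As a rough sketch of that frozen-API idea (all names invented for
illustration; this is not the actual oslo.messaging interface):

```python
import abc

class Driver(abc.ABC):
    """A frozen transport interface of the kind proposed above: the
    library defines it once, and each backend implements it."""

    @abc.abstractmethod
    def send(self, target, message):
        """Deliver message to target."""

    @abc.abstractmethod
    def listen(self, target):
        """Return the messages queued for target."""

class InMemoryDriver(Driver):
    """Trivial backend, present only to show that drivers are swappable."""
    def __init__(self):
        self._queues = {}

    def send(self, target, message):
        self._queues.setdefault(target, []).append(message)

    def listen(self, target):
        return self._queues.get(target, [])

driver = InMemoryDriver()
driver.send("compute", {"method": "ping"})
print(driver.listen("compute"))   # [{'method': 'ping'}]
```

The point of the abstract base class is exactly the separation argued for
above: the interface can stay fixed while kombu/qpid/zeromq (or dolphin)
implementations vary freely and are benchmarked independently.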



Using such a spec we could decide what is needed and what needs
to be removed to

Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-01 Thread Ihar Hrachyshka

On 01/07/14 15:55, Alexei Kornienko wrote:
> Hi,
> 
> Thanks for detailed answer. Please see my comments inline.
> 
> Regards,
> 
> On 07/01/2014 04:28 PM, Ihar Hrachyshka wrote: On 30/06/14 21:34,
> Alexei Kornienko wrote:
 Hello,
 
 
 My understanding is that your analysis is mostly based on
 running a profiler against the code. Network operations can
 be bottlenecked in other places.
 
 You compare 'simple script using kombu' with 'script using 
 oslo.messaging'. You don't compare a script using
 oslo.messaging before refactoring and one after. The latter
 would show whether refactoring was worth the effort. Your
 test shows that oslo.messaging performance sucks, but it's
 not definite that hotspots you've revealed, once fixed, will
 show huge boost.
 
 My concern is that it may turn out that once all the effort
 to refactor the code is done, we won't see major difference.
 So we need base numbers, and performance tests would be a
 great helper here.
 
 
 It's really sad for me to see so little faith in what I'm
 saying. The test I've done using plain kombu driver was
 needed exactly to check that network is not the bottleneck
 for messaging performance. If you don't believe in my
 performance analysis we could ask someone else to do their
 own research and provide results.
> Technology is not about faith. :)
> 
> First, let me make it clear I'm *not* against refactoring or
> anything that will improve performance. I'm just a bit skeptical,
> but hopefully you'll be able to show everyone I'm wrong, and then
> the change will occur. :)
> 
> To add more velocity to your effort, strong arguments should be 
> present. To facilitate that, I would start from adding performance 
> tests that would give us some basis for discussion of changes
> proposed later.
>> Please see below for detailed answer about performance tests 
>> implementation. It explains a bit why it's hard to present
>> arguments that would be strong enough for you. I may run
>> performance tests locally but it's not enough for community.

Yes, that's why shipping some tests ready to run with oslo.messaging
can help. Science is about reproducibility, right? ;)

> 
>> And in addition I've provided some links to existing
>> implementation with places that IMHO cause bottlenecks. From my
>> point of view that code is doing obviously stupid things (like 
>> closing/opening sockets for each message sent).

That indeed sounds bad.

>> That is enough for me to rewrite it even without additional
>> proofs that it's wrong.

[Full disclosure: I'm not as involved into oslo.messaging internals as
you probably are, so I may speak out dumb things.]

I wonder whether there are easier ways to fix that particular issue
without rewriting everything from scratch. Like, provide a pool of
connections and make send() functions use it instead of creating new
connections (?)

> 
> Then, describing proposed details in a spec will give more exposure
> to your ideas. At the moment, I see general will to enhance the
> library, but not enough details on how to achieve this.
> Specification can make us think not about the burden of change that
> obviously makes people skeptic about rewrite-all approach, but
> about specific technical issues.
>> I agree that we should start with a spec. However instead of
>> having spec of needed changes I would prefer to have a spec
>> describing needed functionality of the library (it may differ
>> from existing functionality).

Meaning, breaking API, again?

>> Using such a spec we could decide what is needed and what needs
>> to be removed to achieve what we need.
> 
 Problem with refactoring that I'm planning is that it's not
 a minor refactoring that can be applied in one patch but it's
 the whole library rewritten from scratch.
> You can still maintain a long sequence of patches, like we did when
> we migrated neutron to oslo.messaging (it was like ~25 separate
> pieces).
>> Taking into account possible gate issues I would like to avoid
>> long series of patches since they won't be able to land at the
>> same time and rebasing will become a huge pain.

But you're the one proposing the change; you need to take the burden.
Having a new branch for everything-rewritten version of the library
means that each bug fix or improvement to the library will require
being tracked by each developer in two branches, with significantly
different code. I think it's more honest to put rebase pain on people
who rework the code than on everyone else.

>> If we decide to start working on 2.0 API/implementation I think a
>> topic branch 2.0a would be much better.

I respectfully disagree. See above.

> 
 Existing messaging code was written long long time ago (in a
 galaxy far far away maybe?) and it was copy-pasted directly
 from nova. It was not built as a library and it was never

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-07-01 Thread Lyle, David
Welcome Zhenguo and Ana to Horizon core.

David


On 6/20/14, 3:17 PM, "Lyle, David"  wrote:

>I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.
>
>Zhenguo has been a prolific reviewer for the past two releases, providing
>high-quality reviews, and has contributed a significant number of patches
>over the past three releases.
>
>Ana has been a significant reviewer in the Icehouse and Juno release
>cycles. She has also contributed several patches in this timeframe to both
>Horizon and tuskar-ui.
>
>Please feel free to respond in public or private your support or any
>concerns.
>
>Thanks,
>David
>
>




Re: [openstack-dev] [qa] issues adding functionality to javelin2

2014-07-01 Thread Chris Dent

On Tue, 1 Jul 2014, Matthew Treinish wrote:


In the meantime you can easily re-enable the full tracebacks by setting both
verbose and debug logging in the tempest config file.


Is there a way to say, via config, "no, I really do want exceptions to
cause the code to exit and pooh on the console"?


Second thing: When run as above the path to image files is
insufficient for them to be loaded. I overcame this by hardcoding a
BASENAME (see my review in progress[1]). Note that because of the
swallowed exceptions you can run (in create or check) and not realize
that no image files were found. The code silently exits.


Why? Looking at the code if you use a full path for the image location in the
yaml file it should just call open() on it. I can see an issue if you're using
relative paths in the yaml, which I think is probably the problem.


Sure, but the resources.yaml file is presented as if it is canonical,
and in that canonical form it can't work. Presumably it should either
work or state that it can't work. Especially since it doesn't let you
know that it didn't work when it doesn't work (because of the
swallowed errors).


So I think the takeaway here is that we should consider javelin2 still very much
a WIP. It's only been a few weeks since it was initially merged and it's still
not stable enough so that we can gate on it. Until we are running it in some
fashion as part of normal gating then we should be hesitant to add new features
and functionality to it.


I'll press pause on the ceilometer stuff for a while. Thanks for the
quick response. I just wanted to make sure I wasn't crazy. I guess
not, at least not because of this stuff.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] Small MongoDB World summary

2014-07-01 Thread Flavio Percoco

Hi,

I attended MongoDB World last week and I thought about giving y'all a
heads up of what's coming next in mongodb-2.8.

- DB Lock will be pushed down to the document level. They demoed this
 and I gotta admit, it was quite mind-blowing.
- Support for different storage engines (ala MySQL). They demoed an
 in-memory storage engine, the original mmap based one and one based
 on rocksdb.
- Support for mongodb clusters deployment using MMS
- Support for cluster upgrades using MMS

This is just a small summary. If you're interested in mongodb, I
invite you to watch the keynote [0]. If you're interested in spending
some extra time listening to talks from the conference, I'd recommend
you to take a look at:

- http://www.mongodb.com/presentations/mongodb-world-2014-keynote-charity-majors
- http://www.mongodb.com/presentations/replication-internals-life-write-0
- https://world.mongodb.com/mongodb-world/session/virtualizing-mongodb-cloud-ec2-openstack-vmsor-dedicated

There were other good talks too: https://world.mongodb.com/schedule

[0] http://www.mongodb.com/presentations/mongodb-world-2014-keynote-eliot-horowitz

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-01 Thread Duncan Thomas
On 1 July 2014 14:44, Anita Kuno  wrote:

> On 07/01/2014 05:56 AM, Duncan Thomas wrote:
>> For the record, cinder gave a very clear definition of success in our
>> 3rd party guidelines: Passes every test in tempest-dsvm-full. If that
>> needs documenting somewhere else, please let me know. It may of course
>> change as we learn more about how 3rd party CI works out, so the fewer
>> places it is duplicated the better, maybe?
>>
> Thanks Duncan, I wasn't aware of this. Can we start with a url for those
> guidelines in your reply to this post and then go from there?

https://wiki.openstack.org/wiki/Cinder/certified-drivers should make
it clear but doesn't, I'll get that cleared up.

https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
mentions it, and various weekly meeting minutes also mention it.


-- 
Duncan Thomas



Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-01 Thread Alexei Kornienko

Hi,

Thanks for detailed answer.
Please see my comments inline.

Regards,

On 07/01/2014 04:28 PM, Ihar Hrachyshka wrote:


On 30/06/14 21:34, Alexei Kornienko wrote:

Hello,


My understanding is that your analysis is mostly based on running
a profiler against the code. Network operations can be bottlenecked
in other places.

You compare 'simple script using kombu' with 'script using
oslo.messaging'. You don't compare a script using oslo.messaging
before refactoring and one after. The latter would show whether
refactoring was worth the effort. Your test shows that
oslo.messaging performance sucks, but it's not definite that
hotspots you've revealed, once fixed, will show huge boost.

My concern is that it may turn out that once all the effort to
refactor the code is done, we won't see major difference. So we
need base numbers, and performance tests would be a great helper
here.


It's really sad for me to see so little faith in what I'm saying.
The test I've done using plain kombu driver was needed exactly to
check that network is not the bottleneck for messaging
performance. If you don't believe in my performance analysis we
could ask someone else to do their own research and provide
results.

Technology is not about faith. :)

First, let me make it clear I'm *not* against refactoring or anything
that will improve performance. I'm just a bit skeptical, but hopefully
you'll be able to show everyone I'm wrong, and then the change will
occur. :)

To add more velocity to your effort, strong arguments should be
present. To facilitate that, I would start from adding performance
tests that would give us some basis for discussion of changes proposed
later.

Please see below for a detailed answer about the performance tests
implementation. It explains a bit why it's hard to present arguments
that would be strong enough for you.

I may run performance tests locally, but that's not enough for the community.
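A reproducible baseline of the kind being discussed doesn't have to be
elaborate. One possible micro-benchmark sketch, with a no-op transport
standing in for a real driver (the names here are invented, not part of
oslo.messaging):

```python
import time

def benchmark(send, n=10000):
    """Time n calls of a send callable and return messages/second."""
    start = time.perf_counter()
    for i in range(n):
        send("payload-%d" % i)
    elapsed = time.perf_counter() - start
    return n / elapsed

# Baseline: appending to a list is a no-op "transport", giving the upper
# bound; pointing `send` at a real driver's publish call gives the number
# to compare against it.
sent = []
rate = benchmark(sent.append)
print("%.0f msg/s baseline" % rate)
```

Shipping something like this alongside the library would let anyone
reproduce the numbers instead of taking them on faith.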

And in addition I've provided some links to the existing implementation, 
with the places that IMHO cause bottlenecks.
From my point of view that code is doing obviously stupid things (like 
closing/opening sockets for each message sent).
That is enough for me to rewrite it even without additional proof that 
it's wrong.


Then, describing proposed details in a spec will give more exposure to
your ideas. At the moment, I see general will to enhance the library,
but not enough details on how to achieve this. Specification can make
us think not about the burden of change that obviously makes people
skeptic about rewrite-all approach, but about specific technical issues.
I agree that we should start with a spec. However, instead of a spec of 
the needed changes I would prefer a spec describing the needed 
functionality of the library (it may differ from the existing functionality).
Using such a spec we could decide what is needed and what needs to be 
removed to achieve what we need.



Problem with refactoring that I'm planning is that it's not a
minor refactoring that can be applied in one patch but it's the
whole library rewritten from scratch.

You can still maintain a long sequence of patches, like we did when we
migrated neutron to oslo.messaging (it was like ~25 separate pieces).
Taking into account possible gate issues, I would like to avoid a long 
series of patches, since they won't be able to land at the same time and 
rebasing will become a huge pain.
If we decide to start working on a 2.0 API/implementation, I think a topic 
branch 2.0a would be much better.



Existing messaging code was written long long time ago (in a galaxy
far far away maybe?) and it was copy-pasted directly from nova. It
was not built as a library and it was never intended to be used
outside of nova. Some parts of it cannot even work normally cause
it was not designed to work with drivers like zeromq (matchmaker
stuff).

oslo.messaging is NOT the code you can find in oslo-incubator rpc
module. It was hugely rewritten to expose a new, cleaner API. This is
btw one of the reasons migration to this new library is so painful. It
was painful to move to oslo.messaging, so we need clear need for a
change before switching to yet another library.
The API indeed has changed, but the general implementation details and 
processing flow go way back to 2011 and the nova code (for example, the 
general Publisher/Consumer implementation in impl_rabbit).

That's the code I'm talking about.

Refactoring as I see it will do the opposite thing. It will keep intact 
as much of the API as possible but change the internals to make it more 
efficient (that's why I call it refactoring). So the 2.0 version might be 
(partially?) backwards compatible and migration won't be such a pain.



The reason I've raised this question on the mailing list was to get
some agreement about future plans for oslo.messaging development and to
start working on it in coordination with the community. For now I don't
see any action plan emerging from it. I would like to see us
bringing more co

Re: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-01 Thread Dugger, Donald D
At a minimum I can arrange for a phone bridge at the sprint (Intel `lives` on 
phone conferences), so we can certainly do that.  Video might be more 
problematic; I know we did something with Google Plus at the last sprint, but I 
don't know the details on that.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com] 
Sent: Tuesday, July 1, 2014 7:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova] Mid-cycle sprint for remote people ?

Hi,

I won't be able to attend the mid-cycle sprint due to a good family reason (a 
new baby 2.0 release expected to land by these dates), so I'm wondering if it's 
possible to webcast some of the sessions so people who are not there can still 
share their voices?

-Sylvain




Re: [openstack-dev] [qa] issues adding functionality to javelin2

2014-07-01 Thread Matthew Treinish
On Tue, Jul 01, 2014 at 02:01:19PM +0100, Chris Dent wrote:
> 
> I've been working to add ceilometer checks in javelin2. Doing so has
> revealed some issues that appear to be a fairly big deal but I suppose
> there's some chance I'm doing things completely wrong.
> 
> For reference my experiments are being done with a devstack with
> ceilometer enabled, running javelin as:
> 
> python tempest/cmd/javelin.py -m check \
> -r tempest/cmd/resources.yaml
> 
> replace "check" with "create" as required.
> 
> First thing I noticed: setting sys.excepthook in setup() in
> tempest/openstack/common/log.py is causing exceptions to be swallowed
> such that when making simple runs it is not obvious that things have
> gone wrong. You can check $? and then look in templest.log but the
> content of tempest.log is just the exception message, not its type nor
> any traceback. If you wish to follow along at home comment out line 427
> in tempest/openstack/common/log.py.
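The swallowing behaviour described above can be reproduced in isolation.
The quiet_hook below is an illustrative stand-in for a message-only
excepthook, not tempest's actual code:

```python
import io
import sys
import traceback

def quiet_hook(exc_type, exc_value, exc_tb):
    """A hook like the one described above: it records only the message,
    so the exception type and traceback never reach the operator.
    (Illustrative, not the tempest/oslo implementation.)"""
    sys.stderr.write("ERROR: %s\n" % exc_value)

try:
    raise ValueError("image file not found")
except ValueError:
    exc_info = sys.exc_info()

# Default behaviour: a full traceback, including the exception type.
default_out = io.StringIO()
traceback.print_exception(*exc_info, file=default_out)

# With the quiet hook installed as sys.excepthook: the message alone.
quiet_out = io.StringIO()
real_stderr, sys.stderr = sys.stderr, quiet_out
try:
    quiet_hook(*exc_info)
finally:
    sys.stderr = real_stderr

print("ValueError" in default_out.getvalue())   # True
print(quiet_out.getvalue().strip())             # ERROR: image file not found
```

With only the second form in tempest.log, a run that silently found no
image files looks identical to a clean exit, which is the complaint here.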

This was a bug with oslo logging, Sean already pushed a fix for it. I have a
sync patch here:

https://review.openstack.org/#/c/103886/

In the meantime you can easily re-enable the full tracebacks by setting both
verbose and debug logging in the tempest config file.
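Concretely, that means something like the following in tempest.conf
(assuming the usual oslo logging option names):

```ini
# tempest.conf -- turn full tracebacks back on
[DEFAULT]
verbose = True
debug = True
```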

> 
> Second thing: When run as above the path to image files is
> insufficient for them to be loaded. I overcame this by hardcoding a
> BASENAME (see my review in progress[1]). Note that because of the
> swallowed exceptions you can run (in create or check) and not realize
> that no image files were found. The code silently exits.

Why? Looking at the code if you use a full path for the image location in the
yaml file it should just call open() on it. I can see an issue if you're using
relative paths in the yaml, which I think is probably the problem.

> 
> Third thing: Much of the above could still work if there were a
> different resources.yaml or the PWD was set specifically for test runs.
> However, this patchset[2] adds support for checking, creating and
> attaching volumes. Assuming it is expected to use the volumes API
> under tempest/services, some of the calls are being made with the wrong
> number of arguments (client.volumes.{create_volume,attach_volume}). Again
> these errors aren't obvious because the exceptions are swallowed.

So if it doesn't work then it's a bug. (I haven't verified this yet) But, fixing
issues like this one at a time as they slip through review is not a real
solution. As the next step we need to get javelin greenlit as part of the
grenade job. (or add unit tests) This is the same problem we have in the rest of
tempest, where if we don't execute code as part of gating it should just be
assumed broken. (which includes javelin2 right now)

> 
> I can provide fixes for all this stuff but I wanted to first confirm
> that I'm not doing something incorrectly or missing something obvious.
> 
> Some questions:
> 
> * When javelin will be run officially as part of the tests, what is
>   the PWD, such that we can create an accurate path to the image
>   files?
> * Is the exception swallowing intentional?
> * When run in grenade will javelin have any knowledge of whether the
>   current check run is happening before or after the upgrade stage?
> 
> Thanks for any help and input. I'm on IRC as cdent if you want to find me
> there rather than respond here.

So I think the takeaway here is that we should consider javelin2 still very much
a WIP. It's only been a few weeks since it was initially merged and it's still
not stable enough so that we can gate on it. Until we are running it in some
fashion as part of normal gating then we should be hesitant to add new features
and functionality to it.

-Matt Treinish




Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-01 Thread Anita Kuno
On 07/01/2014 05:56 AM, Duncan Thomas wrote:
> On 30 June 2014 16:49, Anita Kuno  wrote:
> 
>> Right now that dashboard introduces more confusion than it alleviates
>> since the definition of "success" in regards to third party ci systems
>> has yet to be defined by the community.
> 
> For the record, cinder gave a very clear definition of success in our
> 3rd party guidelines: Passes every test in tempest-dsvm-full. If that
> needs documenting somewhere else, please let me know. It may of course
> change as we learn more about how 3rd party CI works out, so the fewer
> places it is duplicated the better, maybe?
> 
Thanks Duncan, I wasn't aware of this. Can we start with a url for those
guidelines in your reply to this post and then go from there?

Thanks,
Anita.



Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-01 Thread Anita Kuno
On 06/30/2014 09:13 PM, Jay Pipes wrote:
> On 06/30/2014 07:08 PM, Anita Kuno wrote:
>> On 06/30/2014 04:22 PM, Jay Pipes wrote:
>>> Hi Stackers,
>>>
>>> Some recent ML threads [1] and a hot IRC meeting today [2] brought up
>>> some legitimate questions around how a newly-proposed Stackalytics
>>> report page for Neutron External CI systems [2] represented the results
>>> of an external CI system as "successful" or not.
>>>
>>> First, I want to say that Ilya and all those involved in the
>>> Stackalytics program simply want to provide the most accurate
>>> information to developers in a format that is easily consumed. While
>>> there need to be some changes in how data is shown (and the wording of
>>> things like "Tests Succeeded"), I hope that the community knows there
>>> isn't any ill intent on the part of Mirantis or anyone who works on
>>> Stackalytics. OK, so let's keep the conversation civil -- we're all
>>> working towards the same goals of transparency and accuracy. :)
>>>
>>> Alright, now, Anita and Kurt Taylor were asking a very poignant
>>> question:
>>>
>>> "But what does CI tested really mean? just running tests? or tested to
>>> pass some level of requirements?"
>>>
>>> In this nascent world of external CI systems, we have a set of issues
>>> that we need to resolve:
>>>
>>> 1) All of the CI systems are different.
>>>
>>> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
>>> scripts. Others run custom Python code that spawns VMs and publishes
>>> logs to some public domain.
>>>
>>> As a community, we need to decide whether it is worth putting in the
>>> effort to create a single, unified, installable and runnable CI system,
>>> so that we can legitimately say "all of the external systems are
>>> identical, with the exception of the driver code for vendor X being
>>> substituted in the Neutron codebase."
>>>
>>> If the goal of the external CI systems is to produce reliable,
>>> consistent results, I feel the answer to the above is "yes", but I'm
>>> interested to hear what others think. Frankly, in the world of
>>> benchmarks, it would be unthinkable to say "go ahead and everyone run
>>> your own benchmark suite", because you would get wildly different
>>> results. A similar problem has emerged here.
>>>
>>> 2) There is no mediation or verification that the external CI system is
>>> actually testing anything at all
>>>
>>> As a community, we need to decide whether the current system of
>>> self-policing should continue. If it should, then language on reports
>>> like [3] should be very clear that any numbers derived from such systems
>>> should be taken with a grain of salt. Use of the word "Success" should
>>> be avoided, as it has connotations (in English, at least) that the
>>> result has been verified, which is simply not the case as long as no
>>> verification or mediation occurs for any external CI system.
>>>
>>> 3) There is no clear indication of what tests are being run, and
>>> therefore there is no clear indication of what "success" is
>>>
>>> I think we can all agree that a test has three possible outcomes: pass,
>>> fail, and skip. The results of a test suite run therefore is nothing
>>> more than the aggregation of which tests passed, which failed, and which
>>> were skipped.
>>>
>>> As a community, we must document, for each project, the expected
>>> set of tests that must be run for each merged patch into the project's
>>> source tree. This documentation should be discoverable so that reports
>>> like [3] can be crystal-clear on what the data shown actually means. The
>>> report is simply displaying the data it receives from Gerrit. The
>>> community needs to be proactive in saying "this is what is expected to
>>> be tested." This alone would allow the report to give information such
>>> as "External CI system ABC performed the expected tests. X tests passed.
>>> Y tests failed. Z tests were skipped." Likewise, it would also make it
>>> possible for the report to give information such as "External CI system
>>> DEF did not perform the expected tests.", which is excellent information
>>> in and of itself.
>>>
>>> ===
>>>
>>> In thinking about the likely answers to the above questions, I believe
>>> it would be prudent to change the Stackalytics report in question [3] in
>>> the following ways:
>>>
>>> a. Change the "Success %" column header to "% Reported +1 Votes"
>>> b. Change the phrase " Green cell - tests ran successfully, red cell -
>>> tests failed" to "Green cell - System voted +1, red cell - System
>>> voted -1"
>>>
>>> and then, when we have more and better data (for example, # tests
>>> passed, failed, skipped, etc), we can provide more detailed information
>>> than just "reported +1" or not.
>>>
>>> Thoughts?
>>>
>>> Best,
>>> -jay
>>>
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2014-June/038933.html
>>> [2]
>>> http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-30-18.01.log.html
>>>
>>>
>>> [3] http:/

Re: [openstack-dev] [all] stevedore 1.0.0.0a1

2014-07-01 Thread Yuriy Taraday
Hello, Doug.

On Mon, Jun 23, 2014 at 6:11 PM, Doug Hellmann 
wrote:

> $ git log --abbrev-commit --pretty=oneline 0.15..1.0.0.0a1
> d37b47f Merge "Updated from global requirements"
> bc2d08a Updated from global requirements
> e8e9ca1 Fix incorrect image reference in documentation
> d39ef75 Fix requirement handling in tox
> 65fc0d2 Merge "use six.add_metaclass"
> 58ff35c Updated from global requirements
> d9d11fc use six.add_metaclass
> a0721b4 Merge "fix link to entry point docs"
> ff1f0fd Merge "Updated from global requirements"
> 53b4231 Merge "Add doc requirements to venv environ"
> d5fb7a8 Updated from global requirements
> 3668de2 driver: raise by default on import failure
> 6a37e5f Add doc requirements to venv environ
> cde4b1d Merge "Import run_cross_tests.sh from oslo-incubator"
> f2af694 Import run_cross_tests.sh from oslo-incubator
> 9ae8bef fix link to entry point docs
>

I think you should add either "--no-merges" or "--merges" to your "git
log" command to avoid repetitions in this list and provide a cleaner list
of the changes that actually happened since the last release.
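To illustrate the difference, a throwaway repo shows the effect (commit
messages here are purely illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "initial"
main=$(git symbolic-ref --short HEAD)
git checkout -q -b feature
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "feature work"
git checkout -q "$main"
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "other work"
git -c user.name=t -c user.email=t@example.com merge -q --no-ff --no-edit feature

# Default log: 4 entries, including the merge commit itself
git log --abbrev-commit --pretty=oneline
# --no-merges: the same history without the merge commit (3 entries)
git log --no-merges --abbrev-commit --pretty=oneline
```

--no-merges hides the merge commits themselves, while --merges shows only
them; either avoids listing a change twice (once as a commit, once via its
merge).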

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Mid-cycle sprint for remote people ?

2014-07-01 Thread Sylvain Bauza
Hi,

I won't be able to attend the mid-cycle sprint due to a good family
reason (a new baby 2.0 release expected to land by these dates), so I'm
wondering if it's possible to webcast some of the sessions so that people
who are not there can still share their voices?

-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-07-01 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 30/06/14 21:34, Alexei Kornienko wrote:
> Hello,
> 
> 
> My understanding is that your analysis is mostly based on running
> a profiler against the code. Network operations can be bottlenecked
> in other places.
> 
> You compare 'simple script using kombu' with 'script using 
> oslo.messaging'. You don't compare script using oslo.messaging
> before refactoring and 'after that. The latter would show whether
> refactoring was worth the effort. Your test shows that
> oslo.messaging performance sucks, but it's not definite that
> hotspots you've revealed, once fixed, will show huge boost.
> 
> My concern is that it may turn out that once all the effort to 
> refactor the code is done, we won't see major difference. So we
> need base numbers, and performance tests would be a great helper
> here.
> 
> 
> It's really sad for me to see so little faith in what I'm saying. 
> The test I've done using plain kombu driver was needed exactly to
> check that network is not the bottleneck for messaging
> performance. If you don't believe in my performance analysis we
> could ask someone else to do their own research and provide
> results.

Technology is not about faith. :)

First, let me make it clear I'm *not* against refactoring or anything
that will improve performance. I'm just a bit skeptical, but hopefully
you'll be able to show everyone I'm wrong, and then the change will
occur. :)

To add more velocity to your effort, strong arguments should be
present. To facilitate that, I would start from adding performance
tests that would give us some basis for discussion of changes proposed
later.

Then, describing proposed details in a spec will give more exposure to
your ideas. At the moment, I see general will to enhance the library,
but not enough details on how to achieve this. Specification can make
us think not about the burden of change that obviously makes people
skeptic about rewrite-all approach, but about specific technical issues.

> 
> Problem with refactoring that I'm planning is that it's not a
> minor refactoring that can be applied in one patch but it's the
> whole library rewritten from scratch.

You can still maintain a long sequence of patches, like we did when we
migrated neutron to oslo.messaging (it was like ~25 separate pieces).

> Existing messaging code was written long long time ago (in a galaxy
> far far away maybe?) and it was copy-pasted directly from nova. It
> was not built as a library and it was never intended to be used 
> outside of nova. Some parts of it cannot even work normally cause
> it was not designed to work with drivers like zeromq (matchmaker
> stuff).

oslo.messaging is NOT the code you can find in oslo-incubator rpc
module. It was hugely rewritten to expose a new, cleaner API. This is
btw one of the reasons migration to this new library is so painful. It
was painful to move to oslo.messaging, so we need clear need for a
change before switching to yet another library.

> 
> The reason I've raised this question on the mailing list was to get
> some agreement about future plans of oslo.messaging development and
> start working on it in coordination with community. For now I don't
> see any actions plan emerging from it. I would like to see us
> bringing more constructive ideas about what should be done.
> 
> If you think that first action should be profiling lets discuss how
> it should be implemented (cause it works for me just fine on my
> local PC). I guess we'll need to define some basic scenarios that
> would show us overall performance of the library.

Let's start from basic send/receive throughput, for tiny and large
messages, multiple consumers etc.
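As a sketch of what such a baseline scenario could look like, here is a
rough in-process throughput harness; the queue is only a stand-in for a
broker, and a real oslo.messaging benchmark would drive an actual driver
(rabbit, zmq) instead:

```python
# In-process send/receive throughput sketch; not a real broker benchmark.
import queue
import threading
import time

def run(n_messages, payload):
    q = queue.Queue()

    def consumer():
        # Drain exactly the number of messages the producer will send.
        for _ in range(n_messages):
            q.get()

    t = threading.Thread(target=consumer)
    start = time.perf_counter()
    t.start()
    for _ in range(n_messages):
        q.put(payload)
    t.join()
    elapsed = time.perf_counter() - start
    return n_messages / elapsed  # messages per second

tiny_rate = run(10000, b"x")            # tiny messages
large_rate = run(10000, b"x" * 65536)   # 64 KiB messages
print("tiny: %.0f msg/s, large: %.0f msg/s" % (tiny_rate, large_rate))
```

The same two scenarios (tiny vs. large payloads) run against the real
drivers would give the baseline numbers discussed above.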

> There are a lot of questions that should be answered to implement
> this: Where such tests would run (jenking, local PC, devstack VM)?

I would expect it to be exposed to jenkins thru 'tox'. We then can set
up a separate job to run them and compare with a base line [TBD: what
*is* baseline?] to make sure we don't introduce performance regressions.

> How such scenarios should look like? How do we measure performance
> (cProfile, etc.)?

I think we're interested in message rate, not CPU utilization.

> How do we collect results? How do we analyze results to find
> bottlenecks? etc.
> 
> Another option would be to spend some of my free time implementing 
> mentioned refactoring (as I see it) and show you the results of 
> performance testing compared with existing code.

This approach generally doesn't work beyond PoC. Openstack is a
complex project, and we need to stick to procedures - spec review,
then coding, all in upstream, with no private branches outside common
infrastructure.

> The only problem with such approach is that my code won't be 
> oslo.messaging and it won't be accepted by community. It may be
> drop in base for v2.0 but I'm afraid this won't be acceptable
> either.
> 

Change does not happen here that way. If you want your work to be
consumed by the community,

Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-07-01 Thread Yuriy Taraday
Hello

On Fri, Jun 20, 2014 at 12:48 PM, Radoslav Gerganov 
wrote:

> Hi,
>
> > On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton  wrote:
> > > Hi,
> > > I have encountered a problem with string substitution with the nova
> > > configuration file. The motivation was to move all of the glance
> settings
> > > to
> > > their own section (https://review.openstack.org/#/c/100567/). The
> > > glance_api_servers had default setting that uses the current
> glance_host
> > > and
> > > the glance port. This is a problem when we move to the ‘glance’
> section.
> > > First and foremost I think that we need to decide on how we should
> denote
> > > the string substitutions for group variables and then we can dive into
> > > implementation details. Does anyone have any thoughts on this?
> > > My thinking is that we should use a format of
> > > $<group>.<variable>. An
> > > example is below.
> > >
> >
> > Do we need to set the variable off somehow to allow substitutions that
> > need the literal '.' after a variable? How often is that likely to
> > come up?
>
> I would suggest to introduce a different form of placeholder for this like:
>
>   default=['${glance.host}:${glance.port}']
>
> similar to how variable substitutions are handled in Bash.  IMO, this is
> more readable and easier to parse.
>
> -Rado


I couldn't help but try implementing this:
https://review.openstack.org/103884

This change allows both ${glance.host} and ${.host} variants.
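For readers following along, a hypothetical configuration fragment using the
two forms might look like this (option names are only illustrative, and the
change itself is still under review):

```ini
[glance]
host = 192.0.2.10
port = 9292
# Full form: interpolate options from a named group
api_servers = ${glance.host}:${glance.port}
# Short form: ${.host} refers to an option in the current group
alt_servers = ${.host}:${.port}
```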

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for Speakers Open, OpenStack Summit in Paris

2014-07-01 Thread Tom Fifield

Hi Everyone,

*The Call for Speakers is OPEN for the November OpenStack Summit in 
Paris! Submit your talks here: 
https://www.openstack.org/summit/openstack-paris-summit-2014/call-for-speakers/.*


There are a few new speaking tracks in the Summit lineup this year so 
please review the below list before you submit a talk.


Don't wait! _The Call for Speakers will close on July 28 at 11:59pm CDT._

The Summit will take place in Paris at Le Palais des Congrès, November 
3-7. The main conference and expo will run Monday - Wednesday and the 
design summit will run Tuesday - Friday. Continue to visit 
openstack.org/summit  for information 
including: event format, registration, hotel room blocks, visa letters, etc.


If you have any Summit related questions please email 
eve...@openstack.org .


Cheers,
Claire


_Proposed Speaking Tracks for the OpenStack Summit in Paris:_

 * *Enterprise IT Strategies*

 * Enterprise IT leaders building their cloud business case are facing
   unique requirements to manage legacy applications, new software
   development and shadow IT within industry regulations and business
   constraints. In this track, we'll discuss how OpenStack is meeting
   enterprise IT technical requirements and cover topics relevant to
   planning your cloud strategy, including culture change, cost
   management, vendor strategy and recruiting.

 * *Telco Strategies*

 * Telecommunications companies are one of the largest areas of growth
   for OpenStack around the world. In this track, we'll feature content
   relevant to these users, addressing the evolution of the network and
   emerging NFV architecture, the global IaaS market and role of
   telcos, industry regulation and data sovereignty, and industry
   cooperation around interoperability and federation.

 * *How to Contribute*

 * The How to Contribute track is for new community members and
   companies interested in contributing to the open source code, with a
   focus on OpenStack community processes, tools, culture and best
   practices.

 * *Planning Your OpenStack Project*

 * If you are new to OpenStack or just getting started planning your
   cloud strategy, this track will cover the basics for you to evaluate
   the technology, understand the different ways to consume OpenStack,
   review popular use cases and determine your path forward.

 * *Products, Tools & Services*

 * OpenStack's vibrant ecosystem and the different ways to consume it
   are among its greatest strengths. In this track, you'll hear about
   the latest products, tools and services from the OpenStack ecosystem.

 * *User Stories*

 * Sharing knowledge is a core value for the OpenStack community. In
   the user stories track, you'll hear directly from enterprises,
   service providers and application developers who are using OpenStack
   to address their business problems. Learn best practices, challenges
   and recommendations directly from your industry peers.

 * *Community Building*

 * OpenStack is a large, diverse community with more than 75 user
   groups around the world. In the community building track, user group
   leaders will share their experiences growing and maturing their
   local groups, community leaders will discuss new tools and metrics,
   and we'll shine a spotlight on end user and contributing
   organizations who have experienced a significant internal culture
   change as participants of the OpenStack community.

 * *Related OSS Projects*

 * There is a rich ecosystem of open source projects that sit on top
   of, plug into or support the OpenStack cloud software. In this
   track, we'll demonstrate the capabilities and preview the roadmaps
   for open source projects relevant to OpenStack. This presentation
   track is separate from the open source project working sessions,
   which allow the contributors to those projects to gather and discuss
   features and requirements relevant to their integration with
   OpenStack. A separate application for those working sessions will be
   announced.

 * *Operations*

 * The Operations track is 100% focused on what it takes to run a
   production OpenStack cloud. Every presenter has put endless
   coffee-fueled hours into making services scale robustly, never go
   down, and automating, automating, automating. The track will cover
   efficient use of existing tools, managing upgrades and staying
   up-to-date with one of the world's fastest-moving code bases and
   "Architecture show and tell," where established clouds will lead a
   discussion around their architecture. If you're already running a
   cloud, you should also join us in the /Ops Summit/ for some serious
   working sessions (no basic intros here) on making the OpenStack
   software and ops tools for it better.

 * *Cloud Security*

 * The Security track will feature technical presentations, design and
   implementation discussions relevant to cloud security and OpenStack.


Re: [openstack-dev] [neutron]Performance of security group

2014-07-01 Thread Miguel Angel Ajo Pelayo


Ok, I was talking with Édouard @ IRC, and as I have time to work
on this problem, I could file a specific spec for the security
group RPC optimization, a master plan in two steps:

1) Refactor the current RPC communication for security_groups_for_devices,
   which could be used for full syncs, etc..

2) Benchmark && make use of a fanout queue per security group to make
   sure only the hosts with instances on a certain security group get
   the updates as they happen.

@shihanzhang do you find it reasonable?



- Original Message -
> - Original Message -
> > @Nachi: Yes that could a good improvement to factorize the RPC mechanism.
> > 
> > Another idea:
> > What about creating a RPC topic per security group (quid of the RPC topic
> > scalability) on which an agent subscribes if one of its ports is associated
> > to the security group?
> > 
> > Regards,
> > Édouard.
> > 
> > 
> 
> 
> Hmm, Interesting,
> 
> @Nachi, I'm not sure I fully understood:
> 
> 
> SG_LIST [ SG1, SG2]
> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
> port[SG_ID1, SG_ID2], port2 , port3
> 
> 
> Probably we may need to include also the
> SG_IP_LIST = [SG_IP1, SG_IP2] ...
> 
> 
> and let the agent do all the combination work.
> 
> Something like this could make sense?
> 
> Security_Groups = {SG1:{IPs:[],RULES:[]},
>SG2:{IPs:[],RULES:[]}
>   }
> 
> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
> 
> 
> @Edouard, actually I like the idea of having the agent subscribed
> to security groups they have ports on... That would remove the need to
> include
> all the security groups information on every call...
> 
> But would need another call to get the full information of a set of security
> groups
> at start/resync if we don't already have any.
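As a toy illustration of the payload shape sketched above (the field names
are hypothetical), the server would ship each security group's data once,
plus a port-to-group mapping, and the agent would do the combination
locally instead of receiving pre-expanded rules per port:

```python
# Hypothetical RPC payload: per-group data sent once, not expanded per port.
security_groups = {
    "SG1": {"IPs": ["10.0.0.5"], "RULES": [{"proto": "tcp", "port": 22}]},
    "SG2": {"IPs": ["10.0.0.6"], "RULES": [{"proto": "icmp"}]},
}
ports = {"Port1": ["SG1", "SG2"], "Port2": ["SG1"]}

def rules_for_port(port_id):
    """Agent-side combination: union of the rules of every SG on the port."""
    rules = []
    for sg_id in ports[port_id]:
        rules.extend(security_groups[sg_id]["RULES"])
    return rules

print(rules_for_port("Port1"))
```

The point of the shape is size: each group's rules and member IPs appear
once in the message, however many ports reference them.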
> 
> 
> > 
> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
> > wrote:
> > 
> > 
> > 
> > hi Miguel Ángel,
> > I am very agree with you about the following point:
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > --this can reduce the load of compute node.
> > >  * rpc communication mechanisms.
> > -- this can reduce the load of neutron server
> > can you help me to review my BP specs?
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com >
> > wrote:
> > >
> > >  Hi it's a very interesting topic, I was getting ready to raise
> > >the same concerns about our security groups implementation, shihanzhang
> > >thank you for starting this topic.
> > >
> > >  Not only at low level where (with our default security group
> > >rules -allow all incoming from 'default' sg- the iptable rules
> > >will grow in ~X^2 for a tenant, and, the
> > >"security_group_rules_for_devices"
> > >rpc call from ovs-agent to neutron-server grows to message sizes of
> > >>100MB,
> > >generating serious scalability issues or timeouts/retries that
> > >totally break neutron service.
> > >
> > >   (example trace of that RPC call with a few instances
> > > http://www.fpaste.org/104401/14008522/ )
> > >
> > >  I believe that we also need to review the RPC calling mechanism
> > >for the OVS agent here, there are several possible approaches to breaking
> > >down (or/and CIDR compressing) the information we return via this api
> > >call.
> > >
> > >
> > >   So we have to look at two things here:
> > >
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > >  * rpc communication mechanisms.
> > >
> > >   Best regards,
> > >Miguel Ángel.
> > >
> > >- Mensaje original -
> > >
> > >> Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
> > >> It also based on the rule set mechanism.
> > >> The issue in that proposition, it's only stable since the begin of the
> > >> year
> > >> and on Linux kernel 3.13.
> > >> But there lot of pros I don't list here (leverage iptables limitation,
> > >> efficient update rule, rule set, standardization of netfilter
> > >> commands...).
> > >
> > >> Édouard.
> > >
> > >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com >
> > >> wrote:
> > >
> > >> > we have done some tests, but have different result: the performance is
> > >> > nearly
> > >> > the same for empty and 5k rules in iptable, but huge gap between
> > >> > enable/disable iptable hook on linux bridge
> > >> 
> > >
> > >> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzh...@126.com
> > >> > >
> > >> > wrote:
> > >> 
> > >
> > >> > > Now I have not get accurate test data, but I can confirm the
> > >> > > following
> > >> > > points:
> > >> > 
> > >> 
> > >> > > 1. In compute node, the iptable's chain of a VM is liner, iptable
> > >> > > filter
> > >> > > it
> > >> > > one by one, if a VM in default security group and this default
> > >> > > security
> > >> > > group have many members, but ipset chain is set, the time ipset
> > >> > > filter
> > >> > > one
> > >> > > and many member is not much difference.
> > >> > 
>

[openstack-dev] [qa] issues adding functionality to javelin2

2014-07-01 Thread Chris Dent


I've been working to add ceilometer checks in javelin2. Doing so has
revealed some issues that appear to be a fairly big deal but I suppose
there's some chance I'm doing things completely wrong.

For reference my experiments are being done with a devstack with
ceilometer enabled, running javelin as:

python tempest/cmd/javelin.py -m check \
-r tempest/cmd/resources.yaml

replace "check" with "create" as required.

First thing I noticed: setting sys.excepthook in setup() in
tempest/openstack/common/log.py is causing exceptions to be swallowed
such that when making simple runs it is not obvious that things have
gone wrong. You can check $? and then look in tempest.log, but the
content of tempest.log is just the exception message, not its type nor
any traceback. If you wish to follow along at home comment out line 427
in tempest/openstack/common/log.py.
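To make the logging point concrete, here is a generic Python illustration
(not the actual tempest code; the failing function and message are made up):
logging only str(exc) drops the exception type and the traceback, which is
why failures look silent.

```python
import traceback

def load_images():
    # Stand-in for a javelin step that fails, e.g. an image file not found.
    raise OSError("cirros image not found")

try:
    load_images()
except OSError as exc:
    message_only = str(exc)               # what reaches the log today
    full_report = traceback.format_exc()  # type + traceback: what you need

print(message_only)
```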

Second thing: When run as above the path to image files is
insufficient for them to be loaded. I overcame this by hardcoding a
BASENAME (see my review in progress[1]). Note that because of the
swallowed exceptions you can run (in create or check) and not realize
that no image files were found. The code silently exits.

Third thing: Much of the above could still work if there were a
different resources.yaml or the PWD was set specifically for test runs.
However, this patchset[2] adds support for checking, creating and
attaching volumes. Assuming it is expected to use the volumes API
under tempest/services, some of the calls are being made with the wrong
number of arguments (client.volumes.{create_volume,attach_volume}). Again
these errors aren't obvious because the exceptions are swallowed.

I can provide fixes for all this stuff but I wanted to first confirm
that I'm not doing something incorrectly or missing something obvious.

Some questions:

* When javelin will be run officially as part of the tests, what is
  the PWD, such that we can create an accurate path to the image
  files?
* Is the exception swallowing intentional?
* When run in grenade will javelin have any knowledge of whether the
  current check run is happening before or after the upgrade stage?

Thanks for any help and input. I'm on IRC as cdent if you want to find me
there rather than respond here.

[1] https://review.openstack.org/#/c/102354/
[2] https://review.openstack.org/#/c/100105/

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-01 Thread CARVER, PAUL
Anant Patil wrote:
>I use tmux (an alternative to screen) a lot and I believe lot of other
>developers use it. I have been using devstack for some time now and would
>like to add the option of using tmux instead of screen for creating
>sessions for openstack services. I couldn't find a way to do that in the
>current implementation of devstack.

Is it just for familiarity or are there specific features lacking in screen
that you think would benefit devstack? I’ve tried tmux a couple of times but
didn’t find any compelling reason to switch from screen. I wouldn’t argue
against anyone who wants to use it for their day to day needs. But don’t
just change devstack on a whim; list out the objective benefits.

Having a configuration option to switch between devstack-screen and
devstack-tmux seems like it would probably add more complexity than benefit,
especially if there are any functional differences. If there are functional
differences it would be better to decide which one is best (for devstack,
not necessarily best for everyone in the world) and go with that one only.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Anne Gentle
On Tue, Jul 1, 2014 at 6:21 AM, Osanai, Hisashi <
osanai.hisa...@jp.fujitsu.com> wrote:

>
> Hi,
>
> In the following document, there is a setup up procedure for storage and
> it seems that swift recommends to use xfs.
>
>
> http://docs.openstack.org/icehouse/install-guide/install/yum/content/installing-and-configuring-storage-nodes.html
> ===
> 2. For each device on the node that you want to use for storage, set up the
> XFS volume (/dev/sdb is used as an example). Use a single partition per
> drive.
> For example, in a server with 12 disks you may use one or two disks for the
>  operating system which should not be touched in this step. The other 10
> or 11
> disks should be partitioned with a single partition, then formatted in XFS.
> ===
>
> I would like to know the reason why swift recommends xfs rather than ext4?
>

The install guide only recommends a single path, not many options, to
ensure success.

There's a little bit of discussion in the developer docs:
http://docs.openstack.org/developer/swift/deployment_guide.html#filesystem-considerations

I think that packstack gives the option of using xfs or ext4, so there must
be sufficient testing for ext4.

Anne
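For anyone who wants to try the formatting step from the install guide
without touching a real disk, XFS can be created on a file-backed image
(this only formats; mounting it would additionally require root and a loop
device):

```shell
set -e
# Skip gracefully where xfsprogs is not installed.
command -v mkfs.xfs >/dev/null 2>&1 || { echo "mkfs.xfs not available"; exit 0; }

img=$(mktemp)
truncate -s 512M "$img"   # sparse file, takes almost no real disk space
mkfs.xfs -q "$img"        # same command the guide runs against /dev/sdb1
file "$img"               # reports an XFS filesystem image
```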


>
> I think ext4 has reasonable performance and can support 1EiB from design
> point of view.
> # The max file system size of ext4 is not enough???
>
> Thanks in advance,
> Hisashi Osanai
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-01 Thread Sean Dague
On 07/01/2014 08:11 AM, Anant Patil wrote:
> Hi,
> 
> I use tmux (an alternative to screen) a lot and I believe lot of other
> developers use it. I have been using devstack for some time now and
> would like to add the option of using tmux instead of screen for
> creating sessions for openstack services. I couldn't find a way to do
> that in current implementation of devstack. 
> 
> I have submitted an initial blueprint here:
> https://blueprints.launchpad.net/devstack/+spec/enable-tmux-option-for-screen
> 
> Any comments are welcome! It will be helpful if someone can review it
> and provide comments.

Honestly, making this optional isn't really interesting. It's just code
complexity for very little benefit. Especially when you look into the
service stop functions.

If you could do a full replacement of *all* existing functionality used
in screen with tmux, that might be. Screen -X stuff has some interesting
failure modes on loaded environments, and I'd be in favor of switching
to something else that didn't. However in looking through tmux man page,
I'm not sure I see the equivalents to the logfile stanzas.

I'd like the blueprint to address all the screen calls throughout the
codebase before acking on whether or not we'd accept such a thing.
Because the screen use is more complex than you might realize.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-01 Thread Chmouel Boudjnah
On Tue, Jul 1, 2014 at 2:11 PM, Anant Patil  wrote:

> Hi,
>
> I use tmux (an alternative to screen) a lot and I believe lot of other
> developers use it. I have been using devstack for some time now and would
> like to add the option of using tmux instead of screen for creating
> sessions for openstack services. I couldn't find a way to do that in
> current implementation of devstack.
>
> I have submitted an initial blueprint here:
>
> https://blueprints.launchpad.net/devstack/+spec/enable-tmux-option-for-screen
>
> Any comments are welcome! It will be helpful if someone can review it and
> provide comments.
>

Some time ago we had tmux support in devstack which was 'kinda' working;
when I say kinda, I mean we had issues with windows/screens being
created too quickly and the support falling behind.

I can't remember the exact details but some of it is mentioned in this
email https://lists.launchpad.net/openstack/msg07405.html

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Time to break backwards compatibility for *cloud-password file location?

2014-07-01 Thread Giulio Fidente

On 06/25/2014 11:25 AM, mar...@redhat.com wrote:

On 25/06/14 10:52, James Polley wrote:

Until https://review.openstack.org/#/c/83250/, the setup-*-password scripts
used to drop password files into $CWD, which meant that if you ran the
script from a different location next time, your old passwords wouldn't be
found.

https://review.openstack.org/#/c/83250/ changed this so that the default
behaviour is to put the password files in $TRIPLEO_ROOT; but for backwards
compatibility we left the script checking to see if there's a file in the
current directory, and using that file in preference to $TRIPLEO_ROOT if it
exists.

However, this behaviour is still confusing to people. I'm not entirely
clear on why it's confusing (it makes perfect sense to me...) but I imagine
it's because we still have the problem that the code works fine if run from
one directory, but run from a different directory it can't find passwords.

There are two open patches which would break backwards compatibility and
only ever use the files in $TRIPLEO_ROOT:

https://review.openstack.org/#/c/93981/
https://review.openstack.org/#/c/97657/

The latter review is under more active development, and has suggestions
that the directory containing the password files should be parameterised,
defaulting to $TRIPLEO_ROOT. This would still break for anyone who relies
on the password files being in the directory they run the script from, but
at least there would be a fairly easy fix for them.



How about we:

* parameterize as suggested by Fabio in the review @
https://review.openstack.org/#/c/97657/

* move setting of this param to more visible location (setup, like
devtest_variables or testenv). We can then give this better visibility
in the dev/test autodocs with a warning about the 'old' behaviour

* add a deprecation warning to the code that reads from
$CWD/tripleo-overcloud-passwords to say that this will now need to be
set as a parameter in ... wherever. How long is a good period for this?


+1

actually, I was probably the first to suggest that we should 
parametrize the path to the password files, so I want to add my 
motivations here


the big win that I see here is that people may want to customize only 
some of the passwords, for example, the undercloud admin password


the script creating the password files is *already* capable of adding 
only new passwords to the file, without regenerating passwords which 
could have been manually set in there already


this basically implements the 'feature' I mentioned, except people just 
don't know it!


so I'd like us to expose this as a feature, from the early stages as 
Marios suggests too, maybe from devtest_variables
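A sketch of the parameterization being discussed (the variable names here
are illustrative, not the actual tripleo-incubator ones):

```shell
# Assumption: TRIPLEO_ROOT points at the tripleo checkout; default the new
# knob to it so existing setups keep working.
TRIPLEO_ROOT=${TRIPLEO_ROOT:-$HOME/tripleo}
TRIPLEO_PASSWORD_DIR=${TRIPLEO_PASSWORD_DIR:-$TRIPLEO_ROOT}
PASSWORD_FILE=$TRIPLEO_PASSWORD_DIR/tripleo-overcloud-passwords

# Deprecation shim: keep honouring a password file in $PWD for now, loudly.
if [ -e "$PWD/tripleo-overcloud-passwords" ] && \
   [ "$PWD" != "$TRIPLEO_PASSWORD_DIR" ]; then
    echo "WARNING: reading passwords from \$PWD is deprecated;" \
         "set TRIPLEO_PASSWORD_DIR instead" >&2
    PASSWORD_FILE=$PWD/tripleo-overcloud-passwords
fi
echo "$PASSWORD_FILE"
```

With a default of $TRIPLEO_ROOT, existing users see only a warning, while
people who want per-password customization can point the knob at their own
directory.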

--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Using tmux instead of screen in devstack

2014-07-01 Thread Anant Patil
Hi,

I use tmux (an alternative to screen) a lot and I believe lot of other
developers use it. I have been using devstack for some time now and would
like to add the option of using tmux instead of screen for creating
sessions for openstack services. I couldn't find a way to do that in
current implementation of devstack.

I have submitted an initial blueprint here:
https://blueprints.launchpad.net/devstack/+spec/enable-tmux-option-for-screen

Any comments are welcome! It will be helpful if someone can review it and
provide comments.

- Anant
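Devstack does not currently offer such a switch; purely as a hypothetical
sketch (USE_TMUX, SESSION and mux_cmd are invented names, not devstack
code), the dispatch between the two multiplexers could look like this.
The function only prints the command it would run, so the sketch works
without either tool installed:

```shell
#!/bin/sh
# Hypothetical sketch of how a devstack-like script could dispatch
# between screen and tmux; none of these names exist in devstack today.
SESSION=stack
USE_TMUX=${USE_TMUX:-True}

# Print the command that would launch service window $1 running $2.
mux_cmd() {
    if [ "$USE_TMUX" = "True" ]; then
        echo "tmux new-window -t $SESSION -n $1 '$2'"
    else
        echo "screen -S $SESSION -X screen -t $1 sh -c '$2'"
    fi
}

USE_TMUX=True
mux_cmd n-api nova-api      # tmux variant
USE_TMUX=False
mux_cmd n-api nova-api      # screen variant
```

The point is that only the session/window primitives differ; the rest of
the service-launching logic can stay shared.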
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR demo and how-to

2014-07-01 Thread Miguel Angel Ajo Pelayo
Thank you for the video, keep up the good work!,


- Original Message -
> Hi folks,
> 
> The DVR team is working really hard to complete this important task for Juno
> and Neutron.
> 
> In order to help see this feature in action, a video has been made available
> and link can be found in [2].
> 
> There is still some work to do, however I wanted to remind you that all of
> the relevant information is available on the wiki [1, 2] and Gerrit [3].
> 
> [1] - https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
> [2] - https://wiki.openstack.org/wiki/Neutron/DVR/HowTo
> [3] - https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z
> 
> More to follow!
> 
> Cheers,
> Armando
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Osanai, Hisashi

Hi,

In the following document, there is a setup procedure for storage, and 
it seems that Swift recommends using XFS.

http://docs.openstack.org/icehouse/install-guide/install/yum/content/installing-and-configuring-storage-nodes.html
===
2. For each device on the node that you want to use for storage, set up the 
XFS volume (/dev/sdb is used as an example). Use a single partition per drive. 
For example, in a server with 12 disks you may use one or two disks for the
 operating system which should not be touched in this step. The other 10 or 11 
disks should be partitioned with a single partition, then formatted in XFS.
===
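The quoted procedure amounts to giving each data disk a single partition,
formatting it XFS, and mounting it under the node's storage root. As a
hedged illustration (device names are examples, and the mount options are
the ones commonly suggested in Swift install guides, not taken from this
thread), a dry-run sketch:

```shell
#!/bin/sh
# Dry-run sketch of the quoted setup: one partition per data disk,
# formatted XFS, mounted under /srv/node.  'run' just echoes the
# commands so nothing is actually partitioned here; replace it with
# e.g. 'sudo' for real use.
run() { echo "$@"; }

for dev in sdb sdc sdd; do
    run parted -s "/dev/$dev" mklabel gpt mkpart primary 0% 100%
    run mkfs.xfs -f "/dev/${dev}1"
    run mkdir -p "/srv/node/${dev}1"
    # noatime/nodiratime and bigger log buffers are mount options often
    # suggested for Swift object storage workloads.
    run mount -o noatime,nodiratime,logbufs=8 "/dev/${dev}1" "/srv/node/${dev}1"
done
```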

I would like to know why Swift recommends XFS rather than ext4.

I think ext4 has reasonable performance and can support 1 EiB from a design 
point of view.
# The max file system size of ext4 is not enough??? 

Thanks in advance,
Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] New official tag: "to-be-covered-by-tests"

2014-07-01 Thread Vladimir Kuklin
Fuelers,

I created a new official tag in Fuel called "to-be-covered-by-tests". If you
see a bug (or bugs similar to it) that could be caught by tests, please
add this tag, so we can collect this information and write corresponding
tests in the future.

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Blueprints process

2014-07-01 Thread Vladimir Kuklin
I have some objections. We are trying to follow a strict development
workflow with a feature freeze stage. In this case we will have to miss small
enhancements that emerge after the FF date and could bring essential benefits
along with small risks of breaking anything (e.g. changing some config
options for galera or other stuff). We have maintained such small changes as
bugs because of this FF rule. As our project grows, these last-minute
calls for small changes are going to become more and more probable. My
suggestion is that we somehow modify our workflow to allow these small
features through the FF stage, or we risk having an endless queue
of enhancements that users will never see in a release.


On Thu, Jun 26, 2014 at 8:07 PM, Matthew Mosesohn 
wrote:

> +1
>
> Keeping features separate as blueprints (even tiny ones with no spec)
> really will let us focus on the volume of real bugs.
>
> On Tue, Jun 24, 2014 at 5:14 PM, Dmitry Pyzhov 
> wrote:
> > Guys,
> >
> > We have a beautiful contribution guide:
> > https://wiki.openstack.org/wiki/Fuel/How_to_contribute
> >
> > However, I would like to address several issues in our blueprints/bugs
> > processes. Let's discuss and vote on my proposals.
> >
> > 1) First of all, the bug counter is an excellent metric for quality. So
> > let's use it only for bugs and track all feature requirements as
> blueprints.
> > Here is what it means:
> >
> > 1a) If a bug report does not describe a user’s pain, a blueprint should
> be
> > created and the bug should be closed as invalid
> > 1b) If a bug report does relate to a user’s pain, a blueprint should be
> > created and linked to the bug
> > 1c) We have an excellent reporting tool, but it needs more metrics:
> count of
> > critical/high bugs, count of bugs assigned to each team. It will require
> > support of team members lists, but it seems that we really need it.
> >
> >
> > 2) We have a huge amount of blueprints and it is hard to work with this
> > list. A good blueprint needs a fixed scope, spec review and acceptance
> > criteria. It is obvious for me that we can not work on blueprints that do
> > not meet these requirements. Therefore:
> >
> > 2a) Let's copy the nova future series and create a fake milestone 'next'
> as
> > nova does. All unclear blueprints should be moved there. We will pick
> > blueprints from there, add spec and other info and target them to a
> > milestone when we are really ready to work on a particular blueprint. Our
> > release page will look much more close to reality and much more readable
> in
> > this case.
> > 2b) Each blueprint in a milestone should contain information about
> feature
> > lead, design reviewers, developers, qa, acceptance criteria. Spec is
> > optional for trivial blueprints. If a spec is created, the designated
> > reviewer(s) should put (+1) right into the blueprint description.
> > 2c) Every blueprint spec should be updated before feature freeze with the
> > latest actual information. Actually, I'm not sure if we care about spec
> > after feature development, but it seems to be logical to have correct
> > information in specs.
> > 2d) We should avoid creating interconnected blueprints wherever
> possible. Of
> > course we can have several blueprints for one big feature if it can be
> split
> > into several shippable blocks for several releases or for several teams.
> In
> > most cases, small parts should be tracked as work items of a single
> > blueprint.
> >
> >
> > 3) Every review request without a bug or blueprint link should be checked
> > carefully.
> >
> > 3a) It should contain a complete description of what is being done and
> why
> > 3b) It should not require backports to stable branches (backports are
> > bugfixes only)
> > 3c) It should not require changes to documentation or be mentioned in
> > release notes
> >
> >
> >



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Juno priorities and spec review timeline

2014-07-01 Thread Shivanand Tendulker
Hello Devananda

Design spec for the remote firmware setting feature is under review (
https://review.openstack.org/#/c/101122 ). We have received comments on the
APIs and are converging on the set of required APIs.

We have posted a new patch addressing those comments.

Please check if we can re-prioritize this for the Juno release.

Thanks and Regards
Shiv


On Tue, Jul 1, 2014 at 4:15 PM, Shivanand Tendulker 
wrote:

> Hello Devananda
>
> Design spec for the remote firmware setting feature is under review (
> https://review.openstack.org/#/c/101122 ). Have received comments on the
> APIs and we are converging on the set of required APIs.
>
> Have posted the new patch addressing the comments on the same.
>
> Please check, if we can re-prioritize this for Juno release.
>
> Thanks and Regards
> Shiv
>
>
>
> On Tue, Jul 1, 2014 at 4:05 PM, Ramakrishnan G <
> rameshg87.openst...@gmail.com> wrote:
>
>>
>>
>> -- Forwarded message --
>> From: Devananda van der Veen 
>> Date: Tue, Jul 1, 2014 at 3:42 AM
>> Subject: [openstack-dev] [Ironic] Juno priorities and spec review timeline
>> To: OpenStack Development Mailing List > >
>>
>>
>> Hi all!
>>
>> We're roughly at the midway point between summit and release, and I
>> feel that's a good time to take a look at our progress compared to the
>> goals we set out at the design summit. To that end, I re-opened my
>> summit notes about what features we had prioritized in Atlanta, and
>> engaged many of the core reviewers in a discussion last Friday to
>> estimate what we'll have time to review and land in the remainder of
>> this cycle. Based on that, I've created this spreadsheet to represent
>> those expectations and our current progress towards what we think we
>> can achieve this cycle:
>>
>>
>> https://docs.google.com/spreadsheets/d/1Hxyfy60hN_Fit0b-plsPzK6yW3ePQC5IfwuzJwltlbo
>>
>> Aside from several cleanup- and test-related tasks, these goals
>> correlate to spec reviews that have already been proposed. I've
>> crossed off ones which we discussed at the summit, but for which no
>> proposal has yet been submitted. The spec-review team and I will be
>> referring to this to help us prioritize specs reviews. While I am not
>> yet formally blocking proposals which do not fit within this list of
>> priorities, the review team is working with a large back-log and
>> probably won't have time to review anything else this cycle. If you're
>> concerned that you won't be able to land your favorite feature in
>> Juno, the best thing you can do is to participate in reviewing other
>> people's code, join the core team, and help us accelerate the
>> development process of "K".
>>
>> Borrowing a little from Nova's timeline, I have proposed the following
>> timeline for Ironic. Note that dates listed are Thursdays, and numbers
>> in parentheses are weeks until feature freeze.
>>
>> You may also note that I'll be offline for two weeks immediately prior
>> to the Juno-3 milestone, which is another reason why I'd like the core
>> review team to have a solid plan (read: approved specs) in place by
>> Aug 14.
>>
>>
>>
>> July 3 (-9): spec review day on Wednesday (July 2)
>>  focus on landing specs for our priorities:
>>
>> https://docs.google.com/spreadsheets/d/1Hxyfy60hN_Fit0b-plsPzK6yW3ePQC5IfwuzJwltlbo
>>
>> Jul 24 (-6): Juno-2 milestone tagged
>>  new spec proposal freeze
>>
>> Jul 31 (-5): midcycle meetup (July 27-30)
>>  https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
>>
>> Aug 14 (-3): last spec review day on Wednesday (Aug 13)
>>
>> Aug 21 (-2): PTL offline all week
>>
>> Aug 28 (-1): PTL offline all week
>>
>> Sep  4 ( 0): Juno-3 milestone tagged
>>  Feature freeze
>>  K opens for spec proposals
>>  Unmerged J spec proposals must rebase on K
>>  Merged J specs with no code proposed are deleted and may
>> be re-proposed for K
>>  Merged J specs with code proposed need to be reviewed for
>> feature-freeze-exception
>>
>> Sep 25 (+3): RC 1 build expected
>>  K spec reviews start
>>
>> Oct 16 (+6): Release!
>>
>> Oct 30 (+8): K summit spec proposal freeze
>>  K summit sessions should have corresponding spec proposal
>>
>> Nov  6 (+9): K design summit
>>
>>
>> Thanks!
>> Devananda
>>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-01 Thread Duncan Thomas
On 30 June 2014 16:49, Anita Kuno  wrote:

> Right now that dashboard introduces more confusion than it alleviates
> since the definition of "success" in regards to third party ci systems
> has yet to be defined by the community.

For the record, cinder gave a very clear definition of success in our
3rd party guidelines: Passes every test in tempest-dsvm-full. If that
needs documenting somewhere else, please let me know. It may of course
change as we learn more about how 3rd party CI works out, so the fewer
places it is duplicated the better, maybe?

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Bug squashing

2014-07-01 Thread Dmitry Pyzhov
We are really close to the 5.0.1 release and the 5.1 feature freeze, so I am
skipping the bug squash day this week.

And I suggest an additional action for the next squash: let's review all
existing bugs and link them to blueprints if the fix requires new
functionality. And close every bug report that is not related to any issue
and appears to be a pure feature request.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Triaging bugs: milestones vs release series

2014-07-01 Thread Dmitry Pyzhov
+1


On Tue, Jul 1, 2014 at 2:33 AM, Dmitry Borodaenko 
wrote:

> When you create a bug against a project (in our case, fuel) in
> Launchpad, it is always initially targeted at the default release
> series (currently, 5.1.x). On the bug summary, that isn't explicitly
> stated and shows as being targeted to the project in general (Fuel for
> OpenStack). As you add more release series to a bug, these will be
> listed under release series name (e.g. 5.0.x).
>
> Unfortunately, Launchpad doesn't limit the list of milestones you can
> target to the targeted release series, so it will happily allow you to
> target a bug at 4.1.x release series and set milestone in that series
> to 5.1.
>
> A less obvious inconsistency is when a bug is found in a stable
> release series like 5.0.x: it seems natural to target it to milestone
> like 5.0.1 and be done with it. The problem with that approach is that
> there's no way to reflect whether this bug is relevant for current
> release series (5.1.x) and if it is, to track status of the fix
> separately in current and stable release series.
>
> Therefore, when triaging new bugs in stable versions of Fuel or
> Mirantis OpenStack, please set the milestone to the next release in
> the current release focus (5.1.x), and target to the series it was
> found in separately. If there are more recent stable release series,
> target those as well.
>
> Example: a bug is found in 4.1.1. Set primary milestone to 5.1 (as
> long as current release focus is 5.1.x and 5.1 is the next milestone
> in that series), target 2 more release series: 4.1.x and 5.0.x, set
> milestones for those to 4.1.2 and 5.0.1 respectively.
>
> If there is reason to believe that the bug does not apply to some of
> the targeted release series, explain that in a comment and mark the
> bug Invalid for that release series. If the bug is present in a series
> but cannot be addressed there (e.g. priority is not high enough to do
> a backport), mark it Won't Fix for that series.
>
> If there are no objections to this approach, I'll put it in Fuel wiki.
>
> Thanks,
> -DmitryB
>
> --
> Dmitry Borodaenko
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][keystone] Renewable tokens (was Proposal: FairShareScheduler.)

2014-07-01 Thread Sylvain Bauza
Le 01/07/2014 11:09, Tim Bell a écrit :
>> -Original Message-
>> From: Lisa [mailto:lisa.zangra...@pd.infn.it]
>> Sent: 01 July 2014 10:45
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.
>>
>> Hi Tim,
>>
>> for sure this is one of the main issues we are facing, and the approach you
>> suggested is the same one we are investigating.
>> Could you provide some details about the Heat proxy renew mechanism?
>> Thank you very much for your feedback.
>> Cheers,
>> Lisa
>>
>>
> I was thinking about how the Keystone Trusts mechanism could be used ... 
> https://wiki.openstack.org/wiki/Keystone/Trusts. Heat was looking to use 
> something like this 
> (https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers#1._Use_credentials_associated_with_a_trust)
>
> Maybe one of the Keystone experts could advise how tokens could be renewed in 
> such a scenario.
>
> Tim


Tim, you can review
https://github.com/stackforge/blazar/blob/master/climate/utils/trusts.py
if you want to see an implementation proposal for trusts.
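For reference, a trust is created by POSTing to Keystone's v3 OS-TRUST API
(/v3/OS-TRUST/trusts). A minimal sketch of building that request body
follows; all IDs below are placeholder values, and the helper name is
invented for illustration:

```python
import json

def build_trust_body(trustor_id, trustee_id, project_id, roles,
                     impersonation=True, expires_at=None):
    """Build the Keystone v3 OS-TRUST creation payload.

    Delegates `roles` on `project_id` from trustor to trustee.  With
    expires_at=None the trust itself does not expire, which is what lets
    the trustee (e.g. Heat, or a deferred scheduler) keep acquiring
    trust-scoped tokens long after the user's original token (typically
    valid for one day) has expired.
    """
    return {"trust": {
        "trustor_user_id": trustor_id,
        "trustee_user_id": trustee_id,
        "project_id": project_id,
        "roles": [{"name": r} for r in roles],
        "impersonation": impersonation,
        "expires_at": expires_at,
    }}

# Placeholder IDs, for illustration only.
body = build_trust_body("user-1234", "scheduler-svc-5678",
                        "841615a3-ece9-4622-9fa0-fdc178ed34f8", ["member"])
print(json.dumps(body, indent=2))
```

The trustee then requests a token scoped to the trust ID whenever it needs
to act on the user's behalf, instead of holding the user's own token.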

-Sylvain


>
> On 01/07/2014 08:46, Tim Bell wrote:
>> Eric,
>>
>> Thanks for sharing your work, it looks like an interesting development.
>>
>> I was wondering how the Keystone token expiry is handled since the tokens 
>> generally have a 1 day validity. If the request is scheduling for more than 
>> one day, it would no longer have a valid token. We have similar scenarios 
>> with Kerberos/AFS credentials in the CERN batch system. There are some 
>> interesting proxy renew approaches used by Heat to get tokens at a later 
>> date which may be useful for this problem.
>>
>> $ nova credentials
>> +-----------+-----------------------------------------------------------------+
>> | Token     | Value                                                           |
>> +-----------+-----------------------------------------------------------------+
>> | expires   | 2014-07-02T06:39:59Z                                            |
>> | id        | 1a819279121f4235a8d85c694dea5e9e                                |
>> | issued_at | 2014-07-01T06:39:59.385417                                      |
>> | tenant    | {"id": "841615a3-ece9-4622-9fa0-fdc178ed34f8", "enabled": true, |
>> |           | "description": "Personal Project for user timbell", "name":     |
>> |           | "Personal timbell"}                                             |
>> +-----------+-----------------------------------------------------------------+
>>
>> Tim
>>> -Original Message-
>>> From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it]
>>> Sent: 30 June 2014 16:05
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.
>>>
>>> Hi All,
>>>
>>> we have analyzed the nova-scheduler component (FilterScheduler) in our
>>> Openstack installation used by some scientific teams.
>>>
>>> In our scenario, the cloud resources need to be distributed among the teams 
>>> by
>>> considering the predefined share (e.g. quota) assigned to each team, the 
>>> portion
>>> of the resources currently used and the resources they have already 
>>> consumed.
>>>
>>> We have observed that:
>>> 1) User requests are sequentially processed (FIFO scheduling), i.e.
>>> FilterScheduler doesn't provide any dynamic priority algorithm;
>>> 2) User requests that cannot be satisfied (e.g. if resources are not
>>> available) fail and will be lost, i.e. on that scenario nova-scheduler 
>>> doesn't
>>> provide any queuing of the requests;
>>> 3) OpenStack simply provides a static partitioning of resources among 
>>> various
>>> projects / teams (use of quotas). If project/team 1 in a period is 
>>> systematically
>>> underutilizing its quota and the project/team 2 instead is systematically
>>> saturating its quota, the only solution to give more resource to 
>>> project/team 2 is
>>> a manual change (to be done by the admin) to the related quotas.
>>>
>>> The need to find a better approach to enable a more effective scheduling in
>>> Openstack becomes more and more evident when the number of the user
>>> requests to be handled increases significantly. This is a well known problem
>>> which has already been solved in the past for the Batch Systems.
>>>
>>> In order to solve those issues in our usage scenario of Openstack, we have
>>> developed a prototype of a pluggable scheduler, named FairShareScheduler,
>>> with the objective to extend the existing OpenStack scheduler 
>>> (FilterScheduler)
>>> by integrating a (batch like) dynamic priority algorithm.
>>>
>>> The architecture of the FairShareScheduler is explicitly designed to 
>>> provide a
>>> high scalability level. To all user requests will be assigned a priority 
>>> value
>>> calculated by considering the share allocated to the user by the 
>>> administrator
>>> and the evaluation of the effective

Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-01 Thread Sylvain Bauza
Le 01/07/2014 10:45, Lisa a écrit :
> Hi Tim,
>
> for sure this is one of the main issues we are facing, and the approach
> you suggested is the same one we are investigating.
> Could you provide some details about the Heat proxy renew mechanism?
> Thank you very much for your feedback.
> Cheers,
> Lisa
>


Keystone already provides support for that problem, that's called trusts.
https://wiki.openstack.org/wiki/Keystone/Trusts

I would be also interested in reviewing your implementation, because
Blazar [1] is also providing some reservation mechanism based on trusts.

One long-term objective of splitting the Scheduler into a separate
project (called Gantt) would be to interoperate Blazar and Gantt for
scheduling non-immediate requests, so your project is going into the
right direction.

As Don said, feel free to join us today at 3pm UTC to discuss about your
proposal.


[1] https://wiki.openstack.org/wiki/Blazar


>
> On 01/07/2014 08:46, Tim Bell wrote:
>> Eric,
>>
>> Thanks for sharing your work, it looks like an interesting development.
>>
>> I was wondering how the Keystone token expiry is handled since the
>> tokens generally have a 1 day validity. If the request is scheduling
>> for more than one day, it would no longer have a valid token. We have
>> similar scenarios with Kerberos/AFS credentials in the CERN batch
>> system. There are some interesting proxy renew approaches used by
>> Heat to get tokens at a later date which may be useful for this problem.
>>
>> $ nova credentials
>> +-----------+-----------------------------------------------------------------+
>> | Token     | Value                                                           |
>> +-----------+-----------------------------------------------------------------+
>> | expires   | 2014-07-02T06:39:59Z                                            |
>> | id        | 1a819279121f4235a8d85c694dea5e9e                                |
>> | issued_at | 2014-07-01T06:39:59.385417                                      |
>> | tenant    | {"id": "841615a3-ece9-4622-9fa0-fdc178ed34f8", "enabled": true, |
>> |           | "description": "Personal Project for user timbell", "name":     |
>> |           | "Personal timbell"}                                             |
>> +-----------+-----------------------------------------------------------------+
>>
>>
>> Tim
>>> -Original Message-
>>> From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it]
>>> Sent: 30 June 2014 16:05
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: [openstack-dev] [nova][scheduler] Proposal:
>>> FairShareScheduler.
>>>
>>> Hi All,
>>>
>>> we have analyzed the nova-scheduler component (FilterScheduler) in our
>>> Openstack installation used by some scientific teams.
>>>
>>> In our scenario, the cloud resources need to be distributed among
>>> the teams by
>>> considering the predefined share (e.g. quota) assigned to each team,
>>> the portion
>>> of the resources currently used and the resources they have already
>>> consumed.
>>>
>>> We have observed that:
>>> 1) User requests are sequentially processed (FIFO scheduling), i.e.
>>> FilterScheduler doesn't provide any dynamic priority algorithm;
>>> 2) User requests that cannot be satisfied (e.g. if resources are not
>>> available) fail and will be lost, i.e. on that scenario
>>> nova-scheduler doesn't
>>> provide any queuing of the requests;
>>> 3) OpenStack simply provides a static partitioning of resources
>>> among various
>>> projects / teams (use of quotas). If project/team 1 in a period is
>>> systematically
>>> underutilizing its quota and the project/team 2 instead is
>>> systematically
>>> saturating its quota, the only solution to give more resource to
>>> project/team 2 is
>>> a manual change (to be done by the admin) to the related quotas.
>>>
>>> The need to find a better approach to enable a more effective
>>> scheduling in
>>> Openstack becomes more and more evident when the number of the user
>>> requests to be handled increases significantly. This is a well known
>>> problem
>>> which has already been solved in the past for the Batch Systems.
>>>
>>> In order to solve those issues in our usage scenario of Openstack,
>>> we have
>>> developed a prototype of a pluggable scheduler, named
>>> FairShareScheduler,
>>> with the objective to extend the existing OpenStack scheduler
>>> (FilterScheduler)
>>> by integrating a (batch like) dynamic priority algorithm.
>>>
>>> The architecture of the FairShareScheduler is explicitly designed to
>>> provide a
>>> high scalability level. To all user requests will be assigned a
>>> priority value
>>> calculated by considering the share allocated to the user by the
>>> administrator
>>> and the evaluation of the effective resource usage consumed in the
>>> recent past.
>>> All requests will be inserted in a priority queue, and processed in
>>> parallel by a
>>> configurable pool of workers without interfering with the p

Re: [openstack-dev] [nova][scheduler][keystone] Renewable tokens (was Proposal: FairShareScheduler.)

2014-07-01 Thread Tim Bell
> -Original Message-
> From: Lisa [mailto:lisa.zangra...@pd.infn.it]
> Sent: 01 July 2014 10:45
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.
> 
> Hi Tim,
> 
> for sure this is one of the main issues we are facing, and the approach you
> suggested is the same one we are investigating.
> Could you provide some details about the Heat proxy renew mechanism?
> Thank you very much for your feedback.
> Cheers,
> Lisa
> 
> 

I was thinking about how the Keystone Trusts mechanism could be used ... 
https://wiki.openstack.org/wiki/Keystone/Trusts. Heat was looking to use 
something like this 
(https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers#1._Use_credentials_associated_with_a_trust)

Maybe one of the Keystone experts could advise how tokens could be renewed in 
such a scenario.

Tim

On 01/07/2014 08:46, Tim Bell wrote:
> Eric,
>
> Thanks for sharing your work, it looks like an interesting development.
>
> I was wondering how the Keystone token expiry is handled since the tokens 
> generally have a 1 day validity. If the request is scheduling for more than 
> one day, it would no longer have a valid token. We have similar scenarios 
> with Kerberos/AFS credentials in the CERN batch system. There are some 
> interesting proxy renew approaches used by Heat to get tokens at a later date 
> which may be useful for this problem.
>
> $ nova credentials
> +-----------+-----------------------------------------------------------------+
> | Token     | Value                                                           |
> +-----------+-----------------------------------------------------------------+
> | expires   | 2014-07-02T06:39:59Z                                            |
> | id        | 1a819279121f4235a8d85c694dea5e9e                                |
> | issued_at | 2014-07-01T06:39:59.385417                                      |
> | tenant    | {"id": "841615a3-ece9-4622-9fa0-fdc178ed34f8", "enabled": true, |
> |           | "description": "Personal Project for user timbell", "name":     |
> |           | "Personal timbell"}                                             |
> +-----------+-----------------------------------------------------------------+
>
> Tim
>> -Original Message-
>> From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it]
>> Sent: 30 June 2014 16:05
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.
>>
>> Hi All,
>>
>> we have analyzed the nova-scheduler component (FilterScheduler) in our
>> Openstack installation used by some scientific teams.
>>
>> In our scenario, the cloud resources need to be distributed among the teams 
>> by
>> considering the predefined share (e.g. quota) assigned to each team, the 
>> portion
>> of the resources currently used and the resources they have already consumed.
>>
>> We have observed that:
>> 1) User requests are sequentially processed (FIFO scheduling), i.e.
>> FilterScheduler doesn't provide any dynamic priority algorithm;
>> 2) User requests that cannot be satisfied (e.g. if resources are not
>> available) fail and will be lost, i.e. on that scenario nova-scheduler 
>> doesn't
>> provide any queuing of the requests;
>> 3) OpenStack simply provides a static partitioning of resources among various
>> projects / teams (use of quotas). If project/team 1 in a period is 
>> systematically
>> underutilizing its quota and the project/team 2 instead is systematically
>> saturating its quota, the only solution to give more resource to 
>> project/team 2 is
>> a manual change (to be done by the admin) to the related quotas.
>>
>> The need to find a better approach to enable a more effective scheduling in
>> Openstack becomes more and more evident when the number of the user
>> requests to be handled increases significantly. This is a well known problem
>> which has already been solved in the past for the Batch Systems.
>>
>> In order to solve those issues in our usage scenario of Openstack, we have
>> developed a prototype of a pluggable scheduler, named FairShareScheduler,
>> with the objective to extend the existing OpenStack scheduler 
>> (FilterScheduler)
>> by integrating a (batch like) dynamic priority algorithm.
>>
>> The architecture of the FairShareScheduler is explicitly designed to provide 
>> a
>> high scalability level. To all user requests will be assigned a priority 
>> value
>> calculated by considering the share allocated to the user by the 
>> administrator
>> and the evaluation of the effective resource usage consumed in the recent 
>> past.
>> All requests will be inserted in a priority queue, and processed in parallel 
>> by a
>> configurable pool of workers without interfering with the priority order.
>> Moreover all significant information (e.g. priority queue) will be stored in 
>> a
>> persistence layer in o

Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-01 Thread Eric Frizziero

Hi Don,

My colleague Lisa and I will attend both IRC meetings (today and next week).

In the meantime you can find more info about the FairShareScheduler at 
the following link:
https://agenda.infn.it/getFile.py/access?contribId=17&sessionId=3&resId=0&materialId=slides&confId=7915 



Thank you very much!
Cheers,
 Eric.


On 07/01/2014 01:48 AM, Dugger, Donald D wrote:

Eric-

We have a weekly scheduler sub-group (code name gantt) IRC meeting at 1500 UTC 
on Tuesdays.  This would be an excellent topic to bring up at one of those 
meetings as a lot of people with interest in the scheduler will be there.  It's 
a little short notice for tomorrow but do you think you could attend next week, 
7/8, to talk about this?

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it]
Sent: Monday, June 30, 2014 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

Hi All,

we have analyzed the nova-scheduler component (FilterScheduler) in our 
Openstack installation used by some scientific teams.

In our scenario, the cloud resources need to be distributed among the teams by 
considering the predefined share (e.g. quota) assigned to each team, the 
portion of the resources currently used and the resources they have already 
consumed.

We have observed that:
1) User requests are processed sequentially (FIFO scheduling), i.e.
the FilterScheduler doesn't provide any dynamic priority algorithm;
2) User requests that cannot be satisfied (e.g. because resources are not
available) fail and are lost, i.e. in that scenario nova-scheduler doesn't 
provide any queuing of requests;
3) OpenStack simply provides a static partitioning of resources among the various 
projects/teams (via quotas). If project/team 1 systematically underutilizes 
its quota over a period while project/team 2 systematically saturates its own, 
the only way to give more resources to project/team 2 is a manual change (to 
be done by the admin) to the related quotas.

The need for a more effective scheduling approach in OpenStack becomes more 
and more evident as the number of user requests to be handled increases 
significantly. This is a well-known problem which has already been solved in 
the past for batch systems.

In order to solve those issues in our OpenStack usage scenario, we have 
developed a prototype of a pluggable scheduler, named FairShareScheduler, with 
the objective of extending the existing OpenStack scheduler (FilterScheduler) 
by integrating a (batch-like) dynamic priority algorithm.

The architecture of the FairShareScheduler is explicitly designed to provide a 
high level of scalability. Every user request is assigned a priority value 
calculated from the share allocated to the user by the administrator and from 
an evaluation of the effective resource usage consumed in the recent past. All 
requests are inserted in a priority queue and processed in parallel by a 
configurable pool of workers without interfering with the priority order. 
Moreover, all significant information (e.g. the priority queue) is stored in a 
persistence layer in order to provide a fault-tolerance mechanism, while a 
proper logging system records all relevant events, useful for auditing 
purposes.
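The queue-plus-worker-pool design described above can be sketched as follows. This is not the FairShareScheduler code, just a minimal illustration of how a thread-safe priority queue can be drained by a configurable pool of workers while preserving priority order; the class and function names are hypothetical:

```python
import heapq
import threading

class PriorityRequestQueue:
    """Thread-safe priority queue: higher-priority requests dequeue first;
    a monotonically increasing sequence number breaks ties in FIFO order."""

    def __init__(self):
        self._heap = []   # entries: (-priority, seq, request)
        self._seq = 0
        self._cond = threading.Condition()

    def put(self, request, priority):
        with self._cond:
            # Negate priority because heapq is a min-heap.
            heapq.heappush(self._heap, (-priority, self._seq, request))
            self._seq += 1
            self._cond.notify()

    def get(self):
        with self._cond:
            while not self._heap:
                self._cond.wait()
            _, _, request = heapq.heappop(self._heap)
            return request

def start_workers(queue, handler, num_workers=4):
    """Drain the queue with a configurable pool of worker threads; each
    worker repeatedly takes the highest-priority request and handles it."""
    def worker():
        while True:
            handler(queue.get())
    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
```

In a real implementation the heap contents would also be mirrored to the persistence layer mentioned above, so the queue can be rebuilt after a restart.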

In more detail, some features of the FairShareScheduler are:
a) it dynamically assigns the proper priority to every new user request;
b) the priority of queued requests is recalculated periodically using the 
fairshare algorithm. This guarantees that usage of the cloud resources is 
distributed among users and groups according to the portion of the cloud 
resources allocated to them (i.e. their share) and the resources they have 
already consumed;
c) all user requests are inserted in a (persistent) priority queue and then 
processed asynchronously by the dedicated process (filtering + weighting phase) 
when compute resources become available;
d) from the client's point of view, queued requests remain in the "Scheduling" 
state until compute resources are available. No new states are added, which 
prevents any interaction issues with existing OpenStack clients;
e) user requests are dequeued by a (configurable) pool of WorkerThreads, i.e. 
there is no sequential processing of the requests;
f) requests that fail the filtering + weighting phase may be re-inserted in 
the queue up to n times (configurable).
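To make the fairshare recalculation in point b) concrete, here is a sketch of a batch-system-style priority function. The formula below (an exponential fairshare factor with half-life decay of past usage, in the style of batch schedulers such as Slurm) is an assumption for illustration, not necessarily the exact algorithm the FairShareScheduler uses:

```python
def decay_usage(usage_samples, half_life=3600.0):
    """Exponentially decay historical usage so recent consumption weighs
    more than old consumption. usage_samples: list of (age_seconds, amount)."""
    return sum(amount * 0.5 ** (age / half_life)
               for age, amount in usage_samples)

def fairshare_priority(share, decayed_usage, base=1000.0):
    """Batch-style fairshare factor: a user whose decayed usage exceeds the
    allocated share gets a lower priority; an under-user gets a higher one.
    The factor 2^(-usage/share) lies in (0, 1], equal to 1 with no usage."""
    return base * 2.0 ** (-decayed_usage / share)
```

With such a function, the periodic recalculation in b) simply re-scores every queued request from its owner's share and decayed usage, then re-orders the queue.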

We have integrated the FairShareScheduler into our OpenStack installation 
(Havana release). We're now working to adapt it to the new Icehouse release.

Does anyone have experience with the issues we found in our cloud scenario?

Could the FairShareScheduler be useful to the OpenStack community?
If so, we'll be happy to share our work.

Any feedback/comment is welcome!

Cheers,

Re: [openstack-dev] [nova][sriov] weekly meeting for july 1st and july 8th

2014-07-01 Thread Irena Berezovsky
I'll chair this week's PCI SR-IOV pass-through meeting for those who would like 
to attend.

BR,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Tuesday, July 01, 2014 5:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][sriov] weekly meeting for july 1st and july 8th

Hi,

I will be on PTO from Tuesday and will be back in the office on Wednesday, July 
9th. Therefore, I won't be present at the next two SR-IOV weekly meetings. 
Regarding the SR-IOV development status, I have finally fixed all the failures 
in the existing unit tests. Rob and I are still working on adding new unit test 
cases in the PCI and libvirt driver areas. Once that's done, we should be able 
to push another two patches up.

Thanks,
Robert

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

