Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-06-30 Thread Tim Bell
Eric,

Thanks for sharing your work, it looks like an interesting development.

I was wondering how the Keystone token expiry is handled, since tokens 
generally have a one-day validity. If a request stays queued for more than one 
day, it would no longer have a valid token. We have similar scenarios with 
Kerberos/AFS credentials in the CERN batch system. There are some interesting 
proxy renewal approaches used by Heat to get tokens at a later date which may 
be useful for this problem.

$ nova credentials
+-----------+------------------------------------------------------------------+
| Token     | Value                                                            |
+-----------+------------------------------------------------------------------+
| expires   | 2014-07-02T06:39:59Z                                             |
| id        | 1a819279121f4235a8d85c694dea5e9e                                 |
| issued_at | 2014-07-01T06:39:59.385417                                       |
| tenant    | {"id": "841615a3-ece9-4622-9fa0-fdc178ed34f8", "enabled": true,  |
|           | "description": "Personal Project for user timbell", "name":     |
|           | "Personal timbell"}                                              |
+-----------+------------------------------------------------------------------+
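
As a rough illustration (a minimal sketch, not our actual tooling), a
scheduler holding a queued request could compare the request's expected
start time against the expiry above:

    from datetime import datetime, timedelta

    # Expiry as reported by `nova credentials` above.
    token_expires = datetime.strptime("2014-07-02T06:39:59Z",
                                      "%Y-%m-%dT%H:%M:%SZ")

    def token_outlives(eta, expires=token_expires,
                       margin=timedelta(minutes=5)):
        """True if the token would still be valid at the expected start."""
        return eta + margin < expires

    # A request not expected to start for two days needs a renewed or
    # delegated token (e.g. the trust-based approach Heat uses):
    print(token_outlives(datetime(2014, 7, 1, 8, 0)))  # True
    print(token_outlives(datetime(2014, 7, 3, 8, 0)))  # False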

Tim
> -----Original Message-----
> From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it]
> Sent: 30 June 2014 16:05
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.
> 
> Hi All,
> 
> we have analyzed the nova-scheduler component (FilterScheduler) in our
> OpenStack installation used by some scientific teams.
> 
> In our scenario, the cloud resources need to be distributed among the teams by
> considering the predefined share (e.g. quota) assigned to each team, the 
> portion
> of the resources currently used and the resources they have already consumed.
> 
> We have observed that:
> 1) User requests are sequentially processed (FIFO scheduling), i.e.
> FilterScheduler doesn't provide any dynamic priority algorithm;
> 2) User requests that cannot be satisfied (e.g. if resources are not
> available) fail and are lost, i.e. in that scenario nova-scheduler doesn't
> provide any queuing of the requests;
> 3) OpenStack simply provides a static partitioning of resources among various
> projects / teams (use of quotas). If project/team 1 in a period is
> systematically underutilizing its quota and project/team 2 instead is
> systematically saturating its quota, the only solution to give more resources
> to project/team 2 is a manual change (to be done by the admin) to the
> related quotas.
> 
> The need to find a better approach to enable more effective scheduling in
> OpenStack becomes more and more evident when the number of user requests to
> be handled increases significantly. This is a well-known problem which has
> already been solved in the past for batch systems.
> 
> In order to solve those issues in our usage scenario of OpenStack, we have
> developed a prototype of a pluggable scheduler, named FairShareScheduler,
> with the objective of extending the existing OpenStack scheduler
> (FilterScheduler) by integrating a (batch-like) dynamic priority algorithm.
> 
> The architecture of the FairShareScheduler is explicitly designed to provide
> a high level of scalability. Each user request will be assigned a priority
> value calculated by considering the share allocated to the user by the
> administrator and the effective resource usage consumed in the recent past.
> All requests will be inserted in a priority queue and processed in parallel
> by a configurable pool of workers without interfering with the priority
> order. Moreover, all significant information (e.g. the priority queue) will
> be stored in a persistence layer to provide fault tolerance, while a proper
> logging system will record all relevant events, useful for auditing.
> 
> In more detail, some features of the FairShareScheduler are:
> a) It dynamically assigns the proper priority to every new user request;
> b) The priority of the queued requests will be recalculated periodically
> using the fairshare algorithm. This feature guarantees that the usage of the
> cloud resources is distributed among users and groups by considering the
> portion of the cloud resources allocated to them (i.e. share) and the
> resources already consumed;
> c) All user requests will be inserted in a (persistent) priority queue and
> then processed asynchronously by the dedicated process (filtering +
> weighting phase) when compute resources are available;
> d) From the client point of view the queued requests remain in "Scheduling"
> state until the compute resources are available. No new states are added:
> this prevents any possible interaction issue with the OpenStack clients;
> e) User requests are dequeued by a pool of WorkerThreads (configurable),
> i.e. no sequential processing of the requests;
> f) Requests that fail the filtering + weighting phase may be re-inserted in
> the queue up to n times (configurable).

Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (pranesh pandurangan)

2014-06-30 Thread Ivan Melnikov
On 01.07.2014 04:05, Joshua Harlow wrote:
> Greetings all stackers,
> 
> I propose that we add Pranesh Pandurangan[1] to the taskflow-core team[2].
> 
> Pranesh has been actively contributing to taskflow for a while now, both
> in helping develop code and helping with the review load. He has provided
> quality reviews and is doing an awesome job with the various taskflow
> concepts
> and helping make taskflow the best library it can be!
> 
> Overall I think he would make a great addition to the core review team.
> 
> Please respond with +1/-1.

+1

-- 
WBR,
Ivan A. Melnikov



[openstack-dev] [gantt] scheduler group meeting agenda 7/1

2014-06-30 Thread Dugger, Donald D
1)  Forklift (tasks & status)

2)  Fair Share scheduler (just a heads up)

3)  Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



[openstack-dev] [nova][sriov] weekly meeting for july 1st and july 8th

2014-06-30 Thread Robert Li (baoli)
Hi,

I will be on PTO from Tuesday, returning to the office on Wednesday, July 9th. 
Therefore, I won’t be present at the next two SR-IOV weekly meetings. Regarding 
the sr-iov development status, I finally fixed all the failures in the existing 
unit tests. Rob and I are still working on adding new unit test cases in the 
PCI and libvirt driver area. Once that’s done, we should be able to push 
another two patches up.

Thanks,
Robert



Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (pranesh pandurangan)

2014-06-30 Thread Changbin Liu
+1


Thanks

Changbin


On Mon, Jun 30, 2014 at 8:05 PM, Joshua Harlow wrote:

>  Greetings all stackers,
>
>  I propose that we add Pranesh Pandurangan[1] to the taskflow-core
> team[2].
>
>  Pranesh has been actively contributing to taskflow for a while now, both
> in helping develop code and helping with the review load. He has provided
> quality reviews and is doing an awesome job with the various taskflow
> concepts
> and helping make taskflow the best library it can be!
>
>  Overall I think he would make a great addition to the core review team.
>
>  Please respond with +1/-1.
>
>  Thanks much!
>
>  --
>
>  Joshua Harlow
>
>  It's openstack, relax... | harlo...@yahoo-inc.com
>
>  [1] https://launchpad.net/~praneshp
> [2] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam
>


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-06-30 Thread Jay Pipes

On 06/30/2014 07:08 PM, Anita Kuno wrote:

On 06/30/2014 04:22 PM, Jay Pipes wrote:

Hi Stackers,

Some recent ML threads [1] and a hot IRC meeting today [2] brought up
some legitimate questions around how a newly-proposed Stackalytics
report page for Neutron External CI systems [3] represented the results
of an external CI system as "successful" or not.

First, I want to say that Ilya and all those involved in the
Stackalytics program simply want to provide the most accurate
information to developers in a format that is easily consumed. While
there need to be some changes in how data is shown (and the wording of
things like "Tests Succeeded"), I hope that the community knows there
isn't any ill intent on the part of Mirantis or anyone who works on
Stackalytics. OK, so let's keep the conversation civil -- we're all
working towards the same goals of transparency and accuracy. :)

Alright, now, Anita and Kurt Taylor were asking a very poignant question:

"But what does CI tested really mean? just running tests? or tested to
pass some level of requirements?"

In this nascent world of external CI systems, we have a set of issues
that we need to resolve:

1) All of the CI systems are different.

Some run Bash scripts. Some run Jenkins slaves and devstack-gate
scripts. Others run custom Python code that spawns VMs and publishes
logs to some public domain.

As a community, we need to decide whether it is worth putting in the
effort to create a single, unified, installable and runnable CI system,
so that we can legitimately say "all of the external systems are
identical, with the exception of the driver code for vendor X being
substituted in the Neutron codebase."

If the goal of the external CI systems is to produce reliable,
consistent results, I feel the answer to the above is "yes", but I'm
interested to hear what others think. Frankly, in the world of
benchmarks, it would be unthinkable to say "go ahead and everyone run
your own benchmark suite", because you would get wildly different
results. A similar problem has emerged here.

2) There is no mediation or verification that the external CI system is
actually testing anything at all

As a community, we need to decide whether the current system of
self-policing should continue. If it should, then language on reports
like [3] should be very clear that any numbers derived from such systems
should be taken with a grain of salt. Use of the word "Success" should
be avoided, as it has connotations (in English, at least) that the
result has been verified, which is simply not the case as long as no
verification or mediation occurs for any external CI system.

3) There is no clear indication of what tests are being run, and
therefore there is no clear indication of what "success" is

I think we can all agree that a test has three possible outcomes: pass,
fail, and skip. The results of a test suite run are therefore nothing
more than the aggregation of which tests passed, which failed, and which
were skipped.
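
For instance, a trivial sketch of that aggregation, over hypothetical
per-test outcomes:

    from collections import Counter

    # Hypothetical (test, outcome) pairs as reported by one CI run.
    results = [("test_a", "pass"), ("test_b", "fail"),
               ("test_c", "skip"), ("test_d", "pass")]
    summary = Counter(outcome for _, outcome in results)
    print("%d passed, %d failed, %d skipped"
          % (summary["pass"], summary["fail"], summary["skip"]))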

As a community, we must document, for each project, the expected set of
tests that must be run for each patch merged into the project's source
tree. This documentation should be discoverable so that reports
like [3] can be crystal-clear on what the data shown actually means. The
report is simply displaying the data it receives from Gerrit. The
community needs to be proactive in saying "this is what is expected to
be tested." This alone would allow the report to give information such
as "External CI system ABC performed the expected tests. X tests passed.
Y tests failed. Z tests were skipped." Likewise, it would also make it
possible for the report to give information such as "External CI system
DEF did not perform the expected tests.", which is excellent information
in and of itself.

===

In thinking about the likely answers to the above questions, I believe
it would be prudent to change the Stackalytics report in question [3] in
the following ways:

a. Change the "Success %" column header to "% Reported +1 Votes"
b. Change the phrase " Green cell - tests ran successfully, red cell -
tests failed" to "Green cell - System voted +1, red cell - System voted -1"

and then, when we have more and better data (for example, # tests
passed, failed, skipped, etc), we can provide more detailed information
than just "reported +1" or not.

Thoughts?

Best,
-jay

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-June/038933.html
[2]
http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-30-18.01.log.html

[3] http://stackalytics.com/report/ci/neutron/7


Hi Jay:

Thanks for starting this thread. You raise some interesting questions.

The question I had identified as needing definition is "what algorithm
do we use to assess fitness of a third party ci system".

http://e

Re: [openstack-dev] [ceilometer] How to filter out meters whose resources have been deleted?

2014-06-30 Thread Ke Xia
Does anyone know about this?
On 2014-6-30 6:35 PM, "Ke Xia" wrote:

> Hi,
>
> As time goes on, meters will be a huge list, and some meters whose
> resources have been deleted may be useless for me, can I filter them out
> from meter list?
>
> Thanks
>


[openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (pranesh pandurangan)

2014-06-30 Thread Joshua Harlow
Greetings all stackers,

I propose that we add Pranesh Pandurangan[1] to the taskflow-core team[2].

Pranesh has been actively contributing to taskflow for a while now, both
in helping develop code and helping with the review load. He has provided
quality reviews and is doing an awesome job with the various taskflow concepts
and helping make taskflow the best library it can be!

Overall I think he would make a great addition to the core review team.

Please respond with +1/-1.

Thanks much!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com

[1] https://launchpad.net/~praneshp
[2] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-06-30 Thread Dugger, Donald D
Eric-

We have a weekly scheduler sub-group (code name gantt) IRC meeting at 1500 UTC 
on Tuesdays.  This would be an excellent topic to bring up at one of those 
meetings as a lot of people with interest in the scheduler will be there.  It's 
a little short notice for tomorrow but do you think you could attend next week, 
7/8, to talk about this?

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-----Original Message-----
From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it] 
Sent: Monday, June 30, 2014 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

Hi All,

we have analyzed the nova-scheduler component (FilterScheduler) in our 
OpenStack installation used by some scientific teams.

In our scenario, the cloud resources need to be distributed among the teams by 
considering the predefined share (e.g. quota) assigned to each team, the 
portion of the resources currently used and the resources they have already 
consumed.

We have observed that:
1) User requests are sequentially processed (FIFO scheduling), i.e. 
FilterScheduler doesn't provide any dynamic priority algorithm;
2) User requests that cannot be satisfied (e.g. if resources are not
available) fail and are lost, i.e. in that scenario nova-scheduler doesn't 
provide any queuing of the requests;
3) OpenStack simply provides a static partitioning of resources among various 
projects / teams (use of quotas). If project/team 1 in a period is 
systematically underutilizing its quota and project/team 2 instead is 
systematically saturating its quota, the only solution to give more resources 
to project/team 2 is a manual change (to be done by the admin) to the related 
quotas.

The need to find a better approach to enable more effective scheduling in 
OpenStack becomes more and more evident when the number of user requests to be 
handled increases significantly. This is a well-known problem which has 
already been solved in the past for batch systems.

In order to solve those issues in our usage scenario of OpenStack, we have 
developed a prototype of a pluggable scheduler, named FairShareScheduler, with 
the objective of extending the existing OpenStack scheduler (FilterScheduler) 
by integrating a (batch-like) dynamic priority algorithm.

The architecture of the FairShareScheduler is explicitly designed to provide a 
high level of scalability. Each user request will be assigned a priority value 
calculated by considering the share allocated to the user by the administrator 
and the effective resource usage consumed in the recent past. 
All requests will be inserted in a priority queue and processed in parallel by 
a configurable pool of workers without interfering with the priority order. 
Moreover, all significant information (e.g. the priority queue) will be stored 
in a persistence layer to provide fault tolerance, while a proper logging 
system will record all relevant events, useful for auditing.

In more detail, some features of the FairShareScheduler are:
a) It dynamically assigns the proper priority to every new user request;
b) The priority of the queued requests will be recalculated periodically using 
the fairshare algorithm. This feature guarantees that the usage of the cloud 
resources is distributed among users and groups by considering the portion of 
the cloud resources allocated to them (i.e. share) and the resources already 
consumed;
c) All user requests will be inserted in a (persistent) priority queue and then 
processed asynchronously by the dedicated process (filtering + weighting phase) 
when compute resources are available;
d) From the client point of view the queued requests remain in "Scheduling" 
state until the compute resources are available. No new states are added: this 
prevents any possible interaction issue with the OpenStack clients;
e) User requests are dequeued by a pool of WorkerThreads (configurable), i.e. 
no sequential processing of the requests;
f) Requests that fail the filtering + weighting phase may be re-inserted in the 
queue up to n times (configurable).
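
To make this concrete, a minimal sketch of such a fairshare priority queue
(illustrative names and a simplified usage model assumed here; this is not
the actual FairShareScheduler code):

    import heapq
    import itertools

    seq = itertools.count()  # tie-breaker: preserves FIFO among equal priorities

    def fairshare_priority(share, recent_usage):
        # Higher administrator-assigned share and lower recent usage yield
        # a higher priority (a simplified model, assumed for illustration).
        return share / (1.0 + recent_usage)

    queue = []  # heap entries: (-priority, seqno, request)

    def enqueue(request, share, recent_usage):
        prio = fairshare_priority(share, recent_usage)
        heapq.heappush(queue, (-prio, next(seq), request))

    def dequeue():
        # Each worker in the (configurable) pool pops the highest priority.
        # A periodic task would recompute priorities and re-heapify (feature b).
        return heapq.heappop(queue)[2]

    enqueue({"user": "alice", "flavor": "m1.small"}, share=0.6, recent_usage=10.0)
    enqueue({"user": "bob", "flavor": "m1.large"}, share=0.4, recent_usage=0.5)
    print(dequeue())  # bob first: low recent usage outweighs his smaller share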

We have integrated the FairShareScheduler in our OpenStack installation 
(Havana release). We're now working to adapt the FairShareScheduler to the 
new Icehouse release.

Does anyone have experience with the issues found in our cloud scenario?

Could the FairShareScheduler be useful for the OpenStack community?
In that case, we'll be happy to share our work.

Any feedback/comment is welcome!

Cheers,
Eric.




Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-06-30 Thread Anita Kuno
On 06/30/2014 04:22 PM, Jay Pipes wrote:
> Hi Stackers,
> 
> Some recent ML threads [1] and a hot IRC meeting today [2] brought up
> some legitimate questions around how a newly-proposed Stackalytics
> report page for Neutron External CI systems [3] represented the results
> of an external CI system as "successful" or not.
> 
> First, I want to say that Ilya and all those involved in the
> Stackalytics program simply want to provide the most accurate
> information to developers in a format that is easily consumed. While
> there need to be some changes in how data is shown (and the wording of
> things like "Tests Succeeded"), I hope that the community knows there
> isn't any ill intent on the part of Mirantis or anyone who works on
> Stackalytics. OK, so let's keep the conversation civil -- we're all
> working towards the same goals of transparency and accuracy. :)
> 
> Alright, now, Anita and Kurt Taylor were asking a very poignant question:
> 
> "But what does CI tested really mean? just running tests? or tested to
> pass some level of requirements?"
> 
> In this nascent world of external CI systems, we have a set of issues
> that we need to resolve:
> 
> 1) All of the CI systems are different.
> 
> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> scripts. Others run custom Python code that spawns VMs and publishes
> logs to some public domain.
> 
> As a community, we need to decide whether it is worth putting in the
> effort to create a single, unified, installable and runnable CI system,
> so that we can legitimately say "all of the external systems are
> identical, with the exception of the driver code for vendor X being
> substituted in the Neutron codebase."
> 
> If the goal of the external CI systems is to produce reliable,
> consistent results, I feel the answer to the above is "yes", but I'm
> interested to hear what others think. Frankly, in the world of
> benchmarks, it would be unthinkable to say "go ahead and everyone run
> your own benchmark suite", because you would get wildly different
> results. A similar problem has emerged here.
> 
> 2) There is no mediation or verification that the external CI system is
> actually testing anything at all
> 
> As a community, we need to decide whether the current system of
> self-policing should continue. If it should, then language on reports
> like [3] should be very clear that any numbers derived from such systems
> should be taken with a grain of salt. Use of the word "Success" should
> be avoided, as it has connotations (in English, at least) that the
> result has been verified, which is simply not the case as long as no
> verification or mediation occurs for any external CI system.
> 
> 3) There is no clear indication of what tests are being run, and
> therefore there is no clear indication of what "success" is
> 
> I think we can all agree that a test has three possible outcomes: pass,
> fail, and skip. The results of a test suite run are therefore nothing
> more than the aggregation of which tests passed, which failed, and which
> were skipped.
> 
> As a community, we must document, for each project, the expected set of
> tests that must be run for each patch merged into the project's source
> tree. This documentation should be discoverable so that reports
> like [3] can be crystal-clear on what the data shown actually means. The
> report is simply displaying the data it receives from Gerrit. The
> community needs to be proactive in saying "this is what is expected to
> be tested." This alone would allow the report to give information such
> as "External CI system ABC performed the expected tests. X tests passed.
> Y tests failed. Z tests were skipped." Likewise, it would also make it
> possible for the report to give information such as "External CI system
> DEF did not perform the expected tests.", which is excellent information
> in and of itself.
> 
> ===
> 
> In thinking about the likely answers to the above questions, I believe
> it would be prudent to change the Stackalytics report in question [3] in
> the following ways:
> 
> a. Change the "Success %" column header to "% Reported +1 Votes"
> b. Change the phrase " Green cell - tests ran successfully, red cell -
> tests failed" to "Green cell - System voted +1, red cell - System voted -1"
> 
> and then, when we have more and better data (for example, # tests
> passed, failed, skipped, etc), we can provide more detailed information
> than just "reported +1" or not.
> 
> Thoughts?
> 
> Best,
> -jay
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/038933.html
> [2]
> http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-30-18.01.log.html
> 
> [3] http://stackalytics.com/report/ci/neutron/7
> 
Hi Jay:

Thanks for starting this thread. You raise some interesting questions.

The question I had identified as needing definition is "what algorithm
do we use to assess fitness of a third party ci system".

Re: [openstack-dev] [Nova] Meeting this week?

2014-06-30 Thread Michael Still
Ok, let's skip the meeting this week so people can have a break.

Cheers,
Michael

On Tue, Jul 1, 2014 at 3:45 AM, melanie witt  wrote:
> On Jun 29, 2014, at 17:01, Michael Still  wrote:
>
>> Hi. The meeting this week would be on the 3rd of July, which I assume
>> means that many people will be out of the office. Do people think it's
>> worth running the meeting or shall we give this week a miss?
>
> July 3 is a company holiday where I am, so I'm also +1 on skipping this week.
>



-- 
Rackspace Australia



[openstack-dev] [Fuel] Triaging bugs: milestones vs release series

2014-06-30 Thread Dmitry Borodaenko
When you create a bug against a project (in our case, fuel) in
Launchpad, it is always initially targeted at the default release
series (currently, 5.1.x). On the bug summary, that isn't explicitly
stated and shows as being targeted to the project in general (Fuel for
OpenStack). As you add more release series to a bug, these will be
listed under the release series name (e.g. 5.0.x).

Unfortunately, Launchpad doesn't limit the list of milestones you can
target to the targeted release series, so it will happily allow you to
target a bug at the 4.1.x release series and set the milestone in that
series to 5.1.

A less obvious inconsistency is when a bug is found in a stable
release series like 5.0.x: it seems natural to target it to a milestone
like 5.0.1 and be done with it. The problem with that approach is that
there's no way to reflect whether this bug is relevant for current
release series (5.1.x) and if it is, to track status of the fix
separately in current and stable release series.

Therefore, when triaging new bugs in stable versions of Fuel or
Mirantis OpenStack, please set the milestone to the next release in
the current release focus (5.1.x), and target to the series it was
found in separately. If there are more recent stable release series,
target those as well.

Example: a bug is found in 4.1.1. Set primary milestone to 5.1 (as
long as current release focus is 5.1.x and 5.1 is the next milestone
in that series), target 2 more release series: 4.1.x and 5.0.x, set
milestones for those to 4.1.2 and 5.0.1 respectively.

If there is reason to believe that the bug does not apply to some of
the targeted release series, explain that in a comment and mark the
bug Invalid for that release series. If the bug is present in a series
but cannot be addressed there (e.g. priority is not high enough to do
a backport), mark it Won't Fix for that series.

If there are no objections to this approach, I'll put it in Fuel wiki.

Thanks,
-DmitryB

-- 
Dmitry Borodaenko



[openstack-dev] [Ironic] Juno priorities and spec review timeline

2014-06-30 Thread Devananda van der Veen
Hi all!

We're roughly at the midway point between summit and release, and I
feel that's a good time to take a look at our progress compared to the
goals we set out at the design summit. To that end, I re-opened my
summit notes about what features we had prioritized in Atlanta, and
engaged many of the core reviewers in a discussion last Friday to
estimate what we'll have time to review and land in the remainder of
this cycle. Based on that, I've created this spreadsheet to represent
those expectations and our current progress towards what we think we
can achieve this cycle:

https://docs.google.com/spreadsheets/d/1Hxyfy60hN_Fit0b-plsPzK6yW3ePQC5IfwuzJwltlbo

Aside from several cleanup- and test-related tasks, these goals
correlate to spec reviews that have already been proposed. I've
crossed off ones which we discussed at the summit, but for which no
proposal has yet been submitted. The spec-review team and I will be
referring to this to help us prioritize specs reviews. While I am not
yet formally blocking proposals which do not fit within this list of
priorities, the review team is working with a large back-log and
probably won't have time to review anything else this cycle. If you're
concerned that you won't be able to land your favorite feature in
Juno, the best thing you can do is to participate in reviewing other
people's code, join the core team, and help us accelerate the
development process of "K".

Borrowing a little from Nova's timeline, I have proposed the following
timeline for Ironic. Note that dates listed are Thursdays, and numbers
in parentheses are weeks until feature freeze.

You may also note that I'll be offline for two weeks immediately prior
to the Juno-3 milestone, which is another reason why I'd like the core
review team to have a solid plan (read: approved specs) in place by
Aug 14.



July 3 (-9): spec review day on Wednesday (July 2)
 focus on landing specs for our priorities:
https://docs.google.com/spreadsheets/d/1Hxyfy60hN_Fit0b-plsPzK6yW3ePQC5IfwuzJwltlbo

Jul 24 (-6): Juno-2 milestone tagged
 new spec proposal freeze

Jul 31 (-5): midcycle meetup (July 27-30)
 https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

Aug 14 (-3): last spec review day on Wednesday (Aug 13)

Aug 21 (-2): PTL offline all week

Aug 28 (-1): PTL offline all week

Sep  4 ( 0): Juno-3 milestone tagged
 Feature freeze
 K opens for spec proposals
 Unmerged J spec proposals must rebase on K
 Merged J specs with no code proposed are deleted and may
be re-proposed for K
 Merged J specs with code proposed need to be reviewed for
feature-freeze-exception

Sep 25 (+3): RC 1 build expected
 K spec reviews start

Oct 16 (+6): Release!

Oct 30 (+8): K summit spec proposal freeze
 K summit sessions should have corresponding spec proposal

Nov  6 (+9): K design summit


Thanks!
Devananda



[openstack-dev] [Octavia] Temporary project leadership team

2014-06-30 Thread Stephen Balukoff
Howdy folks,

Given the feedback from people on this list last week to my suggestion
about holding elections for PTL and core reviewers for the Octavia project,
it's clear that since we've only recently been added as a stackforge
project, we don't have any code or review history on which people can base
their opinions for voting on project leadership.

The problem is that if we want to be able to use the gerrit system for
reviewing code, specs, and documentation then somebody needs the ability to
+2 proposed changes and workflow so they get merged into the source tree
(ie. if we want to follow the usual OpenStack process on this, which we do.)

So after discussing this with the group of people who have been working
together on the design for this project for the last several months, the
consensus was to elect from among ourselves an interim PTL and interim core
reviewers until we have enough official history on this project to be able
to actually do meaningful public elections. We are anticipating holding
these elections around the time of the Juno summit.

Obviously, please feel free to disagree with our process and/or decision
for doing this, but I would ask that if you do, please also do us the
courtesy of explaining how we're supposed to merge code using the gerrit
system without having a PTL or core reviewers.

Anyway, this is the list of folks we agreed upon:

Interim PTL: Stephen Balukoff
Interim core reviewers: Vivek Jain, Brandon Logan, German Eichberger

Thanks,
Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


[openstack-dev] [Neutron] DVR demo and how-to

2014-06-30 Thread Armando M.
Hi folks,

The DVR team is working really hard to complete this important task for
Juno and Neutron.

In order to help see this feature in action, a video has been made
available; a link can be found in [2].

There is still some work to do, however I wanted to remind you that all of
the relevant information is available on the wiki [1, 2] and Gerrit [3].

[1] - https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
[2] - https://wiki.openstack.org/wiki/Neutron/DVR/HowTo
[3] - https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z

More to follow!

Cheers,
Armando


Re: [openstack-dev] [OpenStack-Dev] OSLO messaging update and icehouse config files

2014-06-30 Thread Mark McLoughlin
On Mon, 2014-06-30 at 15:35 -0600, John Griffith wrote:
> 
> 
> 
> 
> On Mon, Jun 30, 2014 at 3:17 PM, Mark McLoughlin wrote:
> On Mon, 2014-06-30 at 12:04 -0600, John Griffith wrote:
> > Hey Everyone,
> >
> >
> > So I sent a note out yesterday asking about config changes
> brought in
> > to Icehouse due to the OSLO Messaging update that went out
> over the
> > week-end here.  My initial email prior to realizing the
> update that
> > caused the problem was OSLO Messaging update here [1].
> 
> 
> (Periodic reminder that Oslo is not an acronym)
> 
> > In the meantime I tried updating the cinder.conf sample in
> Cinder's
> > stable Icehouse branch, but noticed that py26 doesn't seem
> to pick up
> > the changes when running the oslo conf generation tools
> against
> > oslo.messaging.  I haven't spent any time digging into this
> yet, was
> > hoping that perhaps somebody from the OSLO team or somewhere
> else
> > maybe had some insight as to what's going on here.
> >
> >
> > Here's the patch I submitted that shows the failure on py26
> and
> > success on py27 [2].  I'll get around to this eventually if
> nobody
> > else knows anything off the top of their head.
> >
> >
> > Thanks,
> > John
> >
> >
> > [1]:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/038926.html
> > [2]: https://review.openstack.org/#/c/103426/
> 
> 
> Ok, that new cinder.conf.sample is showing changes caused by
> these
> oslo.messaging changes:
> 
>   https://review.openstack.org/101583
>   https://review.openstack.org/99291
> 
> Both of those changes were first released in 1.4.0.0a1 which
> is an alpha
> version targeting Juno and are not available in the 1.3.0
> Icehouse
> version - i.e. 1.4.0.0a1 should not be used with
> stable/icehouse Cinder.
> 
> It seems 1.3.0 *is* being used:
> 
> 
> 
> http://logs.openstack.org/26/103426/1/check/gate-cinder-python26/5c6c1dd/console.html
> 
> 2014-06-29 19:17:50.154 | oslo.messaging==1.3.0
> 
> and the output is just confusing:
> 
>   2014-06-29 19:17:49.900 |
> --- /tmp/cinder.UtGHjm/cinder.conf.sample   2014-06-29
> 19:17:50.270071741 +
>   2014-06-29 19:17:49.900 | +++ etc/cinder/cinder.conf.sample
> 2014-06-29 19:10:48.396072037 +
> 
>   ...
> 
>   2014-06-29 19:17:49.903 | +[matchmaker_redis]
>   2014-06-29 19:17:49.903 | +
> 
> i.e. it's showing that the file you proposed was generated
> with
> 1.4.0.0a1 and the file generated during the test job was
> generated with
> 1.3.0. Which is what I'd expect - the update you proposed is
> not
> appropriate for stable/icehouse.
> 
> So why is the py27 job passing?
> 
> 
> 
> http://logs.openstack.org/26/103426/1/check/gate-cinder-python27/7844c61/console.html
> 
>   2014-06-29 19:21:12.875 | oslo.messaging==1.4.0.0a2
> 
> That's the problem right there - 1.4.0.0a2 should not be
> getting
> installed on the stable/icehouse branch. I'm not sure why it
> is. Someone
> on #openstack-infra could probably help figure it out.
> 
> Thanks,
> Mark.
> 
> 
> 
> 
> 
> Thanks Mark... so the problem is that oslo.messaging in requirements is >=
> in stable/icehouse [1] (please note I used Oslo not OSLO).
> 
> 
> [1]:
> https://github.com/openstack/requirements/blob/stable/icehouse/global-requirements.txt#L49
> 
> 
> Thanks for pointing me in the right direction.

Ah, yes! This is the problem:

  oslo.messaging>=1.3.0a9

This essentially allows *any* alpha release of oslo.messaging to be
used. We should change stable/icehouse to simply be:

  oslo.messaging>=1.3.0
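
A minimal sketch of the pre-release matching rule at work (illustrated with
the present-day packaging library, not the requirements machinery of the day):

    from packaging.specifiers import SpecifierSet

    alpha = "1.4.0.0a2"

    # A specifier that itself names a pre-release implicitly opts in to
    # pre-releases, so any newer alpha satisfies it:
    print(SpecifierSet(">=1.3.0a9").contains(alpha))   # True

    # A plain release specifier excludes pre-releases by default:
    print(SpecifierSet(">=1.3.0").contains(alpha))     # False
    print(SpecifierSet(">=1.3.0").contains("1.3.0"))   # True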

I'm happy to do that tomorrow, but I suspect you'll get there first

Thanks,
Mark.





Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-06-30 Thread Belmiro Moreira
Hi Eric,
definitely...

In my view a "FairShareScheduler" could be a very interesting option for
private clouds that support scientific communities. Basically this is the
model used by batch systems in order to fully use the available resources.

I'm very curious about the work that you are doing.
Is it available on GitHub?

Belmiro

--

Belmiro Moreira

CERN

Email: belmiro.more...@cern.ch

IRC: belmoreira


On Mon, Jun 30, 2014 at 4:05 PM, Eric Frizziero wrote:

> Hi All,
>
> we have analyzed the nova-scheduler component (FilterScheduler) in our
> OpenStack installation used by some scientific teams.
>
> In our scenario, the cloud resources need to be distributed among the
> teams by considering the predefined share (e.g. quota) assigned to each
> team, the portion of the resources currently used and the resources they
> have already consumed.
>
> We have observed that:
> 1) User requests are sequentially processed (FIFO scheduling), i.e.
> FilterScheduler doesn't provide any dynamic priority algorithm;
> 2) User requests that cannot be satisfied (e.g. if resources are not
> available) fail and are lost, i.e. in that scenario nova-scheduler
> doesn't provide any queuing of the requests;
> 3) OpenStack simply provides a static partitioning of resources among
> various projects / teams (use of quotas). If project/team 1 in a period is
> systematically underutilizing its quota and project/team 2 instead is
> systematically saturating its quota, the only solution to give more
> resources to project/team 2 is a manual change (to be done by the admin) to
> the related quotas.
>
> The need to find a better approach to enable more effective scheduling
> in OpenStack becomes more and more evident when the number of user
> requests to be handled increases significantly. This is a well-known
> problem which has already been solved in the past for batch systems.
>
> In order to solve those issues in our usage scenario of OpenStack, we have
> developed a prototype of a pluggable scheduler, named FairShareScheduler,
> with the objective of extending the existing OpenStack scheduler
> (FilterScheduler) by integrating a (batch-like) dynamic priority algorithm.
>
> The architecture of the FairShareScheduler is explicitly designed to
> provide a high level of scalability. Each user request will be assigned a
> priority value calculated by considering the share allocated to the user by
> the administrator and the effective resource usage consumed in the recent
> past. All requests will be inserted in a priority queue and processed in
> parallel by a configurable pool of workers without interfering with the
> priority order. Moreover, all significant information (e.g. the priority
> queue) will be stored in a persistence layer to provide fault tolerance,
> while a proper logging system will record all relevant events, useful for
> auditing.
>
> In more detail, some features of the FairShareScheduler are:
> a) It dynamically assigns the proper priority to every new user request;
> b) The priority of the queued requests will be recalculated periodically
> using the fairshare algorithm. This feature guarantees that the usage of the
> cloud resources is distributed among users and groups by considering the
> portion of the cloud resources allocated to them (i.e. share) and the
> resources already consumed;
> c) All user requests will be inserted in a (persistent) priority queue and
> then processed asynchronously by the dedicated process (filtering +
> weighting phase) when compute resources are available;
> d) From the client point of view the queued requests remain in
> “Scheduling” state until the compute resources are available. No new states
> are added: this prevents any possible interaction issue with the OpenStack
> clients;
> e) User requests are dequeued by a pool of WorkerThreads (configurable),
> i.e. no sequential processing of the requests;
> f) Requests that fail the filtering + weighting phase may be re-inserted
> in the queue up to n times (configurable).
>
> We have integrated the FairShareScheduler in our OpenStack installation
> (Havana release). We're now working to adapt the FairShareScheduler to
> the new Icehouse release.
>
> Does anyone have experience with the issues found in our cloud scenario?
>
> Could the FairShareScheduler be useful for the OpenStack community?
> In that case, we'll be happy to share our work.
>
> Any feedback/comment is welcome!
>
> Cheers,
> Eric.
>
>


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-06-30 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2014-06-30 14:11:08 -0700:
> Hi all,
> 
> Specs are an interesting idea that may be really useful when you need to
> discuss large topics:
> 1) work on API
> 2) Large refactoring
> 3) Large features
> 4) Performance, scale, ha, security issues that requires big changes
> 
> And I really dislike the idea of adding a spec for every patch, especially
> when changes (features) are small, don't affect too much, and are optional.
> It really kills OpenStack, and will drastically slow down the process of
> contribution and reduce the number of contributors.

Who says there needs to be a spec for every patch?

I agree with your items above. Any other change is likely just fixing a
bug.



Re: [openstack-dev] [OpenStack-Dev] OSLO messaging update and icehouse config files

2014-06-30 Thread John Griffith
On Mon, Jun 30, 2014 at 3:17 PM, Mark McLoughlin  wrote:

> On Mon, 2014-06-30 at 12:04 -0600, John Griffith wrote:
> > Hey Everyone,
> >
> >
> > So I sent a note out yesterday asking about config changes brought in
> > to Icehouse due to the OSLO Messaging update that went out over the
> > week-end here.  My initial email prior to realizing the update that
> > caused the problem was OSLO Messaging update here [1].
>
> (Periodic reminder that Oslo is not an acronym)
>
> > In the meantime I tried updating the cinder.conf sample in Cinder's
> > stable Icehouse branch, but noticed that py26 doesn't seem to pick up
> > the changes when running the oslo conf generation tools against
> > oslo.messaging.  I haven't spent any time digging into this yet, was
> > hoping that perhaps somebody from the OSLO team or somewhere else
> > maybe had some insight as to what's going on here.
> >
> >
> > Here's the patch I submitted that shows the failure on py26 and
> > success on py27 [2].  I'll get around to this eventually if nobody
> > else knows anything off the top of their head.
> >
> >
> > Thanks,
> > John
> >
> >
> > [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/038926.html
> > [2]: https://review.openstack.org/#/c/103426/
>
> Ok, that new cinder.conf.sample is showing changes caused by these
> oslo.messaging changes:
>
>   https://review.openstack.org/101583
>   https://review.openstack.org/99291
>
> Both of those changes were first released in 1.4.0.0a1 which is an alpha
> version targeting Juno and are not available in the 1.3.0 Icehouse
> version - i.e. 1.4.0.0a1 should not be used with stable/icehouse Cinder.
>
> It seems 1.3.0 *is* being used:
>
>
> http://logs.openstack.org/26/103426/1/check/gate-cinder-python26/5c6c1dd/console.html
>
> 2014-06-29 19:17:50.154 | oslo.messaging==1.3.0
>
> and the output is just confusing:
>
>   2014-06-29 19:17:49.900 | --- /tmp/cinder.UtGHjm/cinder.conf.sample
> 2014-06-29 19:17:50.270071741 +
>   2014-06-29 19:17:49.900 | +++ etc/cinder/cinder.conf.sample   2014-06-29
> 19:10:48.396072037 +
>
>   ...
>
>   2014-06-29 19:17:49.903 | +[matchmaker_redis]
>   2014-06-29 19:17:49.903 | +
>
> i.e. it's showing that the file you proposed was generated with
> 1.4.0.0a1 and the file generated during the test job was generated with
> 1.3.0. Which is what I'd expect - the update you proposed is not
> appropriate for stable/icehouse.
>
> So why is the py27 job passing?
>
>
> http://logs.openstack.org/26/103426/1/check/gate-cinder-python27/7844c61/console.html
>
>   2014-06-29 19:21:12.875 | oslo.messaging==1.4.0.0a2
>
> That's the problem right there - 1.4.0.0a2 should not be getting
> installed on the stable/icehouse branch. I'm not sure why it is. Someone
> on #openstack-infra could probably help figure it out.
>
> Thanks,
> Mark.
>
>
>

Thanks Mark... so the problem is that oslo.messaging in requirements is >= in
stable/icehouse [1] (please note I used Oslo not OSLO).

[1]:
https://github.com/openstack/requirements/blob/stable/icehouse/global-requirements.txt#L49

Thanks for pointing me in the right direction.
John


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-06-30 Thread Joshua Harlow
Thanks boris for starting this topic,

There is a balance here that needs to be worked out and I've seen specs start 
to turn into requirements for every single patch (even if the patch is pretty 
small). I hope we can rework the 'balance in the force' to avoid being so 
strict that every little thing requires a spec. This will not end well for us 
as a community.

How have others thought the spec process has worked out so far? Too much 
overhead, too little…?

I personally am of the opinion that specs should be used for large topics 
(defining large is of course arbitrary), and I hope we find the right balance 
to avoid scaring everyone away from working with OpenStack. Maybe all of this 
is part of OpenStack maturing, I'm not sure, but it'd be great if we could 
have some guidelines around when a spec is needed and when it isn't, and to 
consider, when requesting a spec, that the person you've asked may get 
frustrated and just leave the community (and we must not let this happen) if 
you ask for one without clearly explaining why.

From: Boris Pavlovic <bpavlo...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, June 30, 2014 at 2:11 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [all][specs] Please stop doing specs for any changes 
in projects

Hi all,

Specs are an interesting idea that may be really useful when you need to 
discuss large topics:
1) work on API
2) Large refactoring
3) Large features
4) Performance, scale, ha, security issues that requires big changes

And I really dislike the idea of adding a spec for every patch, especially 
when changes (features) are small, don't affect too much, and are optional. It 
really kills OpenStack, and will drastically slow down the process of 
contribution and reduce the number of contributors.

Thanks.


Best regards,
Boris Pavlovic


Re: [openstack-dev] [infra][oslo][neutron] Need help getting oslo.messaging 1.4.0.0a2 in global requirements

2014-06-30 Thread Mark McLoughlin
On Mon, 2014-06-30 at 16:52 +, Paul Michali (pcm) wrote:
> I have review 103536 out to add this version to global
> requirements, so that Neutron has an oslo fix (review 102909) for
> encoding failure, which affects some gate runs. This review for global
> requirements is failing requirements check
> (http://logs.openstack.org/36/103536/1/check/check-requirements-integration-dsvm/6d9581c/console.html#_2014-06-30_12_34_56_921).
>  I did a recheck bug 1334898, but see the same error, with the release not 
> found, even though it is in PyPI. Infra folks say this is a known issue with 
> pushing out pre-releases.
> 
> 
> Do we have a work-around?
> Any proposed solution to try?

That makes two oslo alpha releases which are failing
openstack/requirements checks:

  https://review.openstack.org/103256
  https://review.openstack.org/103536

and an issue with the py27 stable/icehouse test jobs seemingly pulling
in oslo.messaging 1.4.0.0a2:

  http://lists.openstack.org/pipermail/openstack-dev/2014-June/039021.html

and these comments on IRC:

  
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2014-06-30.log

  2014-06-30T15:27:33   hi. Need help with getting latest oslo.messaging 
release added to global requirements. Can someone advise on the issues I see.
  2014-06-30T15:28:06   pcm__: there are issues adding oslo 
pre-releases to the mirror right now - we're working on a solution ... so 
you're not alone at least :)
  2014-06-30T15:29:02   mordred: Jenkins failed saying that it could not 
find the release, but it is available.
  2014-06-30T15:29:31   pcm__: mordred: is the fix to remove the 
check for --no-use-wheel in the check-requirements-integration-dsvm ?
  2014-06-30T15:29:55   bknudson: nope. it's to completely change our 
mirroring infrastructure :)

Presumably there's more information somewhere on what solution infra are
working on, but that's all I got ...

We knew this pre-release-with-wheels stuff was going to be a little
rocky, so this isn't surprising. Hopefully it'll get sorted out soon.

Mark.





Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-30 Thread Ben Nemec
I'm far from an oslo.messaging expert, but a few general thoughts below.

On 06/30/2014 02:34 PM, Alexei Kornienko wrote:
> Hello,
> 
> 
>> My understanding is that your analysis is mostly based on running a
>> profiler against the code. Network operations can be bottlenecked in
>> other places.
>>
>> You compare 'simple script using kombu' with 'script using
>> oslo.messaging'. You don't compare script using oslo.messaging before
>> refactoring and 'after that. The latter would show whether refactoring
>> was worth the effort. Your test shows that oslo.messaging performance
>> sucks, but it's not definite that hotspots you've revealed, once
>> fixed, will show huge boost.
>>
>> My concern is that it may turn out that once all the effort to
>> refactor the code is done, we won't see major difference. So we need
>> base numbers, and performance tests would be a great helper here.
>>
> 
> It's really sad for me to see so little faith in what I'm saying.
> The test I've done using the plain kombu driver was needed exactly to check
> that network is not the bottleneck for messaging performance.
> If you don't believe in my performance analysis we could ask someone else
> to do their own research and provide results.

The problem is that extremely simple test cases are often not
representative of overall performance, so comparing a purpose-built test
doing a single thing as fast as possible to a full library that has to
be able to handle all of OpenStack's messaging against every supported
back end isn't sufficient on its own to convince me that there is a
"rewrite all the things" issue here.

> 
> The problem with the refactoring I'm planning is that it's not a minor
> refactoring that can be applied in one patch; it's the whole library
> rewritten from scratch.

Which is exactly why we want to make sure it's something that needs to
be done before heading down that path.  I know I've wasted more time
than I'd like to admit optimizing the wrong code paths, only to find
that my changes made a .1% difference because I was mistaken about what
the bottleneck was.

Add to that the fact that we're _just_ completing the migration to
oslo.messaging in the first place and I hope you can understand why no
one wants to undertake another massive, possibly compatibility breaking,
refactoring unless we're absolutely certain it's the only way to address
the performance limitations of the existing code.

> The existing messaging code was written a long, long time ago (in a galaxy
> far, far away maybe?) and it was copy-pasted directly from nova.
> It was not built as a library and it was never intended to be used outside
> of nova.

This isn't really true anymore.  The oslo.messaging code underwent
significant changes in the move from the incubator rpc module to the
oslo.messaging library.  One of the major points of emphasis in all Oslo
graduations is to make sure the new lib has a proper API and isn't just
a naive copy-paste of the existing code.

> Some parts of it cannot even work normally because it was not designed to
> work with drivers like zeromq (matchmaker stuff).
> 
> The reason I've raised this question on the mailing list was to get some
> agreement about future plans of oslo.messaging development and start
> working on it in coordination with the community.
> For now I don't see any action plan emerging from it. I would like to see
> us bringing more constructive ideas about what should be done.
> 
> If you think that the first action should be profiling, let's discuss how
> it should be implemented (because it works for me just fine on my local PC).
> I guess we'll need to define some basic scenarios that would show us
> overall performance of the library.
> There are a lot of questions that should be answered to implement this:
> Where such tests would run (jenking, local PC, devstack VM)?
> How such scenarios should look like?
> How do we measure performance (cProfile, etc.)?
> How do we collect results?
> How do we analyze results to find bottlenecks?
> etc.
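
As a strawman for the cProfile question above, a harness as small as this
could be the starting point (send_messages() is a placeholder for whatever
scenario gets agreed on, not oslo.messaging code):

    import cProfile
    import pstats

    def send_messages():
        # Placeholder workload: a real scenario would drive N rpc.call()
        # round-trips against a local rabbit/qpid/zmq backend.
        sum(i * i for i in range(100000))

    profiler = cProfile.Profile()
    profiler.enable()
    send_messages()
    profiler.disable()

    # Top 10 hotspots by cumulative time; the output could be collected
    # as a plain-text artifact from a Jenkins job or a devstack VM run.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)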
> 
> Another option would be to spend some of my free time implementing the
> mentioned refactoring (as I see it) and show you the results of performance
> testing compared with existing code.
> The only problem with such approach is that my code won't be oslo.messaging
> and it won't be accepted by community. It may be drop in base for v2.0 but
> I'm afraid this won't be acceptable either.
> 
> Regards,
> Alexei Kornienko
> 
> 
> 2014-06-30 17:51 GMT+03:00 Gordon Sim :
> 
>> On 06/30/2014 12:22 PM, Ihar Hrachyshka wrote:
>>
>>  Alexei Kornienko wrote:

> Some performance tests may be introduced but they would be more
> like functional tests since they require setup of actual
> messaging server (rabbit, etc.).
>

>>> Yes. I think we already have some. F.e.
>>> tests/drivers/test_impl_qpid.py attempts to use local Qpid server
>>> (backing up to fake server if it's not available).
>>>
>>
>> I always get failures when there is a real qpidd service listening on t

[openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-06-30 Thread Boris Pavlovic
Hi all,

Specs are an interesting idea that may be really useful when you need to
discuss large topics:
1) work on API
2) Large refactoring
3) Large features
4) Performance, scale, ha, security issues that requires big changes

And I really dislike the idea of adding a spec for every patch, especially
when changes (features) are small, don't affect too much, and are optional.
It really kills OpenStack, and will drastically slow down the process of
contribution and reduce the number of contributors.

Thanks.


Best regards,
Boris Pavlovic


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-06-30 Thread Franck Yelles
Hi Jay,

Couple of points.

I support the fact that we need to define what "success" is.
I believe that the metrics that should be used are "Voted +1" and
"Skipped".
In certain valid cases, though, I would say that "Voted -1" is really
mostly a metric of the bad health of a CI.
Most of the -1 votes are due to environment issues, configuration
problems, etc.
In my case, the -1 votes are cast manually since I want to avoid creating
extra work for the developers.

What are some possible solutions?

On the Jenkins side, I think we could develop a script that will parse the
result HTML file.
Jenkins would then vote (+1, 0, -1) on behalf of the 3rd party CI.
- It would prevent abusive +1 votes
- If the result HTML is empty, it would indicate the CI's health is bad
- If all the results are failing, it would also indicate that the CI's
health is bad
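
Something like this, roughly (a Python sketch; the file name and the
PASS/FAIL markers are placeholders for whatever the real job publishes):

#!/usr/bin/env python
# Sketch: decide a Gerrit vote from a CI result page.
# The path and the PASS/FAIL markers are illustrative.
import re
import sys

def decide_vote(path):
    try:
        with open(path) as f:
            text = f.read()
    except IOError:
        return 0   # no result page at all: CI health problem, don't vote
    if not text.strip():
        return 0   # empty page: CI health problem, don't vote
    if re.search(r'\bFAIL(ED)?\b', text):
        return -1  # real test failures: vote -1
    if re.search(r'\bPASS(ED)?\b', text):
        return 1   # only passes found: vote +1
    return 0       # nothing recognizable: treat as a CI problem

if __name__ == '__main__':
    print(decide_vote(sys.argv[1]))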


Franck


On Mon, Jun 30, 2014 at 1:22 PM, Jay Pipes  wrote:

> Hi Stackers,
>
> Some recent ML threads [1] and a hot IRC meeting today [2] brought up some
> legitimate questions around how a newly-proposed Stackalytics report page
> for Neutron External CI systems [2] represented the results of an external
> CI system as "successful" or not.
>
> First, I want to say that Ilya and all those involved in the Stackalytics
> program simply want to provide the most accurate information to developers
> in a format that is easily consumed. While there need to be some changes in
> how data is shown (and the wording of things like "Tests Succeeded"), I
> hope that the community knows there isn't any ill intent on the part of
> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
> conversation civil -- we're all working towards the same goals of
> transparency and accuracy. :)
>
> Alright, now, Anita and Kurt Taylor were asking a very poignant question:
>
> "But what does CI tested really mean? just running tests? or tested to
> pass some level of requirements?"
>
> In this nascent world of external CI systems, we have a set of issues that
> we need to resolve:
>
> 1) All of the CI systems are different.
>
> Some run Bash scripts. Some run Jenkins slaves and devstack-gate scripts.
> Others run custom Python code that spawns VMs and publishes logs to some
> public domain.
>
> As a community, we need to decide whether it is worth putting in the
> effort to create a single, unified, installable and runnable CI system, so
> that we can legitimately say "all of the external systems are identical,
> with the exception of the driver code for vendor X being substituted in the
> Neutron codebase."
>
> If the goal of the external CI systems is to produce reliable, consistent
> results, I feel the answer to the above is "yes", but I'm interested to
> hear what others think. Frankly, in the world of benchmarks, it would be
> unthinkable to say "go ahead and everyone run your own benchmark suite",
> because you would get wildly different results. A similar problem has
> emerged here.
>
> 2) There is no mediation or verification that the external CI system is
> actually testing anything at all
>
> As a community, we need to decide whether the current system of
> self-policing should continue. If it should, then language on reports like
> [3] should be very clear that any numbers derived from such systems should
> be taken with a grain of salt. Use of the word "Success" should be avoided,
> as it has connotations (in English, at least) that the result has been
> verified, which is simply not the case as long as no verification or
> mediation occurs for any external CI system.
>
> 3) There is no clear indication of what tests are being run, and therefore
> there is no clear indication of what "success" is
>
> I think we can all agree that a test has three possible outcomes: pass,
> fail, and skip. The result of a test suite run is therefore nothing more
> than the aggregation of which tests passed, which failed, and which were
> skipped.
>
> As a community, we must document, for each project, the expected set of
> tests that must be run for each patch merged into the project's source
> tree. This documentation should be discoverable so that reports like [3]
> can be crystal-clear on what the data shown actually means. The report is
> simply displaying the data it receives from Gerrit. The community needs to
> be proactive in saying "this is what is expected to be tested." This alone
> would allow the report to give information such as "External CI system ABC
> performed the expected tests. X tests passed. Y tests failed. Z tests were
> skipped." Likewise, it would also make it possible for the report to give
> information such as "External CI system DEF did not perform the expected
> tests.", which is excellent information in and of itself.
>
> ===
>
> In thinking about the likely answers to the above questions, I believe it
> would be prudent to change the Stackalytics report in question [3] in the
> following ways:
>
> a. Change the "Success %" column header to "% Reported +1 Votes"

Re: [openstack-dev] [neutron] [third-party] Neutron 3rd Party CI status dashboard

2014-06-30 Thread Sukhdev Kapur
Sorry, I accidentally hit the wrong key and the message went out.

I was making a point about the definition of success. I thought the debate
in the meeting was very productive - when a CI posts a +1, that is success,
and when a CI posts a -1 (or no vote with a comment), that is also a
success - as this reflects that the CI is doing what it is supposed to do.

So, when it comes to stackalytics, it is more critical to show whether a
given CI is operational or not - and for how long.
Another thing we can debate is how to present the +1/-1 votes by a given CI
- unless we have some benchmark, it will be hard to judge success/failure.

So, I am of the opinion that, initially, we only report on the operational
status and duration of the CIs, plus a counter of +1 and -1 votes over a
period of time. For example, looking at Arista CI, it has cast 7,958
votes so far and has been operational for the past 6 months. This
information is not available anywhere - hence, presenting this kind of
information on a dashboard created by Ilya would be very useful to the
community as well as to the vendors.
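
To make that concrete, here is a sketch of the kind of summary I mean (the
vote records are assumed to have been pulled from Gerrit already; the names
and dates are made up):

# Sketch: summarize a CI account's voting record.
# Input: (timestamp, vote) pairs collected from a Gerrit query.
from collections import Counter

def summarize(votes):
    counts = Counter(v for _, v in votes)
    return {
        'operational_since': min(t for t, _ in votes),
        'last_seen': max(t for t, _ in votes),
        'plus_one': counts[1],
        'minus_one': counts[-1],
        'total': sum(counts.values()),
    }

print(summarize([('2014-01-03', 1), ('2014-06-29', -1), ('2014-06-30', 1)]))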

thoughts?

-Sukhdev





On Mon, Jun 30, 2014 at 12:49 PM, Sukhdev Kapur 
wrote:

> Well, Luke, this is a collaborative effort by everybody. Having these CI
> systems in place ensures that one person's code does not break another
> person's code and vice versa. Therefore, having these CI systems
> operational and voting 24x7 is a critical step in achieving this goal.
>
> However, the details as to how and what should be tested are definitely
> debatable, and the team has done an excellent job in converging on that.
>
> Now, as to the issue at hand which Anita is describing, I attended the
> meeting this morning and was very pleased with the debate that took place
> and the definition as to Success
>
>
> On Mon, Jun 30, 2014 at 12:27 PM, Luke Gorrie  wrote:
>
>> On 30 June 2014 21:08, Anita Kuno  wrote:
>>
>>> I am disappointed to realize that Ilya (or stackalytics, I don't know
>>> where this is coming from) is unwilling to cease making up definitions
>>> of success for third party ci systems to allow the openstack community
>>> to arrive at its own definition.
>>>
>>
>> There is indeed a risk that the new dashboards won't give a meaningful
>> view of whether a 3rd party CI is voting correctly or not.
>>
>>  However, there is an elephant in the room and a much more important
>> problem:
>>
>> To measure how accurately a CI is voting says much more about a driver
>> author's "Gerrit-fu" ability to operate a CI system than it does about
>> whether the code they have contributed to OpenStack actually works, and the
>> latter is what is actually important.
>>
>> To my mind the whole 3rd party testing discussion should refocus on
>> helping developers maintain good working code and less on waving "you will
>> be kicked out of OpenStack if you don't keep your swarm of nebulous daemons
>> running 24/7".
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Latest revision of log guidelines

2014-06-30 Thread Sean Dague
The latest revision of the logging guidelines is available in the
nova-specs repo - https://review.openstack.org/#/c/91446/

I tried to integrate the comments from the last go-around into that. I
do think we're at a state where, if we believe this is a set of first-pass
guidelines that we're good with, we take them as they are and move
forward with fixes during this cycle. I suspect things such as the AUDIT
removal are actually doable on most projects, and would simplify things.

I've started trying to fix a few of these in nova based on the
assumption we're going to move forward here.
https://review.openstack.org/#/c/103535/ - which removes one of the
secret decoder rings merged today.

http://logs.openstack.org/35/103535/3/gate/gate-tempest-dsvm-full/20e8a0e/logs/screen-n-cpu.txt.gz#_2014-06-30_16_56_32_723

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-06-30 Thread Jay Pipes

Hi Stackers,

Some recent ML threads [1] and a hot IRC meeting today [2] brought up 
some legitimate questions around how a newly-proposed Stackalytics 
report page for Neutron External CI systems [2] represented the results 
of an external CI system as "successful" or not.


First, I want to say that Ilya and all those involved in the 
Stackalytics program simply want to provide the most accurate 
information to developers in a format that is easily consumed. While 
there need to be some changes in how data is shown (and the wording of 
things like "Tests Succeeded"), I hope that the community knows there 
isn't any ill intent on the part of Mirantis or anyone who works on 
Stackalytics. OK, so let's keep the conversation civil -- we're all 
working towards the same goals of transparency and accuracy. :)


Alright, now, Anita and Kurt Taylor were asking a very poignant question:

"But what does CI tested really mean? just running tests? or tested to 
pass some level of requirements?"


In this nascent world of external CI systems, we have a set of issues 
that we need to resolve:


1) All of the CI systems are different.

Some run Bash scripts. Some run Jenkins slaves and devstack-gate 
scripts. Others run custom Python code that spawns VMs and publishes 
logs to some public domain.


As a community, we need to decide whether it is worth putting in the 
effort to create a single, unified, installable and runnable CI system, 
so that we can legitimately say "all of the external systems are 
identical, with the exception of the driver code for vendor X being 
substituted in the Neutron codebase."


If the goal of the external CI systems is to produce reliable, 
consistent results, I feel the answer to the above is "yes", but I'm 
interested to hear what others think. Frankly, in the world of 
benchmarks, it would be unthinkable to say "go ahead and everyone run 
your own benchmark suite", because you would get wildly different 
results. A similar problem has emerged here.


2) There is no mediation or verification that the external CI system is 
actually testing anything at all


As a community, we need to decide whether the current system of 
self-policing should continue. If it should, then language on reports 
like [3] should be very clear that any numbers derived from such systems 
should be taken with a grain of salt. Use of the word "Success" should 
be avoided, as it has connotations (in English, at least) that the 
result has been verified, which is simply not the case as long as no 
verification or mediation occurs for any external CI system.


3) There is no clear indication of what tests are being run, and 
therefore there is no clear indication of what "success" is


I think we can all agree that a test has three possible outcomes: pass, 
fail, and skip. The result of a test suite run is therefore nothing
more than the aggregation of which tests passed, which failed, and which
were skipped.
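
(In code, that summary is nothing more than a three-way counter. A sketch,
with illustrative outcome labels that would really come from subunit or
JUnit output:)

# Sketch: aggregate a test suite run into pass/fail/skip counts.
from collections import Counter

results = [('test_boot', 'pass'), ('test_resize', 'fail'),
           ('test_live_migrate', 'skip'), ('test_attach', 'pass')]
summary = Counter(outcome for _, outcome in results)
print("%d passed, %d failed, %d skipped" %
      (summary['pass'], summary['fail'], summary['skip']))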


As a community, we must document, for each project, the expected set of
tests that must be run for each patch merged into the project's source
tree. This documentation should be discoverable so that reports
like [3] can be crystal-clear on what the data shown actually means. The 
report is simply displaying the data it receives from Gerrit. The 
community needs to be proactive in saying "this is what is expected to 
be tested." This alone would allow the report to give information such 
as "External CI system ABC performed the expected tests. X tests passed. 
Y tests failed. Z tests were skipped." Likewise, it would also make it 
possible for the report to give information such as "External CI system 
DEF did not perform the expected tests.", which is excellent information 
in and of itself.


===

In thinking about the likely answers to the above questions, I believe 
it would be prudent to change the Stackalytics report in question [3] in 
the following ways:


a. Change the "Success %" column header to "% Reported +1 Votes"
b. Change the phrase " Green cell - tests ran successfully, red cell - 
tests failed" to "Green cell - System voted +1, red cell - System voted -1"


and then, when we have more and better data (for example, # tests 
passed, failed, skipped, etc), we can provide more detailed information 
than just "reported +1" or not.


Thoughts?

Best,
-jay

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/038933.html
[2] 
http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-30-18.01.log.html

[3] http://stackalytics.com/report/ci/neutron/7

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] checking success response in clients

2014-06-30 Thread David Kranz
We approved 
https://github.com/openstack/qa-specs/blob/master/specs/client-checks-success.rst 
which recommends that checking of correct success codes be moved to the 
tempest clients. This has been done for the image tests but not the others
yet. New client/test code coming in should definitely be doing the
checks in the client rather than in the test bodies. Here is the image
change for reference: https://review.openstack.org/#/c/101310/
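
The pattern looks roughly like this (a sketch only; the class and endpoint
are illustrative rather than copied from the tempest tree, and it assumes
the expected_success() helper described in the spec):

# Sketch: the success-code check lives in the client method itself,
# so individual test bodies don't have to assert it.
class ImagesClient(object):
    def __init__(self, rest_client):
        self.rest = rest_client

    def list_images(self):
        resp, body = self.rest.get('v2/images')
        # Fail here if the API returned anything but 200.
        self.rest.expected_success(200, resp.status)
        return body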


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Neutron 3rd Party CI status dashboard

2014-06-30 Thread Sukhdev Kapur
Well, Luke, this is a collaborative effort by everybody. Having these CI
systems in place ensures that one person's code does not break another
person's code and vice versa. Therefore, having these CI systems
operational and voting 24x7 is a critical step in achieving this goal.

However, the details as to how and what should be tested are definitely
debatable, and the team has done an excellent job in converging on that.

Now, as to the issue at hand which Anita is describing, I attended the
meeting this morning and was very pleased with the debate that took place
and the definition as to Success


On Mon, Jun 30, 2014 at 12:27 PM, Luke Gorrie  wrote:

> On 30 June 2014 21:08, Anita Kuno  wrote:
>
>> I am disappointed to realize that Ilya (or stackalytics, I don't know
>> where this is coming from) is unwilling to cease making up definitions
>> of success for third party ci systems to allow the openstack community
>> to arrive at its own definition.
>>
>
> There is indeed a risk that the new dashboards won't give a meaningful
> view of whether a 3rd party CI is voting correctly or not.
>
> However, there is an elephant in the room and a much more important
> problem:
>
> To measure how accurately a CI is voting says much more about a driver
> author's "Gerrit-fu" ability to operate a CI system than it does about
> whether the code they have contributed to OpenStack actually works, and the
> latter is what is actually important.
>
> To my mind the whole 3rd party testing discussion should refocus on
> helping developers maintain good working code and less on waving "you will
> be kicked out of OpenStack if you don't keep your swarm of nebulous daemons
> running 24/7".
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ovs-neutron-agent wipes out all flows on startup

2014-06-30 Thread Amir Sadoughi
Indeed, a blueprint has been filed (on Launchpad, not neutron-specs) on this
already [0], but there has been no work on it as far as I can tell. I think it
would be a worthwhile contribution.

Amir

[0] https://blueprints.launchpad.net/neutron/+spec/neutron-agent-soft-restart

From: Paul Ward [wpw...@linux.vnet.ibm.com]
Sent: Monday, June 30, 2014 2:11 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [neutron] ovs-neutron-agent wipes out all flows on 
startup

The current design for ovs-neutron-agent is that it will wipe out all
flows configured on the system when it starts up, recreating them for
each neutron port it's aware of.  This has the not-so-desirable side
effect that there's a temporary hiccup in network connectivity for the
VMs on the host.

My questions to the list: Is there a reason it was designed this way
(other than "Everything on the system must be managed by OpenStack")?
Is there ongoing work to address this or would it be a worthwhile
contribution from our side?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-30 Thread Alexei Kornienko
Hello,


> My understanding is that your analysis is mostly based on running a
> profiler against the code. Network operations can be bottlenecked in
> other places.
>
> You compare a 'simple script using kombu' with a 'script using
> oslo.messaging'. You don't compare a script using oslo.messaging before
> the refactoring with one after it. The latter would show whether the
> refactoring was worth the effort. Your test shows that oslo.messaging
> performance sucks, but it's not definite that the hotspots you've
> revealed, once fixed, will show a huge boost.
>
> My concern is that it may turn out that once all the effort to
> refactor the code is done, we won't see a major difference. So we need
> base numbers, and performance tests would be a great helper here.
>

It's really sad for me to see so little faith in what I'm saying.
The test I did using the plain kombu driver was needed exactly to check
that the network is not the bottleneck for messaging performance.
If you don't believe my performance analysis, we could ask someone else
to do their own research and provide results.

The problem with the refactoring I'm planning is that it's not a minor
refactoring that can be applied in one patch; it's the whole library
rewritten from scratch.
Existing messaging code was written a long, long time ago (in a galaxy far,
far away maybe?) and it was copy-pasted directly from nova.
It was not built as a library and it was never intended to be used outside
of nova.
Some parts of it cannot even work normally because it was not designed to
work with drivers like zeromq (matchmaker stuff).

The reason I've raised this question on the mailing list was to get some
agreement about future plans for oslo.messaging development and to start
working on it in coordination with the community.
For now I don't see any action plan emerging from it. I would like to see
us bringing more constructive ideas about what should be done.

If you think that the first action should be profiling, let's discuss how it
should be implemented (because it works just fine for me on my local PC).
I guess we'll need to define some basic scenarios that would show us the
overall performance of the library.
There are a lot of questions that should be answered to implement this:
Where would such tests run (jenkins, local PC, devstack VM)?
What should such scenarios look like?
How do we measure performance (cProfile, etc.)?
How do we collect results?
How do we analyze results to find bottlenecks?
etc.
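
As a starting point I'd imagine something as simple as the following (a
sketch; run_scenario() is a placeholder for a real loop of RPC calls/casts
against a live broker):

# Sketch: profile one messaging scenario and print the top hotspots.
import cProfile
import pstats

def run_scenario():
    # placeholder: set up a transport, send N messages, wait for replies
    sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
run_scenario()
profiler.disable()
pstats.Stats(profiler).sort_stats('cumulative').print_stats(20)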

Another option would be to spend some of my free time implementing the
mentioned refactoring (as I see it) and showing you the results of performance
testing compared with the existing code.
The only problem with such an approach is that my code won't be oslo.messaging
and it won't be accepted by the community. It may be a drop-in base for v2.0,
but I'm afraid this won't be acceptable either.

Regards,
Alexei Kornienko


2014-06-30 17:51 GMT+03:00 Gordon Sim :

> On 06/30/2014 12:22 PM, Ihar Hrachyshka wrote:
>> Alexei Kornienko wrote:
>>> Some performance tests may be introduced but they would be more
>>> like functional tests since they require setup of an actual
>>> messaging server (rabbit, etc.).
>>
>> Yes. I think we already have some. F.e.
>> tests/drivers/test_impl_qpid.py attempts to use a local Qpid server
>> (backing up to a fake server if it's not available).
>
> I always get failures when there is a real qpidd service listening on the
> expected port. Does anyone else see this?
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Neutron 3rd Party CI status dashboard

2014-06-30 Thread Anita Kuno
On 06/30/2014 03:27 PM, Luke Gorrie wrote:
> On 30 June 2014 21:08, Anita Kuno  wrote:
> 
>> I am disappointed to realize that Ilya (or stackalytics, I don't know
>> where this is coming from) is unwilling to cease making up definitions
>> of success for third party ci systems to allow the openstack community
>> to arrive at its own definition.
>>
> 
> There is indeed a risk that the new dashboards won't give a meaningful view
> of whether a 3rd party CI is voting correctly or not.
> 
> However, there is an elephant in the room and a much more important problem:
> 
> To measure how accurately a CI is voting says much more about a driver
> author's "Gerrit-fu" ability to operate a CI system than it does about
> whether the code they have contributed to OpenStack actually works, and the
> latter is what is actually important.
> 
> To my mind the whole 3rd party testing discussion should refocus on helping
> developers maintain good working code and less on waving "you will be
> kicked out of OpenStack if you don't keep your swarm of nebulous daemons
> running 24/7".
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Well I am glad you have reached that conclusion, Luke.

Given that I gave you months of my time with that exact approach and you
squandered it away, claiming I wasn't doing enough to help you, I am
glad you have finally reached a productive place.

I hope your direction produces results.

Thanks Luke,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Neutron 3rd Party CI status dashboard

2014-06-30 Thread Luke Gorrie
On 30 June 2014 21:08, Anita Kuno  wrote:

> I am disappointed to realize that Ilya (or stackalytics, I don't know
> where this is coming from) is unwilling to cease making up definitions
> of success for third party ci systems to allow the openstack community
> to arrive at its own definition.
>

There is indeed a risk that the new dashboards won't give a meaningful view
of whether a 3rd party CI is voting correctly or not.

However, there is an elephant in the room and a much more important problem:

To measure how accurately a CI is voting says much more about a driver
author's "Gerrit-fu" ability to operate a CI system than it does about
whether the code they have contributed to OpenStack actually works, and the
latter is what is actually important.

To my mind the whole 3rd party testing discussion should refocus on helping
developers maintain good working code and less on waving "you will be
kicked out of OpenStack if you don't keep your swarm of nebulous daemons
running 24/7".
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ovs-neutron-agent wipes out all flows on startup

2014-06-30 Thread Kyle Mestery
On Mon, Jun 30, 2014 at 2:11 PM, Paul Ward  wrote:
> The current design for ovs-neutron-agent is that it will wipe out all flows
> configured on the system when it starts up, recreating them for each neutron
> port it's aware of.  This has the not-so-desirable side effect that there's a
> temporary hiccup in network connectivity for the VMs on the host.
>
> My questions to the list: Is there a reason it was designed this way (other
> than "Everything on the system must be managed by OpenStack")? Is there
> ongoing work to address this or would it be a worthwhile contribution from
> our side?
>
This was actually the result of a bug fix in Juno-1 [1]. As reported
by the TripleO folks, having the agent default to setting up a
"NORMAL" flow may have allowed VMs to talk to each other,
but it was also a huge security hole. I'm curious what ideas you have
around this, though.
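
(For context, the insecure default that the fix removed was essentially a
bridge-wide NORMAL action, along the lines of this illustrative snippet
rather than the agent's actual code:)

import subprocess

# Illustrative only: a catch-all NORMAL flow turns br-int into a plain
# learning switch, bypassing any per-port isolation (the security hole).
subprocess.check_call(
    ['ovs-ofctl', 'add-flow', 'br-int', 'priority=0,actions=normal'])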

Thanks,
Kyle

[1] https://bugs.launchpad.net/tripleo/+bug/1290486 and
https://bugs.launchpad.net/neutron/+bug/1324703

>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [third-party] An update on third-party CI systems in Neutron

2014-06-30 Thread Kyle Mestery
Hi folks:

I wanted to give an update to people following the third-party CI
conversation for Neutron. I sent an email out with a status update in
June [1]. In that email, I had indicated that in-tree plugins and
drivers needed to have functioning CI running by Juno-2. That is still
the case, and I've seen some great work and efforts by many people to
make this happen. But for systems which are not functional by Juno-2,
we will look to remove code in Juno-3. The etherpad tracking this
effort [3], while not perhaps the ideal place, is where we're doing
this now. I encourage people to update this as their CI systems become
functional.

I encourage anyone operating a CI system for Neutron (or other
OpenStack projects) to also join the weekly meeting to discuss issues,
share ideas, and help each other out [2]. If you're having issues, this
meeting, the ML, and #openstack-[infra,qa] are the best places to
reach out for help.

Thanks!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037665.html
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty
[3] https://etherpad.openstack.org/p/ZLp9Ow3tNq

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] ovs-neutron-agent wipes out all flows on startup

2014-06-30 Thread Paul Ward
The current design for ovs-neutron-agent is that it will wipe out all 
flows configured on the system when it starts up, recreating them for 
each neutron port it's aware of.  This has the not-so-desirable side
effect that there's a temporary hiccup in network connectivity for the
VMs on the host.


My questions to the list: Is there a reason it was designed this way 
(other than "Everything on the system must be managed by OpenStack")? 
Is there ongoing work to address this or would it be a worthwhile 
contribution from our side?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Neutron 3rd Party CI status dashboard

2014-06-30 Thread Anita Kuno
On 06/29/2014 07:59 PM, Anita Kuno wrote:
> On 06/29/2014 07:43 PM, Anita Kuno wrote:
>> On 06/29/2014 03:25 PM, Ilya Shakhat wrote:
>>> Hi!
>>>
>>> During the last couple of weeks there has been increasing demand for
>>> tracking 3rd-party CI statuses. We at Stackalytics decided to follow the
>>> trend and (with some inspiration from Salvatore's proposal) implemented a
>>> report that shows a summary of external CI status. The initial version is
>>> available for Neutron - http://stackalytics.com/report/ci/neutron/7
>>>
>>> The report shows a summary of all CI jobs during a specified period of time,
>>> including:
>>>  * stats of runs on merged patch sets:
>>> - total number of runs
>>> - success rate (success to total ratio)
>>> - time of the latest run
>>> - last test result
>>>   * stats for all patch sets (the same set as for merged)
>>>   * last test results for every merged patch set grouped by days (useful to
>>> see how different CIs correlate with each other and how often they run)
>>>
>>> Under "merged patch set" report means "the last patch set in the merged
>>> change request", thus it is almost the same as the trunk code. CI
>>> configuration is taken from DriverLog's default data.
>>> Standard Stackalytics screen is also available for CIs -
>>> http://stackalytics.com/?metric=ci, including votes breakdown and activity
>>> log.
>>>
>>> Since this is the first version there are some open questions:
>>>  * Currently the report shows results per CI id, but there are CIs that run
>>> tests against multiple drivers, and this case is not supported. What would
>>> be more useful: to get stats for a driver or for a CI?
>>>  * Most CIs run tests when a patch set is posted. So even if a change request
>>> is merged within the selected time period, corresponding CI results may be
>>> missing.
>>>  * Patterns for non-voting CIs need to be verified. For example Cisco CI
>>> now runs 5 jobs, but DriverLog data still contains old data.
>>>
>>> Thanks,
>>> Ilya
>>>
>>> 2014-06-16 17:52 GMT+04:00 Salvatore Orlando :
>>>

 However, it would be great if we could start devising a solution for
 having "health" reports from the various CI systems.
 This report should report the following kind of information:
 - timestamp of last run
 - timestamp of last vote (a system might start job which then get aborted
 for CI infra problems)
 - % of success vs failures (not sure how important is that one but
 provides a metric of comparison with upstream jenkins)
 - % of disagreements with jenkins (this might allow us to quickly spot
 those CI systems which are randomly -1'ing patches)

 The percentage metrics might be taken over a 48 hours or 7 days interval,
 or both.
 Does this idea sound reasonable?


>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> Hi Ilya:
>>
>> I look forward to hearing more about this dashboard and ensuring you or
>> someone else associated with this dashboard are available for questions
>> at the third party meeting tomorrow:
>> https://wiki.openstack.org/wiki/Meetings/ThirdParty
>>
>> We missed you last week.
>>
>> Thanks Ilya,
>> Anita.
>>
> And one question I will have when we discuss this is regarding this
> statement: "Green cell - tests ran successfully,"
> 
> Currently we don't have community consensus around the use of the
> statement "tests ran successfully" regarding third party ci systems.
> This is a statement, you recall, that we had discussed at the third party
> meeting when we talked about driverlog.
> 
> 18:46:02  but what does CI tested really mean? just running
> tests? or tested to pass some level of requirements?
> http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-16-18.00.log.html
> 
> Using a statement to convey success prior to having a definition from
> the community about what the statement means will create confusion in
> the community and frustration from folks, including myself, who are
> willing to have the conversation about what it means, who feel
> circumvented by this second use of a phrase which implies a decided
> meaning where none yet exists. Please participate in conversations
> around the definition of phrases of success and failure regarding third
> party systems, and point to the logs where consensus is reached prior to
> its use in future.
> 
> In addition to attending the third party meeting, please attend the
> infra meeting or the qa meeting, and hopefully meetings that include
> programs that have interactions with third party ci systems including
> nova, neutron, and cinder (if there are other programs interacting with
> third party ci systems please attend the third party meeting so I know
> about you).
> 
> Thanks Ilya, I look forward to our future discussions.

[openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-06-30 Thread Kevin Benton
Hello all,

The subject of 3rd party CI voting responses came up in the 3rd-party IRC
meeting today.[1] We would like to get feedback from the larger dev
community on what acceptable response times are for third party CI systems.

As a maintainer of a small CI system that tends to get backed up during
milestone rush hours, it would be nice if we were allowed up to 12 hours.
However, as a developer this seems like too long to have to wait for the
results of a patch.

One option might be to peg the response time to the main CI response
assuming that it gets backed up in a similar manner.

What would be acceptable to the community?

1.
http://eavesdrop.openstack.org/meetings/third_party/2014/third_party.2014-06-30-18.01.html

-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-30 Thread Clint Byrum
Excerpts from Michael Kerrin's message of 2014-06-30 02:16:07 -0700:
> I am trying to finish off https://review.openstack.org/#/c/90134 - percona
> xtradb cluster for Debian-based systems.
> 
> I have read into this thread that I can error out on Red Hat systems when
> trying to install percona and tell them to use mariadb instead, as percona
> isn't supported there. Is this correct?
> 

Probably. But if CI for Fedora breaks as a result you'll need a solution
first.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSG][OSSN] Cinder SSH Pool will auto-accept SSH host signatures by default

2014-06-30 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Cinder SSH Pool will auto-accept SSH host signatures by default
- ---

### Summary ###
In OpenStack releases prior to Juno, the SSH connection pool used by
Cinder drivers to control SAN hosts will silently auto-accept SSH host
fingerprints. This potentially allows for a man in the middle attack
through the impersonation of a legitimate storage host.

### Affected Services / Software ###
Cinder, Icehouse, Havana, Grizzly, Folsom

### Discussion ###
Cinder drivers for controlling SAN hardware communicate with storage
hosts over SSH. To facilitate creation of these drivers, Cinder provides
a utility mechanism to manage pooled SSH connections. This connection
pool is using a policy that will silently accept the SSH fingerprint
of any unknown host when it first connects. However, it is not properly
maintaining the list of known hosts and will thus permit connections to a
host regardless of the SSH fingerprint presented. This impacts all
drivers built using the utility. At the time of writing these drivers
include, but may not be limited to:

- - Solaris ISCSI driver
- - HP LeftHand SAN ISCSI driver
- - Huawei OceanStor T series and Dorado series storage arrays
- - Dell EqualLogic Storage
- - IBM Storwize SVC

In the event that a malicious adversary has a point of presence on the
storage network, they could undermine network communications between
Cinder and the SAN host. Should an adversary manage to impersonate the
storage host, Cinder will silently accept the newly presented
fingerprint of the bogus host and allow the connection. This behaviour
constitutes a typical Man in the Middle attack that could intercept and
manipulate communications with the storage host, possibly leaking login
credentials.

If login credentials can be acquired, then direct interaction with the
legitimate storage host becomes possible. This could result in Cinder
volumes being accessed or modified to export compromised code and data
to other services.

The presence of this defect can be detected by initially connecting to a
storage host and then re-generating that host's local SSH details. Cinder
will still allow connections to the host despite its now modified
fingerprint. This is the default configuration.
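
For reference, the vulnerable pattern versus the strict one looks roughly
like this in paramiko (illustrative code, not the actual Cinder SSH pool
implementation; the host and credentials are placeholders):

import paramiko

client = paramiko.SSHClient()

# Vulnerable: silently trust whatever fingerprint the host presents.
# client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Strict: only connect to hosts whose fingerprints are already known.
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.RejectPolicy())
client.connect('san.example.com', username='admin', password='secret')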

### Recommended Actions ###
Deployers should pay attention to the SSH interface between the Cinder
driver and the SAN host and take appropriate measures to defend the
storage network. These measures could include physical network isolation
or placing an Intrusion Detection System on the network. The IDS should
detect attacks such as ARP table poisoning, DHCP spoofing or DNS forgery
that could be used to impersonate a SAN host and enact a Man in the
Middle attack.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0019
Original LaunchPad Bug : https://bugs.launchpad.net/cinder/+bug/1320056
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTsbC4AAoJEJa+6E7Ri+EVK44IAKmfcak6QIBtd9QT4bC013/8
083WqUa6rnhX7jGtRkwm6lELVDw5Vk8jUpNYqnu7W7X+7+q24S4R/52UrxJBE8f7
dkxIcTS6Nx9qxGeoVVWFa4QLEuuG82K0PYhyEasbn7m8e672QeqLVHxUzAH7L1Yg
hyXyZvxpN3bz38PpOKjf2Sj4lG3g1DZkZTL1cW2HIla9ZFiqZ9IMa5f2FItrgLEJ
epLtsEhkhM/M/Nk9Qqbvvn0Ir3WTFN0l43hGJP4iF+frEsSewZqDXwNafVXl8k9v
4He6I1gpR2bpmYGIv4Bd+9jnjuiujFUfIIZKQg4LvNpH0FB+DqvCGUS5A0D1WjU=
=SGiN
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-06-30 Thread Luke Gorrie
I have a really early sketch of this project on Github now.

shellci - OpenStack 3rd party CI in 100 lines of shell
https://github.com/SnabbCo/shellci

This is not finished yet but I'll try to use it for the new Neutron mech
driver that I want to contribute to Juno.

Ideas and encouragement welcome :-).
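
The core loop is roughly the following (sketched here in Python for
readability; the real thing is shell, and the account, host and
run-tests.sh job script are placeholders):

# Sketch: watch the Gerrit event stream, run the job, vote.
import json
import subprocess

GERRIT = ['ssh', '-p', '29418', 'ci-account@review.openstack.org', 'gerrit']

stream = subprocess.Popen(GERRIT + ['stream-events'],
                          stdout=subprocess.PIPE)
for line in iter(stream.stdout.readline, ''):
    event = json.loads(line)
    if event.get('type') != 'patchset-created':
        continue
    if event['change']['project'] != 'openstack/neutron':
        continue
    ok = subprocess.call(['./run-tests.sh', event['patchSet']['ref']]) == 0
    subprocess.call(GERRIT + ['review', '--verified', '+1' if ok else '-1',
                              '%s,%s' % (event['change']['number'],
                                         event['patchSet']['number'])])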

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-06-30 Thread Luke Gorrie
On 30 June 2014 19:37, Asselin, Ramy  wrote:

>  Not sure if these are “minimalist” but at least they set up
> automagically, so you don’t need to do it from scratch:
>

I'm aiming to do exactly the opposite of this i.e. no automagic.

My experience has been that the really heavy-duty CI setups are too
difficult to understand and troubleshoot for end-users like me. I spent a
week working on installing Jay's tools, with very generous help from Jay
himself, and for me it was a dead end [*]. I now consider these to be tools
for openstack-infra insiders rather than guys like me who are maintaining
tiny drivers.

I don't know if other people have had a different experience from me. It
does seem like 3rd party CI is broadly a lot more problematic than
advertised. I have seen only relatively few frank experience reports from
people operating CIs and I suspect there is a rich and interesting untold
story there :-). (Maybe it would be interesting to do a survey some day.)

[*] http://openstack-dev-openstack.git.net/2014-March/030022.html & other
posts in that thread.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Meeting this week?

2014-06-30 Thread melanie witt
On Jun 29, 2014, at 17:01, Michael Still  wrote:

> Hi. The meeting this week would be on the 3rd of July, which I assume
> means that many people will be out of the office. Do people think its
> worth running the meeting or shall we give this week a miss?

July 3 is a company holiday where I am, so I'm also +1 on skipping this week.


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-06-30 Thread Asselin, Ramy
Not sure if these are “minimalist” but at least they set up automagically, so
you don’t need to do it from scratch:

You can check out these repos which automate the 3rd party ci setup:

Jay Pipe’s solution:
https://github.com/jaypipes/os-ext-testing
https://github.com/jaypipes/os-ext-testing-data

My WIP fork of Jay Pipe’s that also includes Nodepool:
https://github.com/rasselin/os-ext-testing
https://github.com/rasselin/os-ext-testing-data

Ramy


From: Luke Gorrie [mailto:l...@tail-f.com]
Sent: Monday, June 30, 2014 9:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

On 30 June 2014 17:34, Kyle Mestery  wrote:
It would be great to get you to join the 3rd Party meeting [1] in
#openstack-meeting at 1800UTC to discuss this. Can you make it today
Luke?

Yes, I'll be there.

Currently I'm looking into "the simplest 3rd party CI that could possibly work" 
which would be less sophisticated than devstack-gate but easier for the 
operator to understand and be responsible for. (Or: "CI in 100 lines of shell" 
with no Jenkins or other large pieces of software to install.)

I'm on #openstack-neutron if anybody wants to kick around ideas or share 
scripts that I can borrow ideas from (thanks ODL gang for doing this already).

Cheers,
-Luke


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-06-30 Thread Sandy Walsh
woot! 



From: Mike Bayer [mba...@redhat.com]
Sent: Monday, June 30, 2014 1:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo] Openstack and SQLAlchemy

Hi all -

For those who don't know me, I'm Mike Bayer, creator/maintainer of
SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
I've become a full time Openstack developer working for Red Hat, given
the task of carrying Openstack's database integration story forward.
To that extent I am focused on the oslo.db project which going forward
will serve as the basis for database patterns used by other Openstack
applications.

I've summarized what I've learned from the community over the past month
in a wiki entry at:

https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy

The page also refers to an ORM performance proof of concept which you
can see at https://github.com/zzzeek/nova_poc.

The goal of this wiki page is to publish to the community what's come up
for me so far, to get additional information and comments, and finally
to help me narrow down the areas in which the community would most
benefit by my contributions.

I'd like to get a discussion going here, on the wiki, on IRC (where I am
on freenode with the nickname zzzeek) with the goal of solidifying the
blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
on as well as recruiting contributors to help in all those areas.  I
would welcome contributors on the SQLAlchemy / Alembic projects directly
as well, as we have many areas that are directly applicable to Openstack.

I'd like to thank Red Hat and the Openstack community for welcoming me
on board and I'm looking forward to digging in more deeply in the coming
months!

- mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-30 Thread Jordan OMara

On 25/06/14 14:32 -0400, Jordan OMara wrote:

On 25/06/14 18:20 +, Carlino, Chuck (OpenStack TripleO, Neutron) wrote:

Is $179/day the expected rate?

Thanks,
Chuck


Yes, that's the best rate available from both of the downtown
(walkable) hotels.


Just an update that we only have a few rooms left in our block at the
Marriott. Please book ASAP if you haven't already.
--

Jordan O'Mara 
Red Hat Engineering, Raleigh 


pgplWdMerEsnR.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Openstack and SQLAlchemy

2014-06-30 Thread Mike Bayer
Hi all -

For those who don't know me, I'm Mike Bayer, creator/maintainer of
SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
I've become a full time Openstack developer working for Red Hat, given
the task of carrying Openstack's database integration story forward.  
To that extent I am focused on the oslo.db project which going forward
will serve as the basis for database patterns used by other Openstack
applications.

I've summarized what I've learned from the community over the past month
in a wiki entry at:

https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy 

The page also refers to an ORM performance proof of concept which you
can see at https://github.com/zzzeek/nova_poc.

The goal of this wiki page is to publish to the community what's come up
for me so far, to get additional information and comments, and finally
to help me narrow down the areas in which the community would most
benefit by my contributions.

I'd like to get a discussion going here, on the wiki, on IRC (where I am
on freenode with the nickname zzzeek) with the goal of solidifying the
blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
on as well as recruiting contributors to help in all those areas.  I
would welcome contributors on the SQLAlchemy / Alembic projects directly
as well, as we have many areas that are directly applicable to Openstack.

I'd like to thank Red Hat and the Openstack community for welcoming me
on board and I'm looking forward to digging in more deeply in the coming
months!

- mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][oslo][neutron] Need help getting oslo.messaging 1.4.0.0a2 in global requirements

2014-06-30 Thread Paul Michali (pcm)
I have review 103536 out to add this version to global requirements, so
that Neutron has an oslo fix (review 102909) for an encoding failure, which
affects some gate runs. This review for global requirements is failing the
requirements check
(http://logs.openstack.org/36/103536/1/check/check-requirements-integration-dsvm/6d9581c/console.html#_2014-06-30_12_34_56_921).
 I did a "recheck bug 1334898", but see the same error, with the release not
found, even though it is in PyPI. Infra folks say this is a known issue with 
pushing out pre-releases.

Do we have a work-around?
Any proposed solution to try?

Thanks!

 
PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting minutes/log - 06/30/2014

2014-06-30 Thread Renat Akhmerov
Thanks everyone for joining us!

As usual,
Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-06-30-16.01.html
Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-06-30-16.01.log.html

The next meeting will be on July 7th.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-06-30 Thread Luke Gorrie
On 30 June 2014 17:34, Kyle Mestery  wrote:

> It would be great to get you to join the 3rd Party meeting [1] in
> #openstack-meeting at 1800UTC to discuss this. Can you make it today
> Luke?
>

Yes, I'll be there.

Currently I'm looking into "the simplest 3rd party CI that could possibly
work" which would be less sophisticated than devstack-gate but easier for
the operator to understand and be responsible for. (Or: "CI in 100 lines of
shell" with no Jenkins or other large pieces of software to install.)

I'm on #openstack-neutron if anybody wants to kick around ideas or share
scripts that I can borrow ideas from (thanks ODL gang for doing this
already).

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][i18n] why isn't the client translated?

2014-06-30 Thread Sungjin Kang
forwarding I18n Team.

-- 
About.me

부터: Matt Riedemann mrie...@linux.vnet.ibm.com
답장: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
날짜: 2014년 7월 1일 at 오전 12:02:41
에게: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
제목:  [openstack-dev] [nova][i18n] why isn't the client translated?  

I noticed that there is no locale directory or setup.cfg entry for  
babel, which surprises me. The v1_1 shell in python-novaclient has a  
lot of messages marked for translation using the _() function but the v3  
shell doesn't, presumably because someone figured out we don't translate  
the client messages anyway.  
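
(For reference, _() is typically bound with the stdlib gettext machinery,
roughly like this generic sketch; this is not novaclient's actual wiring,
and the "novaclient" domain name is illustrative:)

import gettext

# Sketch: bind _() for a gettext domain; a locale directory with
# compiled .mo files would make translations actually appear.
t = gettext.translation('novaclient', fallback=True)
_ = t.ugettext

print(_("Booting instance..."))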

I'm just wondering why we don't translate the client?  

--  

Thanks,  

Matt Riedemann  


___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Dan Smith
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

> There is a similar old bug for that, with a good suggestion for how
> it could possibly be done:
> 
> https://bugs.launchpad.net/openstack-ci/+bug/1251758

This isn't what I'm talking about. What we need is, for each new
patchset on a given change, an empty table listing all the answers we
expect to see (i.e. one for each of the usual suspect nova CI
systems). The above bug (AFAICT) is simply for tracking last-status of
each, so that if one stops reporting entirely (as minesweeper often
does), we get some indication that the *system* is broken.

- --Dan
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTsYc+AAoJEBeZxaMESjNVcQYH+wayg4T9QFe4tTvGn24PisCf
5cEaeSkwXl+Adiae5cCfCGSTjlErK4lpFtzFapKukcM0+eEp464toskl7vNC0izp
UWCpcg2gbON6Ef/AMa1+PT8uXR9OYAo+/eU8NUJNM01ajeZqqe3H3jnltgoUau0O
fq3O3+Wa2PxBTAVVGi3HXJl4SWpVdEuYDZYOBOtkDXwhIS/hvBdIRuJwt0CygxHx
78WatFsQ09tBHaQCJbs2E+Oar0rD4sF93qjG8jAFiVB/0SJ6wV7AsLColVId2hbe
Qfua3Q6CufJBO2WHV7JORX2fBSOTUmcPcOM1IE4/lgXGiyu3aw5ataL9e4qxudQ=
=VoHH
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Anita Kuno
On 06/30/2014 11:41 AM, Ilya Shakhat wrote:
> 2014-06-30 19:17 GMT+04:00 Kurt Taylor :
> 
>> Dan Smith  wrote on 06/27/2014 12:33:48 PM:
>>> If it really does show up right in Gerrit as if it were integrated,
>>> then that would be fine with me. I think the biggest problem we have
>>> right now is that a lot of the CI systems are very inconsistent in
>>> their reporting and we often don't realize when one of them *hasn't*
>>> voted. If the new thing could fill out the chart based on everything
>>> we expect to see a vote from, so that it's very clear that one is
>>> missing, then that's a net win right there.
>>
>>
>> There is a similar old bug for that, with a good suggestion for how it
>> could possibly be done:
>>
>> https://bugs.launchpad.net/openstack-ci/+bug/1251758
>>
>>
> What about having a report like this:
> http://stackalytics.com/report/ci/neutron/7 ?
> 
> Thanks,
> Ilya
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
As I responded here, Ilya:
http://lists.openstack.org/pipermail/openstack-dev/2014-June/038933.html

Right now that dashboard introduces more confusion than it alleviates
since the definition of "success" in regards to third party ci systems
has yet to be defined by the community.

I look forward to discussing this more at today's third party meeting,
as Kurt has already linked to. It would be great if you would be willing
to add an item to the agenda.

Thanks Ilya,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Ilya Shakhat
2014-06-30 19:17 GMT+04:00 Kurt Taylor :

> Dan Smith  wrote on 06/27/2014 12:33:48 PM:
> > If it really does show up right in Gerrit as if it were integrated,
> > then that would be fine with me. I think the biggest problem we have
> > right now is that a lot of the CI systems are very inconsistent in
> > their reporting and we often don't realize when one of them *hasn't*
> > voted. If the new thing could fill out the chart based on everything
> > we expect to see a vote from, so that it's very clear that one is
> > missing, then that's a net win right there.
>
>
> There is a similar old bug for that, with a good suggestion for how it
> could possibly be done:
>
> https://bugs.launchpad.net/openstack-ci/+bug/1251758
>
>
What about having a report like this:
http://stackalytics.com/report/ci/neutron/7 ?

Thanks,
Ilya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Meeting this week?

2014-06-30 Thread Matt Riedemann



On 6/30/2014 8:30 AM, Joe Gordon wrote:


On Jun 30, 2014 9:23 AM, "Andrew Laski"  wrote:
 >
 >
 > On 06/29/2014 08:01 PM, Michael Still wrote:
 >>
 >> Hi. The meeting this week would be on the 3rd of July, which I assume
 >> means that many people will be out of the office. Do people think its
 >> worth running the meeting or shall we give this week a miss?
 >
 >
 > I will be traveling during the meeting so I'm +1 on giving it a miss.

Same here.

 >
 >>
 >> Thanks,
 >> Michael
 >>
 >
 >
 > ___
 > OpenStack-dev mailing list
 > OpenStack-dev@lists.openstack.org

 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1 to skip.

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-06-30 Thread Kyle Mestery
On Mon, Jun 30, 2014 at 3:19 AM, Luke Gorrie  wrote:
> Howdy!
>
> Paging other 3rd party CI operators...
>
> I would like to run a simple and robust 3rd party CI. Simple as in a small
> number of moving parts, robust as in unlikely to make mistakes due to
> unexpected problems.
>
> I'm imagining:
>
> - 100 lines of shell for the implementation.
>
> - Minimum of daemons. (Just Jenkins? Ideally not even that...)
>
> - Robust detection of operational problems (CI bugs) vs system-under-test
> problems (changes that should be voted against).
>
> - Effective notifications on all errors or negative votes.
>
> Does anybody already have an implementation like this that they would like
> to share? (Is anybody else wanting something in this direction?)
>
> I've been iterating on 3rd party CI mechanisms (currently onto my 3rd
> from-scratch setup) and I have not been completely satisfied with any of
> them. I would love to hear from someone who has a minimalist implementation
> that they are happy with :).
>
> Cheers,
> -Luke
>
It would be great to get you to join the 3rd Party meeting [1] in
#openstack-meeting at 1800 UTC to discuss this. Can you make it today,
Luke?

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/ThirdParty
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
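
For what it's worth, the core event loop of such a minimal CI can be sketched
in a few lines of Python (the account name and project below are hypothetical
placeholders; assumes an SSH-enabled CI account on Gerrit, with fetching the
change and voting left out):

    import json
    import subprocess

    # Listen for new patch sets on Gerrit's SSH event stream (port 29418).
    # "my-ci" and the project name are placeholders, not real accounts.
    stream = subprocess.Popen(
        ["ssh", "-p", "29418", "my-ci@review.openstack.org",
         "gerrit", "stream-events"],
        stdout=subprocess.PIPE)

    for line in stream.stdout:
        event = json.loads(line)
        if event.get("type") != "patchset-created":
            continue
        if event["change"]["project"] != "openstack/neutron":
            continue
        ref = event["patchSet"]["ref"]
        # Here you would fetch `ref`, run the tests, and report back
        # with `gerrit review` over the same SSH connection.
        print("would test", ref)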


Re: [openstack-dev] [Neutron] VMware networking

2014-06-30 Thread Kyle Mestery
On Mon, Jun 30, 2014 at 10:18 AM, Armando M.  wrote:
> Hi Gary,
>
> Thanks for sending this out, comments inline.
>
Indeed, thanks Gary!

> On 29 June 2014 00:15, Gary Kotton  wrote:
>>
>> Hi,
>> At the moment there are a number of different BPs that are proposed to
>> enable different VMware network management solutions. The following specs
>> are in review:
>>
>> VMware NSX-vSphere plugin: https://review.openstack.org/102720
>> Neutron mechanism driver for VMWare vCenter DVS network
>> creation:https://review.openstack.org/#/c/101124/
>> VMware dvSwitch/vSphere API support for Neutron ML2:
>> https://review.openstack.org/#/c/100810/
>>
I've commented in these reviews about combining efforts here, I'm glad
you're taking the lead to make this happen Gary. This is much
appreciated!

>> In addition to this there is also talk about HP proposing some form of
>> VMware network management.
>
>
> I believe this is blueprint [1]. This was proposed a while ago, but now it
> needs to go through the new BP review process.
>
> [1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan
>
>>
>> Each of the above has a specific use case and will enable existing vSphere
>> users to adopt and make use of Neutron.
>>
>> Items #2 and #3 offer a use case where the user is able to leverage and
>> manage VMware DVS networks. This support will have the following
>> limitations:
>>
>> Only VLANs are supported (there is no VXLAN support)
>> No security groups
>> #3 – the spec indicates that it will make use of pyvmomi
>> (https://github.com/vmware/pyvmomi). There are a number of disclaimers here:
>>
>> This is currently blocked on integration into the requirements
>> project (https://review.openstack.org/#/c/69964/)
>> The idea was to have oslo.vmware leverage this in the future
>> (https://github.com/openstack/oslo.vmware)
>>
>> Item #1 will offer support for all of the existing Neutron APIs and their
>> functionality. This solution will require an additional component called NSX
>> (https://www.vmware.com/support/pubs/nsx_pubs.html).
>>
>
> It's great to see this breakdown, it's very useful in order to identify the
> potential gaps and overlaps amongst the various efforts around ESX and
> Neutron. This will also ensure a path towards a coherent code contribution.
>
>> It would be great if we could all align our efforts and have some clear
>> development items for the community. In order to do this I’d like to suggest
>> that we meet to sync and discuss all efforts. Please let me know if the
>> following sounds ok for an initial meeting to discuss how we can move
>> forwards:
>>  - Tuesday 15:00 UTC
>>  - IRC channel #openstack-vmware
>
>
> I am available to join.
>
>>
>>
>> We can discuss the following:
>>
>> Different proposals
>> Combining efforts
>> Setting a formal time for meetings and follow ups
>>
>> Looking forwards to working on this stuff with the community and providing
>> a gateway to using Neutron and further enabling the adoption of OpenStack.
>
>
> I think code contribution is only one aspect of this story; my other concern
> is that from a usability standpoint we would need to provide a clear
> framework for users to understand what these solutions can do for them and
> which one to choose.
>
> Going forward I think it would be useful if we produced an overarching
> blueprint that outlines all the ESX options being proposed for OpenStack
> Networking (and the existing ones, like NSX - formerly known as NVP, or
> nova-network), their benefits and drawbacks, their technical dependencies,
> system requirements, APIs supported, etc., so that a user can make an informed
> decision when looking at ESX deployments in OpenStack.
>
>>
>>
>> Thanks
>> Gary
>>
>
> Cheers,
> Armando
>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Kurt Taylor

Sean Dague  wrote on 06/30/2014 06:03:50 AM:

> From:
>
> Sean Dague 
>
> To:
>
> "OpenStack Development Mailing List (not for usage questions)"
> ,
>
> Date:
>
> 06/30/2014 06:09 AM
>
> Subject:
>
> Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit
>
> On 06/29/2014 09:39 AM, Joshua Hesketh wrote:
> > On 6/28/14 10:40 AM, James E. Blair wrote:
> >> An alternate approach would be to have third-party CI systems register
> >> jobs with OpenStack's Zuul rather than using their own account.  This
> >> would mean only a single report of all jobs (upstream and 3rd-party)
> >> per-patchset.  It significantly reduces clutter and makes results more
> >> accessible -- but even with one system we've never actually wanted to
> >> have Jenkins results in comments, so I think one of the other options
> >> would be preferred.  Nonetheless, this is possible with a little bit of
> >> work.
> >
> > I agree this isn't the preferred solution, but I disagree with the
> > little bit of work. This would require CI systems registering with
> > gearman which would mean security issues. The biggest problem with this
> > though is that zuul would be stuck waiting for results from 3rd parties
> > which often have very slow return times.
>
> Right, one of the other issues is the quality of the CI results varies
> as well.

Agreed. After last summit, Anita, Jay and I decided to start gathering a team
of 3rd party testers that have the goal of improving the quality of the third
party systems. We are starting with gathering global unwritten requirements,
improving documentation and reaching out to new projects that will have third
party testing needs. We are still in the early stages but now have weekly
meetings to discuss what needs to be done and track progress.

https://wiki.openstack.org/wiki/Meetings/ThirdParty

> I think one of the test result burn out issues right now is based on the
> fact that they are too rolled up as it is. For instance, a docs only
> change gets Tempest results, which humans know are irrelevant, but
> Jenkins insists they aren't. I think that if we rolled up more
> information, and waited longer, we'd be in a worse state.

Maybe it could promptly time out and then report the systems that did not
complete? That would also have the benefit of enforcing a time limit on
reporting results.


Kurt Taylor (krtaylor)
OpenStack Development Lead - PowerKVM CI
IBM Linux Technology Center
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking

2014-06-30 Thread Armando M.
Hi Gary,

Thanks for sending this out, comments inline.

On 29 June 2014 00:15, Gary Kotton  wrote:

>  Hi,
>  At the moment there are a number of different BP’s that are proposed to
> enable different VMware network management solutions. The following specs
> are in review:
>
>1. VMware NSX-vSphere plugin: https://review.openstack.org/102720
>2. Neutron mechanism driver for VMWare vCenter DVS network creation:
>https://review.openstack.org/#/c/101124/
>3. VMware dvSwitch/vSphere API support for Neutron ML2:
>https://review.openstack.org/#/c/100810/
>
> In addition to this there is also talk about HP proposing some form
> of VMware network management.
>

I believe this is blueprint [1]. This was proposed a while ago, but now it
needs to go through the new BP review process.

[1] - https://blueprints.launchpad.net/neutron/+spec/ovsvapp-esxi-vxlan


>  Each of the above has a specific use case and will enable existing vSphere
> users to adopt and make use of Neutron.
>
>  Items #2 and #3 offer a use case where the user is able to leverage and
> manage VMware DVS networks. This support will have the following
> limitations:
>
>- Only VLANs are supported (there is no VXLAN support)
>- No security groups
>- #3 – the spec indicates that it will make use of pyvmomi (
>https://github.com/vmware/pyvmomi). There are a number of disclaimers
>here:
>   - This is currently blocked on integration into the
>   requirements project (https://review.openstack.org/#/c/69964/)
>   - The idea was to have oslo.vmware leverage this in the future (
>   https://github.com/openstack/oslo.vmware)
>
> Item #1 will offer support for all of the existing Neutron APIs and their
> functionality. This solution will require an additional component called NSX
> (https://www.vmware.com/support/pubs/nsx_pubs.html).
>
>
It's great to see this breakdown, it's very useful in order to identify the
potential gaps and overlaps amongst the various efforts around ESX and
Neutron. This will also ensure a path towards a coherent code contribution.

 It would be great if we could all align our efforts and have some clear
> development items for the community. In order to do this I’d like to suggest
> that we meet to sync and discuss all efforts. Please let me know if the
> following sounds ok for an initial meeting to discuss how we can move
> forwards:
>  - Tuesday 15:00 UTC
>  - IRC channel #openstack-vmware
>

I am available to join.


>
>  We can discuss the following:
>
>1. Different proposals
>2. Combining efforts
>3. Setting a formal time for meetings and follow ups
>
> Looking forwards to working on this stuff with the community and providing
> a gateway to using Neutron and further enabling the adoption of OpenStack.
>

I think code contribution is only one aspect of this story; my other
concern is that from a usability standpoint we would need to provide a
clear framework for users to understand what these solutions can do for
them and which one to choose.

Going forward I think it would be useful if we produced an overarching
blueprint that outlines all the ESX options being proposed for OpenStack
Networking (and the existing ones, like NSX - formerly known as NVP, or
nova-network), their benefits and drawbacks, their technical dependencies,
system requirements, API supported etc. so that a user can make an informed
decision when looking at ESX deployments in OpenStack.


>
>  Thanks
> Gary
>
>
Cheers,
Armando


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Kurt Taylor
Dan Smith  wrote on 06/27/2014 12:33:48 PM:
>
> > What if 3rd Party CI didn't vote in Gerrit? What if it instead
> > published to some 3rd party test reporting site (a thing that
> > doesn't yet exist). Gerrit has the facility so that we could inject
> > the dashboard content for this in Gerrit in a little table
> > somewhere, but the data would fundamentally live outside of Gerrit.
> > It would also mean that all the aggregate reporting of 3rd Party CI
> > that's being done in custom gerrit scripts, could be integrated
> > directly into such a thing.
>
> If it really does show up right in Gerrit as if it were integrated,
> then that would be fine with me. I think the biggest problem we have
> right now is that a lot of the CI systems are very inconsistent in
> their reporting and we often don't realize when one of them *hasn't*
> voted. If the new thing could fill out the chart based on everything
> we expect to see a vote from, so that it's very clear that one is
> missing, then that's a net win right there.

There is a similar old bug for that, with a good suggestion for how it
could possibly be done:

https://bugs.launchpad.net/openstack-ci/+bug/1251758
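
A rough sketch of the "fill out the chart" idea in Python (the CI account
names are hypothetical, and the comment structure assumes Gerrit's JSON
output):

    # Expected CI accounts are hypothetical examples, not a real roster.
    EXPECTED_CI = {"jenkins", "powerkvm-ci", "some-vendor-ci"}

    def missing_ci_votes(comments):
        """Given reviewer comments on a patch set, list the expected CI
        accounts that have not reported yet."""
        reported = {c["reviewer"]["username"] for c in comments}
        return sorted(EXPECTED_CI - reported)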


Kurt Taylor (krtaylor)
OpenStack Development Lead - PowerKVM CI
IBM Linux Technology Center
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] SCSS conversion and SCSS upgrade patches

2014-06-30 Thread Douglas Fish
It's possible that the only purpose of this discussion will be to fix my
thinking, but I still can't understand why we are so anxious to integrate a
patch that makes Horizon look funny.  I expect the patch Radomir has out
for review will be quite stable.  People should be able to contribute work
based on it now using dependent patches.
https://wiki.openstack.org/wiki/Gerrit_Workflow#Add_dependency   Once we
have a set of patches with good Horizon behavior we can merge the set.

Doug Fish

Jason Rist  wrote on 06/30/2014 09:22:20 AM:

> From: Jason Rist 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Cc: Douglas Fish/Rochester/IBM@IBMUS
> Date: 06/30/2014 09:22 AM
> Subject: Re: [openstack-dev] [Horizon] SCSS conversion and SCSS
> upgrade patches
>
> On Mon 30 Jun 2014 08:09:40 AM MDT, Douglas Fish wrote:
> >
> > I had an IRC discussion with jtomasek and rdopieralski recently regarding
> > Radomir's patch for converting to SCSS bootstrap.
> > https://review.openstack.org/#/c/90371/   We were trying to sort out how
> > soon this patch should merge.  We'd like to discuss this at the team
> > meeting tomorrow, but I'm sharing here to get an initial discussion
> > started.
> >
> > It seems that in this version of bootstrap SCSS had some bugs.  They are
> > low impact, but pretty obvious bugs - I saw mouseover styling and disable
> > button styling problems.
> >
> > The most straightforward fix to this is to upgrade bootstrap.  I understand
> > that Jiri is working on that this week, and would like to have the SCSS
> > patch merged ASAP to help with that effort.
> >
> > My feeling is that we shouldn't merge the SCSS patch until we have a
> > bootstrap patch ready to merge.  I'd like to see dependent patches used to
> > manage this so we can get both changes reviewed and merged at the same
> > time.
> >
> > Radomir has shared that he thinks dependent patches are too painful to use
> > for this situation.  Also he'd like to see the SCSS bootstrap patch merged
> > ASAP because the horizon split depends on that work as well.
> >
> > Doug Fish
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> I agree with Radomir and will voice that same opinion during the
> meeting.  Lets get the SCSS patch up, and then everyone can contribute
> to improvements, not just Jiri.
>
> -J
>
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack Management UI
> Red Hat, Inc.
> openuc: +1.972.707.6408
> mobile: +1.720.256.3933
> Freenode: jrist
> github/identi.ca: knowncitizen
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-30 Thread Giulio Fidente

On 06/30/2014 11:16 AM, Michael Kerrin wrote:

I am trying to finish off https://review.openstack.org/#/c/90134 -
percona xtradb cluster for Debian-based systems.

I have read into this thread that I can error out on Red Hat systems when
trying to install percona and tell them to use mariadb instead, since percona
isn't supported there. Is this correct?


I think on RHEL mysql/xtradb is expected to work from the percona
tarball, as it used to happen on Fedora.


Yet this change [1] is eventually going to switch the default solution
to mariadb/galera for RHEL too, which is consistent with the previous
idea from James.


There could still be unexpected issues on distros for which we don't have
CI, but I wouldn't expect troubles caused by your change.


1. https://review.openstack.org/#/c/102818/
--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread James E. Blair
Joshua Hesketh  writes:

> On 6/28/14 10:40 AM, James E. Blair wrote:
>> An alternate approach would be to have third-party CI systems register
>> jobs with OpenStack's Zuul rather than using their own account.  This
>> would mean only a single report of all jobs (upstream and 3rd-party)
>> per-patchset.  It significantly reduces clutter and makes results more
>> accessible -- but even with one system we've never actually wanted to
>> have Jenkins results in comments, so I think one of the other options
>> would be preferred.  Nonetheless, this is possible with a little bit of
>> work.
>
> I agree this isn't the preferred solution, but I disagree with the
> little bit of work. This would require CI systems registering with
> gearman which would mean security issues. The biggest problem with
> this though is that zuul would be stuck waiting for results from 3rd
> parties which often have very slow return times.

"Security issues" is a bit vague.  They already register with Gerrit;
I'm only suggesting that the point of aggregation would change.  I'm
anticipating that they would use authenticated SSL, with ACLs scoped to
the names of jobs each system is permitted to run.  From the perspective
of overall security as well as network topology (ie, firewalls), very
little changes.  The main differences are third party CI systems don't
have to run Zuul anymore, and we go back to having a smaller number of
votes/comments.

Part of the "little bit of work" I was referring to was adding a
timeout.  That should truly be not much work, and work we're planning on
doing anyway to help with the tripleo cloud.

But anyway, it's not important to design this out if we prefer another
solution (and I prefer the table of results separated from comments).

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][i18n] why isn't the client translated?

2014-06-30 Thread Matt Riedemann
I noticed that there is no locale directory or setup.cfg entry for 
babel, which surprises me.  The v1_1 shell in python-novaclient has a 
lot of messages marked for translation using the _() function but the v3 
shell doesn't, presumably because someone figured out we don't translate 
the client messages anyway.
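
For context, the v1_1 markers look roughly like this (the import path is an
assumption based on the oslo-incubator copy the client carries, not verified
against the tree):

    # Hypothetical example of a message marked for translation with _().
    from novaclient.openstack.common.gettextutils import _

    msg = _("No fixed IP found.")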


I'm just wondering why we don't translate the client?

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Neutron IPv6 in Icehouse and further

2014-06-30 Thread Maksym Lobur
Many thanks everyone for the feedback

Best regards,
Max Lobur,
OpenStack Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Mon, Jun 30, 2014 at 4:10 PM, Robert Li (baoli)  wrote:

>  Hi,
>
>  There is a patch for radvd https://review.openstack.org/#/c/102648/2 that
> you can use in addition to the devstack patch. You want to make sure that
> ipv6 is enabled and ra accepted with your VM’s image. Both patches are
> under development.
>
>  To use dhcpv6, the current dhcp agent should be working. However, it
> might be broken due to a recent commit. If you find a Traceback in your
> dhcp agent log complaining about an uninitialized reference to the variable
> 'mode', you may have hit that issue. You can just initialize it to 'static'.
> In addition, only recently, dhcp messages may be dropped before entering the
> VM. Therefore, you might need to disable the ipv6 rules manually with
> "ip6tables -F" after a VM is launched.  Also make sure you are using the
> latest dnsmasq (2.68).
>
>  Thanks,
> Robert
>
>   On 6/27/14, 3:47 AM, "Jaume Devesa"  wrote:
>
>Hello Maksym,
>
>  Last week I had more or less the same questions as you and I
> investigated a little bit... Currently we have the *ipv6_ra_mode* and *ipv6_address_mode*
> in the subnet entity. The way you combine these two values will
> determine how, and by whom, your VM's IPv6 addresses are configured. Not all the
> combinations are possible. This document[1] and the
> *upstream-slaac-support* *spec*[2] provide the possible combinations. Not
> sure which one is more up to date...
> sure which one is more updated...
>
>  If you want to try the current IPv6 support, you can use Baodong Li's
> devstack patch [3], although it is still in development. Follow the commit
> message instructions to provide a radvd daemon. That means that there is no RA
> RA advertiser in Neutron currently. There is a *spec* in review[4] to
> fill this gap.
>
>  The changes to allow DHCPv6 in dnsmasq are in review in this patch[5].
>
>  This is what I found... I hope some IPv6 folks can correct me if this
> information is not accurate enough (or wrong)
>
>
>  [1]:
> https://www.dropbox.com/s/9bojvv9vywsz8sd/IPv6%20Two%20Modes%20v3.0.pdf
> [2]:
> http://docs-draft.openstack.org/43/88043/9/gate/gate-neutron-specs-docs/82c251a/doc/build/html/specs/juno/ipv6-provider-nets-slaac.html
> [3]: https://review.openstack.org/#/c/87987
> [4]: https://review.openstack.org/#/c/101306/
> [5]: https://review.openstack.org/#/c/70649/
>
>
>
> On 27 June 2014 00:51, Martinx - ジェームズ  wrote:
>
>> Hi! I'm waiting for that too...
>>
>>  Currently, I'm running IceHouse with static IPv6 address, with the
>> topology "VLAN Provider Networks" and, to make it easier, I'm counting on
>> the following blueprint:
>>
>>  https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac
>>
>>  ...but, I'm not sure if it will be enough to enable basic IPv6 support
>> (without using Neutron as Instance's default gateway)...
>>
>>  Cheers!
>> Thiago
>>
>>
>>  On 26 June 2014 19:35, Maksym Lobur  wrote:
>>
>>>  Hi Folks,
>>>
>>>  Could you please tell what is the current state of IPv6 in Neutron?
>>> Does it have DHCPv6 working?
>>>
>>>  What is the best point to start hacking from? Devstack stable/icehouse
>>> or maybe some tag? Are there any docs / raw deployment guides?
>>> I see some patches not landed yet [1] ... I assume it won't work without
>>> them, right?
>>>
>>>  Somehow I can't open any of the code reviews from the [2] (Not Found)
>>>
>>>   [1]
>>> https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:%255E.*%255Cipv6.*,n,z
>>> [2] https://wiki.openstack.org/wiki/Neutron/IPv6
>>>
>>>  Best regards,
>>> Max Lobur,
>>> Python Developer, Mirantis, Inc.
>>>
>>>  Mobile: +38 (093) 665 14 28
>>> Skype: max_lobur
>>>
>>>  38, Lenina ave. Kharkov, Ukraine
>>> www.mirantis.com
>>> www.mirantis.ru
>>>
>>>  ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>  --
> Jaume Devesa
> Software Engineer at Midokura
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rubick] Proposal to make py33 job voting for stackforge/rubick

2014-06-30 Thread Jeremy Stanley
On 2014-06-30 22:14:53 +0800 (+0800), Thomas Goirand wrote:
[...]
> please also consider adding support for Python 3.4 at least,

This is in progress. Just last week we started running some CI jobs
on Ubuntu Trusty, which has Python 3.4 available in its main
archive. The plan is to add new gate-{name}-python34 job templates
in our Jenkins Job Builder configuration, and then start switching
projects over fairly soon (the delta between 3.3 and 3.4 support is
minimal enough that we expect 3.4 will "just work" for most of our
projects which are already being gated on 3.3).

> and Python 3.2 if you can.

This is fairly unlikely to happen. Python 3.2 doesn't play well with
simultaneous 2.x support, and I don't foresee us adding any new CI
platforms where it would be available. Also the probability that
someone will need to run OpenStack software on a platform which has
Python 3.2 as its default interpreter (not 2.7 or 2.6) is rapidly
shrinking... outdated releases of Arch Linux maybe? I can't think of
many other places that might be an issue honestly.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-30 Thread Gordon Sim

On 06/30/2014 12:22 PM, Ihar Hrachyshka wrote:

Alexei Kornienko wrote:

Some performance tests may be introduced but they would be more
like functional tests since they require setup of actual
messaging server (rabbit, etc.).


Yes. I think we already have some. F.e.
tests/drivers/test_impl_qpid.py attempts to use local Qpid server
(backing up to fake server if it's not available).


I always get failures when there is a real qpidd service listening on 
the expected port. Does anyone else see this?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslosphinx & hacking build-depending on each other

2014-06-30 Thread Jeremy Stanley
On 2014-06-30 22:11:30 +0800 (+0800), Thomas Goirand wrote:
> It'd be nice to fix the fact that oslosphinx & hacking are
> build-depending on each other. How can we fix this?

They're only build-depending on one another (in the Debian sense)
because you have made them do so. Hacking does have a build-time
dependency on oslosphinx for generating its documentation, yes. On
the other hand oslosphinx only has a test-time dependency on
hacking. It's the decision to also run your tests at build-time
(presumably because Debian lacks any package build provision to
differentiate doc build requirements from testing requirements)
which makes this at all circular.

Perhaps you could skip running hacking tests on oslosphinx (you
could run the rest of flake8 against it, just omit the hacking
extension)? Alternatively, use build profiles... oslosphinx pretty
clearly falls into the "documentation tools" description in
https://wiki.debian.org/DebianBootstrap#Circular_dependencies.2Fstaged_builds
so might be a good fit.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-06-30 Thread Jesse Pretorius
On 30 June 2014 16:05, Eric Frizziero  wrote:

> In more detail, some features of the FairshareScheduler are:
> a) It assigns dynamically the proper priority to every new user requests;
> b) The priority of the queued requests will be recalculated periodically
> using the fairshare algorithm. This feature guarantees the usage of the
> cloud resources is distributed among users and groups by considering the
> portion of the cloud resources allocated to them (i.e. share) and the
> resources already consumed;
> c) all user requests will be inserted in a (persistent) priority queue and
> then processed asynchronously by the dedicated process (filtering +
> weighting phase) when compute resources are available;
> d) From the client point of view the queued requests remain in
> “Scheduling” state till the compute resources are available. No new states
> added: this prevents any possible interaction issue with the Openstack
> clients;
> e) User requests are dequeued by a pool of WorkerThreads (configurable),
> i.e. no sequential processing of the requests;
> f) The failed requests at filtering + weighting phase may be inserted
> again in the queue for n-times (configurable).
>
> We have integrated the FairShareScheduler in our Openstack installation
> (release "HAVANA"). We're now working to adapt the FairShareScheduler to
> the new release "IceHouse".
>
> Does anyone have experiences in those issues found in our cloud scenario?
>
> Could the FairShareScheduler be useful for the Openstack community?
> In that case, we'll be happy to share our work.
>

Sounds like an interesting option to have available.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] SCSS conversion and SCSS upgrade patches

2014-06-30 Thread Jason Rist
On Mon 30 Jun 2014 08:09:40 AM MDT, Douglas Fish wrote:
>
> I had an IRC discussion with jtomasek and rdopieralski recently regarding
> Radomir's patch for converting to SCSS bootstrap.
> https://review.openstack.org/#/c/90371/   We were trying to sort out how
> soon this patch should merge.  We'd like to discuss this at the team
> meeting tomorrow, but I'm sharing here to get an initial discussion
> started.
>
> It seems that in this version of bootstrap SCSS had some bugs.  They are
> low impact, but pretty obvious bugs - I saw mouseover styling and disable
> button styling problems.
>
> The most straightforward fix to this is to upgrade bootstrap.  I understand
> that Jiri is working on that this week, and would like to have the SCSS
> patch merged ASAP to help with that effort.
>
> My feeling is that we shouldn't merge the SCSS patch until we have a
> bootstrap patch ready to merge.  I'd like to see dependent patches used to
> manage this so we can get both changes reviewed and merged at the same
> time.
>
> Radomir has shared that he thinks dependent patches are too painful to use
> for this situation.  Also he'd like to see the SCSS bootstrap patch merged
> ASAP because the horizon split depends on that work as well.
>
> Doug Fish
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I agree with Radomir and will voice that same opinion during the 
meeting.  Lets get the SCSS patch up, and then everyone can contribute 
to improvements, not just Jiri.

-J

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] SCSS conversion and SCSS upgrade patches

2014-06-30 Thread Douglas Fish

I had an IRC discussion with jtomasek and rdopieralski recently regarding
Radomir's patch for converting to SCSS bootstrap.
https://review.openstack.org/#/c/90371/   We were trying to sort out how
soon this patch should merge.  We'd like to discuss this at the team
meeting tomorrow, but I'm sharing here to get an initial discussion
started.

It seems that in this version of bootstrap SCSS had some bugs.  They are
low impact, but pretty obvious bugs - I saw mouseover styling and disable
button styling problems.

The most straightforward fix to this is to upgrade bootstrap.  I understand
that Jiri is working on that this week, and would like to have the SCSS
patch merged ASAP to help with that effort.

My feeling is that we shouldn't merge the SCSS patch until we have a
bootstrap patch ready to merge.  I'd like to see dependent patches used to
manage this so we can get both changes reviewed and merged at the same
time.

Radomir has shared that he thinks dependent patches are too painful to use
for this situation.  Also he'd like to see the SCSS bootstrap patch merged
ASAP because the horizon split depends on that work as well.

Doug Fish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rubick] Proposal to make py33 job voting for stackforge/rubick

2014-06-30 Thread Thomas Goirand
On 06/27/2014 06:25 PM, Oleg Gelbukh wrote:
> Hello,
> 
> As our commits have consistently passed py33 tests for the last month (although not
> so many changes were made), I propose to enable py33 job voting on
> stackforge/rubick repository.
> 
> What do you think?
> 
> --
> Best regards,
> Oleg Gelbukh

Hi,

While gating with Python 3 is a very good thing, I have to raise the
fact that both Debian Sid and Ubuntu are now using Python 3.4 (Python
3.3 was removed from Sid a few weeks ago). And when it comes to Wheezy,
we're still stuck with Python 3.2.

So please go ahead and gate with Python 3.3, but please also consider
adding support for Python 3.4 at least, and Python 3.2 if you can.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslosphinx & hacking build-depending on each other

2014-06-30 Thread Thomas Goirand
Hi,

It'd be nice to fix the fact that oslosphinx & hacking are
build-depending on each other. How can we fix this?

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-06-30 Thread Eric Frizziero

Hi All,

we have analyzed the nova-scheduler component (FilterScheduler) in our 
Openstack installation used by some scientific teams.


In our scenario, the cloud resources need to be distributed among the 
teams by considering the predefined share (e.g. quota) assigned to each 
team, the portion of the resources currently used and the resources they 
have already consumed.


We have observed that:
1) User requests are sequentially processed (FIFO scheduling), i.e. 
FilterScheduler doesn't provide any dynamic priority algorithm;
2) User requests that cannot be satisfied (e.g. if resources are not 
available) fail and will be lost, i.e. on that scenario nova-scheduler 
doesn't provide any queuing of the requests;
3) OpenStack simply provides a static partitioning of resources among 
various projects / teams (use of quotas). If project/team 1 in a period 
is systematically underutilizing its quota and the project/team 2 
instead is systematically saturating its quota, the only solution to 
give more resource to project/team 2 is a manual change (to be done by 
the admin) to the related quotas.


The need to find a better approach to enable a more effective scheduling 
in Openstack becomes more and more evident when the number of the user 
requests to be handled increases significantly. This is a well known 
problem which has already been solved in the past for the Batch Systems.


In order to solve those issues in our usage scenario of Openstack, we 
have developed a prototype of a pluggable scheduler, named 
FairShareScheduler, with the objective to extend the existing OpenStack 
scheduler (FilterScheduler) by integrating a (batch like) dynamic 
priority algorithm.


The architecture of the FairShareScheduler is explicitly designed to 
provide a high scalability level. To all user requests will be assigned 
a priority value calculated by considering the share allocated to the 
user by the administrator and the evaluation of the effective resource 
usage consumed in the recent past. All requests will be inserted in a 
priority queue, and processed in parallel by a configurable pool of 
workers without interfering with the priority order. Moreover all 
significant information (e.g. priority queue) will be stored in a 
persistence layer in order to provide a fault tolerance mechanism while 
a proper logging system will annotate all relevant events, useful for 
auditing processing.


In more detail, some features of the FairshareScheduler are:
a) It dynamically assigns the proper priority to every new user request;
b) The priority of the queued requests will be recalculated periodically 
using the fairshare algorithm. This feature guarantees that the usage of the 
cloud resources is distributed among users and groups by considering the 
portion of the cloud resources allocated to them (i.e. share) and the 
resources already consumed;
c) All user requests will be inserted in a (persistent) priority queue 
and then processed asynchronously by the dedicated process (filtering + 
weighting phase) when compute resources are available;
d) From the client point of view the queued requests remain in 
“Scheduling” state till the compute resources are available. No new 
states added: this prevents any possible interaction issue with the 
Openstack clients;
e) User requests are dequeued by a pool of WorkerThreads (configurable), 
i.e. no sequential processing of the requests;
f) Requests that fail the filtering + weighting phase may be inserted 
in the queue again, up to n times (configurable).
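
A minimal sketch of the queuing scheme in points a) through e) (the priority
formula below is a stand-in for the real fairshare algorithm, which would
also decay historical usage over time):

    import heapq
    import itertools
    import threading

    class FairSharePriorityQueue(object):
        """Toy version of points a)-c): a thread-safe priority queue
        whose priorities derive from share and recent usage."""

        def __init__(self):
            self._heap = []
            self._order = itertools.count()  # tie-breaker keeps FIFO order
            self._lock = threading.Lock()

        @staticmethod
        def priority(share, recent_usage):
            # Stand-in formula: more share and less recent usage means
            # higher priority.
            return share / (1.0 + recent_usage)

        def put(self, request, share, recent_usage):
            prio = self.priority(share, recent_usage)
            with self._lock:
                # heapq is a min-heap, so store the negated priority
                heapq.heappush(self._heap, (-prio, next(self._order), request))

        def get(self):
            with self._lock:
                return heapq.heappop(self._heap)[2] if self._heap else None

A pool of worker threads (point e) would call get() in a loop, run the
filtering + weighting phase on each request, and re-queue failures as in
point f).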


We have integrated the FairShareScheduler in our Openstack installation 
(release "HAVANA"). We're now working to adapt the FairShareScheduler to 
the new release "IceHouse".


Does anyone have experiences in those issues found in our cloud scenario?

Could the FairShareScheduler be useful for the Openstack community?
In that case, we'll be happy to share our work.

Any feedback/comment is welcome!

Cheers,
Eric.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] DuplicateOptError: duplicate option: fake_rabbit

2014-06-30 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi,

in case you wonder why your neutron checkout has recently started failing with
a 'DuplicateOptError: duplicate option: fake_rabbit' error, this is
because neutron migrated to oslo.messaging recently, and the old RPC layer
was removed from the tree. The problem is that while the .py files were
removed, the .pyc files, being untracked by git, were left on disk. So
now you import both RPC implementations at the same time, and the
error shows up.

To fix the issue, do the following in your local checkout:

$ rm -r neutron/openstack/common/notifier
$ rm -r neutron/openstack/common/rpc

This should fix the issue.
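
A more general cleanup with the same effect (a sketch, assuming you run it
from the root of the checkout) is to drop every stale .pyc file:

    import os

    # Remove compiled files whose source modules are gone; git does not
    # track .pyc files, so they survive branch switches and rebases.
    for root, _dirs, files in os.walk("neutron"):
        for name in files:
            if name.endswith(".pyc"):
                os.remove(os.path.join(root, name))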

Cheers,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCgAGBQJTsW2DAAoJEC5aWaUY1u57ehwIAKxPkfIQgZquWfh60YEqrYhm
HdBdH/nZlInqNpZzfhXDmRApMD4Nncs2NUMmtBAhSRbSDsf1zJ87aDInNj24Fv9P
4i1D2fBjBbqzDxwyXSjkmq+fbVTSzWWXL2n5hBHUWka95PP410W4anKX/k22FmL3
SapteGLMzVVm7QjeHQghlhVHSvPhWKHgxq7RWPswINmZ02TpYawPhTHXWXp0mKIM
BD+WJV7NmJWJvsq4JX8toNsCnO/uKW/1ygppvWRSLEn7FJWlGQp6B+1pBL0ZQgwd
qkYpJpxop3YDJJhD6JbMVRgbldxNDtwVE2kwq4UbOS8d9uQK5RpXRenq1hjMWks=
=nQF1
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Meeting this week?

2014-06-30 Thread Joe Gordon
On Jun 30, 2014 9:23 AM, "Andrew Laski"  wrote:
>
>
> On 06/29/2014 08:01 PM, Michael Still wrote:
>>
>> Hi. The meeting this week would be on the 3rd of July, which I assume
>> means that many people will be out of the office. Do people think it's
>> worth running the meeting or shall we give this week a miss?
>
>
> I will be traveling during the meeting so I'm +1 on giving it a miss.

Same here.

>
>>
>> Thanks,
>> Michael
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Meeting this week?

2014-06-30 Thread Andrew Laski


On 06/29/2014 08:01 PM, Michael Still wrote:

Hi. The meeting this week would be on the 3rd of July, which I assume
means that many people will be out of the office. Do people think it's
worth running the meeting or shall we give this week a miss?


I will be traveling during the meeting so I'm +1 on giving it a miss.



Thanks,
Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Neutron IPv6 in Icehouse and further

2014-06-30 Thread Robert Li (baoli)
Hi,

There is a patch for radvd https://review.openstack.org/#/c/102648/2 that you 
can use in addition to the devstack patch. You want to make sure that ipv6 is 
enabled and ra accepted with your VM’s image. Both patches are under 
development.

To use dhcpv6, the current dhcp agent should be working. However, it might be 
broken due to a recent commit. If you find a Traceback in your dhcp agent log 
complaining about an uninitialized reference to the variable 'mode', you may 
have hit that issue. You can just initialize it to 'static'. In addition, only 
recently, dhcp messages may be dropped before entering the VM. Therefore, you 
might need to disable the ipv6 rules manually with "ip6tables -F" after a VM is 
launched. Also make sure you are using the latest dnsmasq (2.68).

Thanks,
Robert

On 6/27/14, 3:47 AM, "Jaume Devesa"  wrote:

Hello Maksym,

Last week I had more or less the same questions as you and I investigated a 
little bit... Currently we have the ipv6_ra_mode and ipv6_address_mode in the 
subnet entity. The way you combine these two values will determine how, and by 
whom, your VM's IPv6 addresses are configured. Not all the combinations are possible. 
This document[1] and the upstream-slaac-support spec[2] provide the possible 
combinations. Not sure which one is more up to date...

If you want to try the current IPv6 support, you can use Baodong Li's devstack 
patch [3], although it is still in development. Follow the commit message 
instructions to provide a radvd daemon. That means that there is no RA 
advertiser in Neutron currently. There is a spec in review[4] to fill this gap.

The changes to allow DHCPv6 in dnsmasq are in review in this patch[5].

This is what I found... I hope some IPv6 folks can correct me if this 
information is not accurate enough (or wrong)


[1]: https://www.dropbox.com/s/9bojvv9vywsz8sd/IPv6%20Two%20Modes%20v3.0.pdf
[2]: 
http://docs-draft.openstack.org/43/88043/9/gate/gate-neutron-specs-docs/82c251a/doc/build/html/specs/juno/ipv6-provider-nets-slaac.html
[3]: https://review.openstack.org/#/c/87987
[4]: https://review.openstack.org/#/c/101306/
[5]: https://review.openstack.org/#/c/70649/
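
To make the two subnet attributes concrete, creating a SLAAC subnet through
python-neutronclient would look roughly like this (a sketch; the credentials
and the network id are placeholders):

    from neutronclient.v2_0 import client

    # All credentials and ids below are placeholders.
    neutron = client.Client(username="admin", password="secret",
                            tenant_name="admin",
                            auth_url="http://controller:5000/v2.0")
    neutron.create_subnet({"subnet": {
        "network_id": "NETWORK_ID",
        "ip_version": 6,
        "cidr": "2001:db8::/64",
        "ipv6_ra_mode": "slaac",       # who sends router advertisements
        "ipv6_address_mode": "slaac",  # how the VM derives its address
    }})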



On 27 June 2014 00:51, Martinx - ジェームズ  wrote:
Hi! I'm waiting for that too...

Currently, I'm running IceHouse with static IPv6 address, with the topology 
"VLAN Provider Networks" and, to make it easier, I'm counting on the following 
blueprint:

https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac

...but, I'm not sure if it will be enough to enable basic IPv6 support (without 
using Neutron as Instance's default gateway)...

Cheers!
Thiago


On 26 June 2014 19:35, Maksym Lobur  wrote:
Hi Folks,

Could you please tell what is the current state of IPv6 in Neutron? Does it 
have DHCPv6 working?

What is the best point to start hacking from? Devstack stable/icehouse or maybe 
some tag? Are there any docs / raw deployment guides?
I see some patches not landed yet [1] ... I assume it won't work without them, 
right?

Somehow I can't open any of the code reviews from the [2] (Not Found)

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:%255E.*%255Cipv6.*,n,z
[2] https://wiki.openstack.org/wiki/Neutron/IPv6

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo incubator logging issues

2014-06-30 Thread Sean Dague
Every time I crack open nova logs in detail, at least 2 new oslo
incubator log issues have been introduced.

The current one is clearly someone over-exploding arrays, as we're
getting things like:
2014-06-29 13:36:41.403 19459 DEBUG nova.openstack.common.processutils
[-] Running cmd (subprocess): [ ' e n v ' ,   ' L C _ A L L = C ' ,   '
L A N G = C ' ,   ' q e m u - i m g ' ,   ' i n f o ' ,   ' / o p t / s
t a c k / d a t a / n o v a / i n s t a n c e s / e f f 7 3 1 3 a - 1 1
b 2 - 4 0 2 b - 9 c c d - 6 5 7 8 c b 8 7 9 2 d b / d i s k ' ] execute
/opt/stack/new/nova/nova/openstack/common/processutils.py:160

(yes all those spaces are in there, which now effectively inhibits search).
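
One plausible way to produce output like that (an assumption about the cause,
not a verified diagnosis of the oslo code) is joining the string form of the
command list instead of the list itself:

    cmd = ['env', 'LC_ALL=C', 'qemu-img', 'info', '/path/to/disk']

    ' '.join(cmd)       # intended: "env LC_ALL=C qemu-img info /path/to/disk"
    ' '.join(str(cmd))  # garbled:  "[ ' e n v ' ,   ' L C _ A L L = C ' , ..."

The second form space-separates every character of the list's repr, which
matches the output above.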

Also on every wsgi request to Nova API we get something like this:


2014-06-29 13:26:43.836 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute:get will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.837 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:security_groups will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:security_groups will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:keypairs will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:hide_server_addresses will be now enforced
enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:extended_volumes will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:config_drive will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:server_usage will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:extended_status will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:extended_server_attributes will be now enforced
enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:extended_ips_mac will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:extended_ips will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:extended_availability_zone will be now enforced
enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
2014-06-29 13:26:43.844 DEBUG nova.openstack.common.policy
[req-86680d63-6d6c-4962-9274-1de7de8ca37d
FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
Rule compute_extension:disk_config will be now enforced enforce
/opt/stack/new/nova/nova/openstack/common/policy.py:288

On *every* request.
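
As a local stopgap while debugging (a sketch, not a proposed oslo change), a
logging filter can drop the repeats:

    import logging

    class LogOnceFilter(logging.Filter):
        """Let a matching message through once, then drop duplicates."""

        def __init__(self, marker="will be now enforced"):
            logging.Filter.__init__(self)
            self.marker = marker
            self._seen = set()

        def filter(self, record):
            msg = record.getMessage()
            if self.marker not in msg:
                return True
            if msg in self._seen:
                return False
            self._seen.add(msg)
            return True

    logging.getLogger("nova.openstack.common.policy").addFilter(LogOnceFilter())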

oslo code, by definition, is going to be used a lot, inside of tight
loops

Re: [openstack-dev] [mistral] Using __init__.py files

2014-06-30 Thread Renat Akhmerov
Ok, Oleg, thanks!

> Jun 30, 2014, at 19:26, Oleg Gelbukh  wrote:
> 
> Renat,
> 
> As far as I can tell, it is de-facto standard to not place anything at all to 
> __init__.py across the majority of OpenStack projects.
> 
> --
> Best regards,
> Oleg Gelbukh
> 
> 
>> On Mon, Jun 30, 2014 at 3:50 PM, Renat Akhmerov  
>> wrote:
>> Hi,
>> 
>> What would be your opinion on the question “Should we place any important 
>> functionality into __init__.py files or just use them for package level 
>> initialization and exporting variables from module level to a package 
>> level?”.
>> 
>> I personally would prefer not to keep there anything like class Engine 
>> (which is one of the most important parts of Mistral now). It’s somewhat 
>> confusing to me, especially when I navigate through the project structure. 
>> It’s not a critical urgent thing, of course, but would be nice if you share 
>> your opinion.
>> 
>> What do you guys think?
>> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Using __init__.py files

2014-06-30 Thread Oleg Gelbukh
Renat,

As far as I can tell, it is the de facto standard not to place anything at all
in __init__.py across the majority of OpenStack projects.

--
Best regards,
Oleg Gelbukh


On Mon, Jun 30, 2014 at 3:50 PM, Renat Akhmerov 
wrote:

> Hi,
>
> What would be your opinion on the question “Should we place any important
> functionality into __init__.py files or just use them for package level
> initialization and exporting variables from module level to a package
> level?”.
>
> I personally would prefer not to keep there anything like class Engine
> (which is one of the most important parts of Mistral now). It’s somewhat
> confusing to me, especially when I navigate through the project structure.
> It’s not a critical urgent thing, of course, but would be nice if you share
> your opinion.
>
> What do you guys think?
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Using __init__.py files

2014-06-30 Thread Renat Akhmerov
Hi,

What would be your opinion on the question “Should we place any important 
functionality into __init__.py files or just use them for package level 
initialization and exporting variables from module level to a package level?”.

I personally would prefer not to keep there anything like class Engine (which 
is one of the most important parts of Mistral now). It’s somewhat confusing to 
me, especially when I navigate through the project structure. It’s not a 
critical urgent thing, of course, but would be nice if you share your opinion.

What do you guys think?

Renat Akhmerov
@ Mirantis Inc.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting - 06/30/2014

2014-06-30 Thread Renat Akhmerov
Hi,

This is a reminder that we’ll have a community meeting today at 
#openstack-meeting at 16.00 UTC. The agenda:
- Review action items
- Current status (quickly by team members)
- Further plans
- Open discussion

You can also find the agenda and the links to the previous meeting minutes and 
logs at https://wiki.openstack.org/wiki/Meetings/MistralAgenda.

Thanks!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-30 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

> On 06/27/2014 04:04 PM, Ihar Hrachyshka wrote:
> On 26/06/14 22:38, Alexei Kornienko wrote:
 Hello Jay,
 
 Benchmark for oslo.messaging is really simple: You create a
 client that sends messages infinitively and a server that
 processes them. After you can use rabbitmq management plugin
 to see average throughput in the queue. Simple example can be
 found here - 
 https://github.com/andreykurilin/profiler-for-oslo-messaging
 
 I've mentioned some of this already in my previous mails but
 here it is again:
 
> "Huge" is descriptive but not quantitative :) Do you have
> any numbers that pinpoint the amount of time that is being
> spent reconstructing and declaring the queues, say,
> compared to the time spent doing transmission?
 I don't have precise slowdown percentage for each issue. I've
 just identified hotspots using cProfile and strace.
 
> This ten times number... is that an estimate or do you have
> hard numbers that show that? Just curious.
 Using a script that I've mentioned earlier I get average
 throughput on my PC ~700 cast calls per second. I've written
 a simple and stupid script that uses kombu directly (in a
 single threaded and synchronous way with single connection)
 It gave me throughput ~8000 messages per second. That's why I
 say the library should work at least 10 times faster.
> It doesn't show that those major issues you've pointed out result
> in such a large drop in message processing speed, though. Maybe there
> are other causes of the slowdown we observe. Nor does it show that
> refactoring of the code will actually help to boost the library
> significantly.
>> It doesn't show that those major issues are the *only* reason for
>> the slowdown. But it shows that those issues are the biggest ones
>> currently visible.

My understanding is that your analysis is mostly based on running a
profiler against the code. Network operations can be bottlenecked in
other places.

You compare a 'simple script using kombu' with a 'script using
oslo.messaging'. You don't compare a script using oslo.messaging before
the refactoring with one after it. The latter would show whether the
refactoring was worth the effort. Your test shows that oslo.messaging
performance sucks, but it's not certain that the hotspots you've
revealed, once fixed, will yield a huge boost.

My concern is that it may turn out that once all the effort to
refactor the code is done, we won't see a major difference. So we need
base numbers, and performance tests would be a great help here.

>> We'll get a major speed boost if we fix them (and possibly
>> discover new issues that would prevent us from reaching full
>> speed).
> 
> Though having some hard numbers from using kombu directly is still
> a good thing. Is it possible that we introduce some performance
> tests into oslo.messaging itself?
>> Some performance tests may be introduced but they would be more
>> like functional tests since they require the setup of an actual
>> messaging server (rabbit, etc.).

Yes. I think we already have some. E.g.
tests/drivers/test_impl_qpid.py attempts to use a local Qpid server
(falling back to a fake server if it's not available). We could create a
separate subtree in the library for functional tests.

>> What do you mean exactly by performance tests? Just testing
>> overall throughput with some basic scenarios, or do you mean finding
>> hotspots in the code?
> 

The former. Once we have some base numbers, we may use them to
consider whether the changes you propose are worth the effort.

> 
 Regards, Alexei Kornienko
 
 
 On 06/26/2014 11:22 PM, Jay Pipes wrote:
> Hey Alexei, thanks for sharing your findings. Comments
> inline.
> 
> On 06/26/2014 10:08 AM, Alexei Kornienko wrote:
>> Hello,
>> 
>> Returning to the performance issues of oslo.messaging, I've
>> found the 2 biggest issues in the existing implementation of
>> the rabbit (kombu) driver:
>> 1) For almost every message sent/received a new object
>> of the Consumer/Publisher class is created. And each object
>> of this class tries to declare its queue even if it's
>> already declared. 
>> https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_rabbit.py#L159
>> This causes a huge slowdown.
> "Huge" is descriptive but not quantitative :) Do you have
> any numbers that pinpoint the amount of time that is being
> spent reconstructing and declaring the queues, say,
> compared to the time spent doing transmission?
> 
>> 2) With issue #1 fixed (I've applied a small hack to
>> fix it in my repo) the next big issue arises. For every
>> rpc message received a reply is sent when processing is
>> done (it seems that a reply is sent even for "cast" calls,
>> which is really strange to me). Reply sent is using
>> connect

Re: [openstack-dev] [rubick] Proposal to make py33 job voting for stackforge/rubick

2014-06-30 Thread Oleg Gelbukh
Hi,

To proceed with this, I have sent a (presumably) appropriate change to
review: https://review.openstack.org/#/c/103516/

--
Best regards,
Oleg Gelbukh



On Fri, Jun 27, 2014 at 6:40 PM, Jay Pipes  wrote:

> Sure, why not? :)
>
>
> On 06/27/2014 06:25 AM, Oleg Gelbukh wrote:
>
>> Hello,
>>
>> As our commits have consistently passed py33 tests for the last month
>> (although not so many changes were made), I propose to enable py33 job
>> voting on the stackforge/rubick repository.
>>
>> What do you think?
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Sean Dague
On 06/29/2014 09:39 AM, Joshua Hesketh wrote:
> On 6/28/14 10:40 AM, James E. Blair wrote:
>> An alternate approach would be to have third-party CI systems register
>> jobs with OpenStack's Zuul rather than using their own account.  This
>> would mean only a single report of all jobs (upstream and 3rd-party)
>> per-patchset.  It significantly reduces clutter and makes results more
>> accessible -- but even with one system we've never actually wanted to
>> have Jenkins results in comments, so I think one of the other options
>> would be preferred.  Nonetheless, this is possible with a little bit of
>> work.
> 
> I agree this isn't the preferred solution, but I disagree with the
> "little bit of work". This would require CI systems registering with
> gearman, which would mean security issues. The biggest problem with this
> though is that zuul would be stuck waiting for results from 3rd parties,
> which often have very slow return times.

Right, one of the other issues is the quality of the CI results varies
as well.

I think one of the test-result burn-out issues right now is that results
are too rolled up as it is. For instance, a docs-only change gets Tempest
results, which humans know are irrelevant, but Jenkins insists they
aren't. I think that if we rolled up more information, and waited longer,
we'd be in a worse state.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] How to filter out meters whose resources have been deleted?

2014-06-30 Thread Ke Xia
Hi,

As time goes on, the meter list will grow huge, and meters whose
resources have been deleted may be useless to me. Can I filter them out
of the meter list?

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [messaging] 'retry' option

2014-06-30 Thread Gordon Sim

On 06/28/2014 10:49 PM, Mark McLoughlin wrote:

On Fri, 2014-06-27 at 17:02 +0100, Gordon Sim wrote:

A question about the new 'retry' option. The doc says:

  By default, cast() and call() will block until the
  message is successfully sent.

What does 'successfully sent' mean here?


Unclear, ambiguous, probably driver dependent etc.

The 'blocking' we're talking about here is establishing a connection
with the broker. If the connection has been lost, then cast() will block
until the connection has been re-established and the message 'sent'.


Understood, but to my mind, that is really an implementation detail.


  Does it mean 'written to the wire' or 'accepted by the broker'?

For the impl_qpid.py driver, each send is synchronous, so it means
accepted by the broker[1].

What does the impl_rabbit.py driver do? Does it just mean 'written to
the wire', or is it using RabbitMQ confirmations to get notified when
the broker accepts it (standard 0-9-1 has no way of doing this).


I don't know, but it would be nice if someone did take the time to
figure it out and document it :)


Having googled around a bit, it appears that kombu v3.* has a 
'confirm_publish' transport option when using the 'pyamqp' transport. 
That isn't available in the 2.* versions, which appear to be what is 
used in oslo.messaging, and I can't find that option specified anywhere 
in the oslo.messaging codebase either.
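
For reference, enabling it in kombu 3.x would look something like this (a 
sketch, not something oslo.messaging does today; the broker URL is 
illustrative):

    from kombu import Connection

    # 'confirm_publish' asks the pyamqp transport to use RabbitMQ
    # publisher confirms, so publish() returns only once the broker
    # has accepted the message.
    conn = Connection('pyamqp://guest:guest@localhost//',
                      transport_options={'confirm_publish': True})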


Running a series of casts using the latest impl_rabbit.py driver and 
examining the data on the wire also shows no confirms being sent.


So for impl_rabbit, the send is not acknowledged, but the delivery to 
consumers is. For impl_qpid it's the other way round; the send is 
acknowledged but the delivery to consumers is not (though a prefetch of 
1 is used, limiting the loss to one message).



Seriously, some docs around the subtle ways that the drivers differ from
one another would be helpful ... particularly if it exposed incorrect
assumptions API users are currently making.


I'm happy to try and contribute to that.


If the intention is to block until accepted by the broker, that has
obvious performance implications. On the other hand, if it means block
until written to the wire, what is the advantage of that? Was that a
deliberate feature or perhaps just an accident of implementation?

The use case for the new parameter, as described in the git commit,
seems to be motivated by wanting to avoid the blocking when sending
notifications. I can certainly understand that desire.

However, notifications and casts feel like inherently asynchronous
things to me, and perhaps having/needing the synchronous behaviour is
the real issue?


It's not so much about sync vs async as about the failure mode. By
default, if we lose our connection with the broker, we wait until we can
re-establish it rather than throwing exceptions (requiring the API
caller to have its own retry logic) or quietly dropping the message.
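
For illustration, a caller that prefers failing fast over blocking could 
use the new parameter like this (a sketch against the oslo.messaging API 
of the time; topic and payload are illustrative, and per the commit 
retry=None retries forever while retry=0 means a single attempt):

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='metering')
    # retry=0: one attempt; if the broker is unreachable the cast
    # raises instead of blocking until the connection comes back.
    client = messaging.RPCClient(transport, target, retry=0)
    client.cast({}, 'record_metering_data', data={'cpu_util': 0.4})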


Even when you have no failure, your calling thread has to wait until the 
point the send is deemed successful before returning. So it is 
synchronous with respect to whatever that success criterion is.


In the case where success is deemed to be acceptance by the broker 
(which is the case for the impl_qpid.py driver at present, whether 
intentional or not), the call is fully synchronous.


If on the other hand success is merely writing the message to the wire, 
then any failure may well cause message loss regardless of the retry 
option. The reconnect and retry in this case is only of limited value. 
It can avoid certain losses, but not others.



The use case for ceilometer is to allow its RPCPublisher to have a
publishing policy - block until the samples have been sent, queue (in an
in-memory, fixed-length queue) if we don't have a connection to the
broker, or drop it if we don't have a connection to the broker.

   https://review.openstack.org/77845

I do understand that the ambiguity around what message delivery
guarantees are implicit in cast() isn't ideal, but that's not what
adding this 'retry' parameter was about.


Sure, I understand that. The retry option is necessitated by an 
(existing) implicit behaviour. However, in my view that behaviour is 
implementation specific and of limited value in terms of the semantic 
contract of the call.


--Gordon.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]a problem about the implement of limit-volume-copy-bandwidth

2014-06-30 Thread Yuzhou (C)
Hi stackers,

I found some problems with the current implementation of 
limit-volume-copy-bandwidth (this patch was merged last week).

Firstly, assume that I configure volume_copy_bps_limit=10M. If the path 
is a block device, cgroup blkio can limit the copy bandwidth separately 
for every volume.
But if the path is a regular file, according to the current 
implementation, cgroup blkio has to limit the total copy bandwidth for 
all volumes on the disk device on which the file lies.
The reason is:
In cinder/utils.py, the method get_blkdev_major_minor:

elif lookup_for_file:
# lookup the mounted disk which the file lies on
out, _err = execute('df', path)
devpath = out.split("\n")[1].split()[0]
return get_blkdev_major_minor(devpath, False)

If the method copy_volume is invoked concurrently, the copy bandwidth for 
each volume is less than 10M. In this case, the meaning of the param 
volume_copy_bps_limit in cinder.conf is different.

Secondly, in NFS, the result of the cmd 'df' is like this:
[root@yuzhou yuzhou]# df /mnt/111
Filesystem 1K-blocks  Used Available Use% Mounted on
186.100.8.144:/mnt/nfs_storage   51606528  14676992  34308096  30% /mnt
I think the method get_blkdev_major_minor cannot deal with the devpath 
'186.100.8.144:/mnt/nfs_storage', i.e. it cannot limit volume copy 
bandwidth in the NFS driver.

So I think maybe we should modify the current implementation to make sure 
the copy bandwidth for every volume meets the configuration requirement.
I suggest we use a loop device associated with the regular file (losetup 
/dev/loop0 /mnt/volumes/vm.qcow2),
then limit the bps of the loop device (cgset -r 
blkio.throttle.write_bps_device="7:0 1000" test).
After copying the volume, detach the loop device (losetup --detach /dev/loop0).
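
A rough sketch of that flow in Python (cgroup creation and error handling 
are elided; the helper names and the return format of 
get_blkdev_major_minor are assumptions):

    from cinder import utils

    def copy_with_bps_limit(src, dest_file, bps_limit, cgroup):
        # Attach the regular file to a free loop device so blkio
        # throttling can target it as a block device.
        out, _ = utils.execute('losetup', '--find', '--show', dest_file,
                               run_as_root=True)
        loopdev = out.strip()
        try:
            dev = utils.get_blkdev_major_minor(loopdev)  # e.g. '7:0'
            utils.execute('cgset', '-r',
                          'blkio.throttle.write_bps_device=%s %d'
                          % (dev, bps_limit), cgroup, run_as_root=True)
            utils.execute('cgexec', '-g', 'blkio:%s' % cgroup, 'dd',
                          'if=%s' % src, 'of=%s' % loopdev,
                          'oflag=direct', run_as_root=True)
        finally:
            utils.execute('losetup', '--detach', loopdev,
                          run_as_root=True)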

Any suggestions about my proposed improvement?

Thanks!

Zhou Yu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-30 Thread Michael Kerrin
I am trying to finish off https://review.openstack.org/#/c/90134 - percona 
xtradb cluster for Debian-based systems.

My reading of this thread is that on Red Hat systems I can error out when 
trying to install Percona and tell users to use MariaDB instead, since 
Percona isn't supported there. Is this correct?

Michael

On Friday 27 June 2014 16:49:58 James Slagle wrote:
> On Fri, Jun 27, 2014 at 4:13 PM, Clint Byrum  wrote:
> > Excerpts from James Slagle's message of 2014-06-27 12:59:36 -0700:
> >> Things are a bit confusing right now, especially with what's been
> >> proposed.  Let me try and clarify (even if just for my own sake).
> >> 
> >> Currently the choices offered are:
> >> 
> >> 1. mysql percona with the percona tarball
> > 
> > Percona Xtradb Cluster, not "mysql percona"
> > 
> >> 2. mariadb galera with mariadb.org packages
> >> 3. mariadb galera with rdo packages
> >> 
> >> And, we're proposing to add
> >> 
> >> 4. mysql percona with percona packages:
> >> https://review.openstack.org/#/c/90134 5. mariadb galera with fedora
> >> packages https://review.openstack.org/#/c/102815/
> >> 
> >> 4 replaces 1, but only for Ubuntu/Debian, it doesn't work on Fedora/RH
> >> 5 replaces 3 (neither of which work on Ubuntu/Debian, obviously)
> >> 
> >> Do we still need 1? Fedora/RH + percona tarball.  I personally don't
> >> think so.
> >> 
> >> Do we still need 2? Fedora/RH or Ubuntu/Debian with galera packages
> >> from maraidb.org. For the Fedora/RH case, I doubt it, people will just
> >> use 5.
> >> 
> >> 3 will be gone (replaced by 5).
> >> 
> >> So, yes, I'd like to see 5 as the default for Fedora/RH and 4 as the
> >> default for Ubuntu/Debian, and both those tested in CI. And get rid of
> >> (or deprecate) 1-3.
> > 
> > I'm actually more confused now than before I read this. The use of
> > numbers is just making my head spin.
> 
> There are 5 choices, some of which are not totally clear.  Hence the
> need to clean things up.
> 
> > It can be stated this way I think:
> > 
> > On RPM systems, use MariaDB Galera packages.
> > 
> > If packages are in the distro, use distro packages. If packages are
> > not in the distro, use RDO packages.
> 
> There won't be a need to install from the RDO repositories. Mariadb
> galera packages are in the main Fedora package repositories, and for
> RHEL, they're in the epel repositories.
> 
> > On DEB systems, use Percona XtraDB Cluster packages.
> > 
> > If packages are in the distro, use distro packages. If packages are
> > not in the distro, use upstream packages.
> > 
> > If anything doesn't match those principles, it is a bug.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-06-30 Thread Luke Gorrie
Howdy!

Paging other 3rd party CI operators...

I would like to run a simple and robust 3rd party CI. Simple as in a small
number of moving parts, robust as in unlikely to make mistakes due to
unexpected problems.

I'm imagining:

- 100 lines of shell for the implementation.

- Minimum of daemons. (Just Jenkins? Ideally not even that...)

- Robust detection of operational problems (CI bugs) vs system-under-test
problems (changes that should be voted against).

- Effective notifications on all errors or negative votes.

Does anybody already have an implementation like this that they would like
to share? (Is anybody else wanting something in this direction?)

I've been iterating on 3rd party CI mechanisms (currently on my 3rd
from-scratch setup) and I have not been completely satisfied with any of
them. I would love to hear from someone who has a minimalist implementation
that they are happy with :).

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-06-30 Thread Daniel P. Berrange
On Sat, Jun 28, 2014 at 08:26:44AM -0500, Matt Riedemann wrote:
> 
> 
> On 6/27/2014 7:35 AM, Daniel P. Berrange wrote:
> >On Fri, Jun 27, 2014 at 07:40:51AM -0400, Sean Dague wrote:
> >>It's clear that lots of projects want 3rd Party CI information on
> >>patches. But it's also clear that 6 months into this experiment with a
> >>lot of 3rd Party CI systems, the Gerrit UI is really not great for this.
> >
> >That's an understatement about the UI :-)
> >
> >>It seems what we actually want is a dashboard of these results. We want
> >>them available when we go to Gerrit, but we don't want them in Gerrit
> >>itself.
> >>
> >>What if 3rd Party CI didn't vote in Gerrit? What if it instead published
> >>to some 3rd party test reporting site (a thing that doesn't yet exist).
> >>Gerrit has the facility so that we could inject the dashboard content
> >>for this in Gerrit in a little table somewhere, but the data would
> >>fundamentally live outside of Gerrit. It would also mean that all the
> >>aggregate reporting of 3rd Party CI that's being done in custom gerrit
> >>scripts, could be integrated directly into such a thing.
> >
> >Agreed, it would be a great improvement in usability if we stopped all
> >CI systems, including our default Jenkins, from ever commenting on
> >reviews. At most gating CIs should +1/-1.  Having a table of results
> >displayed, pulling the data from an external result tracking system
> >would be a great idea.
> >
> >Even better if this external system had a nice button you can press
> >to trigger re-check, so we can stop using comments for that too.
> 
> I would disagree with this idea since it's equivalent to 'recheck no bug'
> and that's naughty, because then we don't track race bugs as well.

It could easily have a text field for entering a bug number. The point
is to stop adding comments to gerrit that aren't related to code review.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Performance of security group

2014-06-30 Thread Édouard Thuleau
Yes, using a fanout topic per VNI is another big improvement we could
make.
That would fit perfectly with the l2-pop mechanism driver.
Of course, that needs a specific call on start/re-sync to get the initial
state. That is actually done by the l2-pop MD if the uptime of an agent is
less than the 'agent_boot_time' flag [1].

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L181
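
For illustration, a sketch of what such a scoped fanout could look like 
on the notifier side (the topic scheme and method name are hypothetical; 
the call style mirrors the snippet quoted below):

    # Scope the l2-pop fanout topic by network (or VNI) so only agents
    # hosting ports on that network consume the update.
    def _fanout_fdb_update(self, context, method, network_id, fdb_entries):
        topic = '%s.%s' % (self.topic_l2pop_update, network_id)
        self.fanout_cast(context,
                         self.make_msg(method, fdb_entries=fdb_entries),
                         topic=topic)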

Édouard.


On Fri, Jun 27, 2014 at 3:43 AM, joehuang  wrote:

> Interesting idea to optimize the performance.
>
> Not only security group rules lead to fanout message load; we need to
> review and check whether all fanout usage in Neutron could be optimized.
>
> For example, L2 population:
>
> self.fanout_cast(context,
>   self.make_msg(method, fdb_entries=fdb_entries),
>   topic=self.topic_l2pop_update)
>
> it would be better to use network+l2pop_update as the topic, so that only
> the agents with VMs running on that network will consume the message.
>
> Best Regards
> Chaoyi Huang( Joe Huang)
>
> -Original Message-
> From: Miguel Angel Ajo Pelayo [mailto:mangel...@redhat.com]
> Sent: 27 June 2014 1:33
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] Performance of security group
>
> - Original Message -
> > @Nachi: Yes, that could be a good improvement to factorize the RPC mechanism.
> >
> > Another idea:
> > What about creating an RPC topic per security group (what about RPC
> > topic scalability, though?) to which an agent subscribes if one of its
> > ports is associated with the security group?
> >
> > Regards,
> > Édouard.
> >
> >
>
>
> Hmm, Interesting,
>
> @Nachi, I'm not sure I fully understood:
>
>
> SG_LIST [ SG1, SG2]
> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
> port[SG_ID1, SG_ID2], port2 , port3
>
>
> We probably also need to include the SG_IP_LIST = [SG_IP1, SG_IP2] ...
>
>
> and let the agent do all the combination work.
>
> Something like this could make sense?
>
> Security_Groups = {SG1: {IPs: [], RULES: []},
>                    SG2: {IPs: [], RULES: []}}
>
> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
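>
> A sketch of that agent-side combination work (a generator; the key names
> follow the structures above and are purely illustrative):
>
>     def expand_port_rules(ports, security_groups):
>         # Rebuild the per-port view locally instead of receiving it
>         # pre-expanded over RPC.
>         for port_id, sg_ids in ports.items():
>             rules, ips = [], []
>             for sg_id in sg_ids:
>                 rules.extend(security_groups[sg_id]['RULES'])
>                 ips.extend(security_groups[sg_id]['IPs'])
>             yield port_id, rules, ips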
>
>
> @Edouard, actually I like the idea of having the agents subscribe
> to the security groups they have ports on... That would remove the need
> to include all the security group information on every call...
>
> But we would need another call to get the full information for a set of
> security groups at start/resync if we don't already have it.
>
>
> >
> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
> wrote:
> >
> >
> >
> > hi Miguel Ángel,
> > I very much agree with you on the following points:
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > -- this can reduce the load on the compute node.
> > >  * rpc communication mechanisms.
> > -- this can reduce the load on the neutron server
> > can you help me to review my BP specs?
> >
> >
> >
> >
> >
> >
> >
> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com
> >
> > wrote:
> > >
> > >  Hi, it's a very interesting topic. I was getting ready to raise
> > > the same concerns about our security groups implementation;
> > > shihanzhang, thank you for starting this topic.
> > >
> > >  Not only at the low level, where (with our default security group
> > > rules -allow all incoming from 'default' sg-) the iptables rules
> > > will grow as ~X^2 for a tenant, but also the
> > > "security_group_rules_for_devices" rpc call from ovs-agent to
> > > neutron-server grows to message sizes of >100MB, generating serious
> > > scalability issues or timeouts/retries that totally break the
> > > neutron service.
> > >
> > >   (example trace of that RPC call with a few instances
> > > http://www.fpaste.org/104401/14008522/ )
> > >
> > >  I believe that we also need to review the RPC calling mechanism
> > > for the OVS agent here; there are several possible approaches to
> > > breaking down (and/or CIDR-compressing) the information we return
> > > via this api call.
> > >
> > >
> > >   So we have to look at two things here:
> > >
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > >  * rpc communication mechanisms.
> > >
> > >   Best regards,
> > >Miguel Ángel.
> > >
> > >- Mensaje original -
> > >
> > >> Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
> > >> It is also based on the rule-set mechanism.
> > >> The issue with that proposal is that it has only been stable since the
> > >> beginning of the year, and only on Linux kernel 3.13.
> > >> But there are lots of pros I don't list here (overcoming iptables
> > >> limitations, efficient rule updates, rule sets, standardization of
> > >> netfilter commands...).
> > >
> > >> Édouard.
> > >
> > >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com >
> wrote:
> > >
> > >> > we have done some tests, but got a different result: the performance
> > >> > is nearly the same for empty and 5k rules in iptables, bu