Re: [openstack-dev] [Mistral] Event Subscription

2014-11-17 Thread Renat Akhmerov
Ok.

Here’s what I think.

Mostly I like the solution you’re proposing. Some comments/questions:
- We discussed this with Dmitry a couple of months ago and concluded
that we don’t necessarily need a separate endpoint, CRUD operations etc. (even
though my initial BP had exactly this idea). One of the reasons was that we can
just attach the required information about events we’re interested in to the “start
workflow” request (technically it may be that “params” argument in the
“start_workflow” method) so that you don’t need to make 2 requests: 1)
subscribe to the required events, 2) start the workflow. However, I think we could
combine these two things: having a separate endpoint and being able to attach
“callback” information (the naming should be different though) to the “start
workflow” request. The first option is useful when I’m not interested in one
particular workflow. For example, I as a Mistral client may just want to monitor
what’s happening with my multiple workflows (not certain executions but rather
all executions of specific workflows) and react to them somehow. The
second one is just a shortcut that avoids one extra request to make the event
subscription.
- Having only decorators is not enough, even though I do like that idea. For
instance, the workflow-completion event probably can’t be handled by a
decorator, but I’m not 100% sure; it may actually depend on how smart that
decorator is. Anyway, just a thought.
- Do you think we need a separate worker/executor connected via MQ to make all
the notifications? Would it be simpler to just make those HTTP calls directly
from the engine w/o distributing them? It seems like the distribution overhead
may be higher than making the call itself. Btw, we’re now thinking about the
concept of ‘local actions’ that don’t have to be distributed across executors.
The reason is the same: the distribution overhead is much higher than just
running them right in the engine process. So I would like to know what you
think here.
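To make the “shortcut” option concrete, here is a minimal sketch of what attaching callback information to the start-workflow request could look like. All key names here (“notify”, “events”, “callback_url”) are illustrative assumptions, not an agreed Mistral API.

```python
# Sketch only: event-subscription ("callback") information piggybacked on the
# start-workflow request via the "params" argument, so the client avoids a
# second subscribe request. Key names are assumptions, not Mistral's API.
def start_workflow(workflow_name, wf_input, params=None):
    # In real Mistral this would go through the engine; here we just echo
    # back what a request payload could look like.
    return {
        "workflow_name": workflow_name,
        "input": wf_input,
        "params": params or {},
    }

request = start_workflow(
    "deploy_app",
    {"vm_count": 3},
    params={
        "notify": {
            "events": ["WORKFLOW_SUCCEEDED", "WORKFLOW_FAILED"],
            "callback_url": "http://client.example.com/mistral-events",
        }
    },
)
print(request["params"]["notify"]["events"])
```

A separate subscription endpoint would carry the same "notify" payload, just decoupled from any one execution.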

Thanks

Renat Akhmerov
@ Mirantis Inc.



> On 12 Nov 2014, at 22:21, W Chan  wrote:
> 
> Nikolay,
> 
> You're right.  We will need to store the events in order to re-publish.  How 
> about a separate Event model?  The events are written to the DB by the same 
> worker that publishes the event.  The retention policy for these events is 
> then managed by a config option.
> 
> Winson
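The Event model plus retention-policy idea quoted above could be sketched roughly as follows. All names here (the Event fields, the retention constant) are assumptions for illustration, not actual Mistral code.

```python
import datetime

# Rough sketch of the proposed Event record plus a retention sweep driven by
# a config option. Field names and the retention value are assumptions.
EVENT_RETENTION_HOURS = 48  # would come from a real config option in practice

class Event(object):
    def __init__(self, execution_id, event_type, created_at=None):
        self.execution_id = execution_id
        self.event_type = event_type
        self.created_at = created_at or datetime.datetime.utcnow()

def expired_events(events, now=None):
    """Return events older than the configured retention window."""
    now = now or datetime.datetime.utcnow()
    cutoff = now - datetime.timedelta(hours=EVENT_RETENTION_HOURS)
    return [e for e in events if e.created_at < cutoff]

now = datetime.datetime.utcnow()
old = Event("exec-1", "WORKFLOW_SUCCEEDED",
            created_at=now - datetime.timedelta(hours=72))
fresh = Event("exec-2", "WORKFLOW_FAILED", created_at=now)
print([e.execution_id for e in expired_events([old, fresh], now=now)])
```

A periodic task in the worker that writes the events could then delete whatever `expired_events` returns.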
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Tuesday, 18 November 2014, Mehdi Abaakouk wrote:

>
>
>
>
> On 2014-11-17 22:53, Doug Hellmann wrote:
>
>  That’s a good goal, but that’s not what I had in mind for in-tree
>  functional tests.


>>> An interesting idea that might be useful that taskflow implemented/has
>>> done...
>>>
>>> The examples @
>>> https://github.com/openstack/taskflow/tree/master/taskflow/examples
>>> all get tested during unit test runs to ensure they work as expected.
>>> This seems close to your 'simple app' (where app is replaced with
>>> example); it might be nice to have a similar approach for
>>> oslo.messaging that has 'examples', apps that get run to probe the
>>> functionality of oslo.messaging (as well as being useful as
>>> documentation to show people how to use it, which is the other use
>>> taskflow has for these examples).
>>>
>>> The hacky example tester could likely be shared (or refactored, since
>>> it probably needs it):
>>> https://github.com/openstack/taskflow/blob/master/taskflow/tests/test_examples.py#L91
>>>
>>
>> Sure, that would be a good way to do it, too.
>>
>
> We already have some work done along those lines. Gordon Sim has written
> some tests that use only the public API to test a driver:
> https://github.com/openstack/oslo.messaging/blob/master/tests/functional/test_functional.py
>
> You just have to set the TRANSPORT_URL environment variable to start them.
>
> I'm working on running them on a devstack VM for the rabbit, qpid and
> amqp1.0 drivers; the infra patch that adds experimental jobs has just landed:
> https://review.openstack.org/#/c/130370/
>
>
Amazing work, Mehdi.


> I have two other patches waiting to make it work:
> * https://review.openstack.org/#/c/130370/
> * https://review.openstack.org/#/c/130437/
>
>
Will take a look at them asap.


> So if zmq driver support in devstack is fixed, we can easily add a new job
> to run them in the same way.
>
>
Btw, this is a good question. I will take a look at the current state of zmq in
devstack.


>
> ---
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
>
>


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Mehdi Abaakouk





On 2014-11-17 22:53, Doug Hellmann wrote:

That’s a good goal, but that’s not what I had in mind for in-tree 
functional tests.




An interesting idea that might be useful that taskflow implemented/has 
done...


The examples @ 
https://github.com/openstack/taskflow/tree/master/taskflow/examples 
all get tested during unit test runs to ensure they work as expected. 
This seems close to your 'simple app' (where app is replaced with 
example); it might be nice to have a similar approach for 
oslo.messaging that has 'examples', apps that get run to 
probe the functionality of oslo.messaging (as well as being useful as 
documentation to show people how to use it, which is the other use 
taskflow has for these examples).


The hacky example tester could likely be shared (or refactored, since 
it probably needs it), 
https://github.com/openstack/taskflow/blob/master/taskflow/tests/test_examples.py#L91
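A stripped-down version of that example-runner idea could look like the sketch below. The flat `examples/` layout of standalone scripts is an assumption about project structure; taskflow's real tester also compares expected output, which this omits.

```python
import glob
import os.path
import subprocess
import sys

# Sketch of taskflow's "run every example during the unit tests" approach:
# execute each example script in a subprocess and collect any failures.
def run_examples(examples_dir):
    failures = {}
    for path in sorted(glob.glob(os.path.join(examples_dir, "*.py"))):
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=60)
        if proc.returncode != 0:
            failures[os.path.basename(path)] = proc.stderr
    return failures
```

A unit test would then simply assert that `run_examples("examples")` returns an empty dict.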


Sure, that would be a good way to do it, too.


We already have some work done along those lines. Gordon Sim has written 
some tests that use only the public API to test a driver: 
https://github.com/openstack/oslo.messaging/blob/master/tests/functional/test_functional.py


You just have to set the TRANSPORT_URL environment variable to start 
them.
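The TRANSPORT_URL switch amounts to a simple environment-variable gate in the test setup. The sketch below mirrors the pattern only; it is not the actual oslo.messaging test code.

```python
import os
import unittest

# Sketch of the TRANSPORT_URL gating pattern: the functional tests only run
# when the environment points them at a real broker; otherwise they skip.
class DriverFunctionalTest(unittest.TestCase):
    def setUp(self):
        self.url = os.environ.get("TRANSPORT_URL")
        if not self.url:
            self.skipTest("TRANSPORT_URL not set; skipping functional test")

    def test_transport_reachable(self):
        # A real test would create a transport from self.url and do an RPC
        # round trip; here we only check the URL has a scheme.
        self.assertIn("://", self.url)
```

Running with, say, `TRANSPORT_URL=rabbit://guest:guest@localhost:5672//` exercises the driver; without it, the suite skips cleanly.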


I'm working on running them on a devstack VM for the rabbit, qpid and 
amqp1.0 drivers; the infra patch that adds experimental jobs has just landed: 
https://review.openstack.org/#/c/130370/


I have two other patches waiting to make it work:
* https://review.openstack.org/#/c/130370/
* https://review.openstack.org/#/c/130437/

So if zmq driver support in devstack is fixed, we can easily add a new 
job to run them in the same way.



---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht




Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-17 Thread Richard Jones
On 18 November 2014 10:59, Richard Jones  wrote:

> On 17 November 2014 21:54, Radomir Dopieralski 
> wrote:
> > - Bower in the development environment,
> > - Bower configuration file in two copies, one for global-requirements,
> > and one for Horizon's local requirements. Plus a gate job that makes
> > sure no new library or version gets included in Horizon's copy before
> > getting into global-requirements,
>
> Could you perhaps elaborate on this? How do you see the workflow working
> here?
>
> Given that Horizon already integrates with xstatic, it would be messy (and
> potentially confusing) for it to *also* integrate with
> bower. I was envisaging us creating a tool which generates xstatic packages
> from bower packages. I'm not the first to think along these lines
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031042.html
>
> I will be looking into creating such a tool today.
>

I wrote the tool today, and you can find it here:

https://github.com/r1chardj0n3s/flaming-shame

(github auto-named the repository for me - it's like it KNOWS)

I've proposed to Thomas Waldmann that this be included in the xstatic
package.

It doesn't handle dependencies at all - I'm not sure it should or could
sensibly.
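The core of such a converter is essentially a metadata translation from bower.json to the module-level constants an xstatic package carries. The sketch below is a guess at that shape: the attribute names follow the general xstatic convention but should be treated as assumptions, and (matching the note above) dependencies are deliberately ignored.

```python
import json

# Rough sketch of a bower-to-xstatic metadata translation. Attribute names
# are assumptions based on the xstatic packaging convention; dependency
# handling is intentionally absent.
def xstatic_metadata(bower_json_text):
    meta = json.loads(bower_json_text)
    name = meta["name"]
    return {
        "NAME": name.replace("-", "_"),
        "PACKAGE_NAME": "XStatic-%s" % name,
        "VERSION": meta.get("version", "0.0.0"),
        "HOMEPAGE": meta.get("homepage", ""),
    }

print(xstatic_metadata('{"name": "angular-mock", "version": "1.3.0"}'))
```

The remaining work of a real tool is mostly file plumbing: copying the bower package's dist files into the generated package's data directory.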


 Richard


Re: [openstack-dev] [Ironic] Proposing new meeting times

2014-11-17 Thread Peeyush Gupta
Hi devananda,

I would be comfortable with both timings, though I like the second
option better. I am from India, so the current timing is a little
inconvenient for us!

On 11/18/2014 06:30 AM, Devananda van der Veen wrote:
> Hi all,
>
> As discussed in Paris and at today's IRC meeting [1] we are going to be
> alternating the time of the weekly IRC meetings to accommodate our
> contributors in EMEA better. No time will be perfect for everyone, but as
> it stands, we rarely (if ever) see our Indian, Chinese, and Japanese
> contributors -- and it's quite hard for any of the AU / NZ folks to attend.
>
> I'm proposing two sets of times below. Please respond with a "-1" vote to
> an option if that option would cause you to miss ALL meetings, or a "+1"
> vote if you can magically attend ALL the meetings. If you can attend,
> without significant disruption, at least one of the time slots in a
> proposal, please do not vote either for or against it. This way we can
> identify a proposal which allows everyone to attend at a minimum 50% of the
> meetings, and preferentially weight towards one that allows more
> contributors to attend two meetings.
>
> This link shows the local times in some major countries / timezones around
> the world (and you can customize it to add your own).
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5
>
> For reference, the current meeting time is 1900 UTC.
>
> Option #1: alternate between Monday 1900 UTC && Tuesday 0900 UTC.  I like
> this because 1900 UTC spans all of US and western EU, while 0900 combines
> EU and EMEA. Folks in western EU are "in the middle" and can attend all
> meetings.
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5
>
>
> Option #2: alternate between Monday 1700 UTC && Tuesday 0500 UTC. I like
> this because it shifts the current slot two hours earlier, making it easier
> for eastern EU to attend without excluding the western US, and while 0500
> UTC is not so late that US west coast contributors can't attend (it's 9PM
> for us), it is harder for western EU folks to attend. There's really no one
> in the middle here, but there is at least a chance for US west coast and
> EMEA to overlap, which we don't have at any other time.
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5
>
>
> I'll collate all the responses to this thread during the week, ahead of
> next week's regularly-scheduled meeting.
>
> -Devananda
>
> [1]
> http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html
>
>
>

-- 
Peeyush Gupta
gpeey...@linux.vnet.ibm.com



Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-17 Thread Duncan Thomas
Is the new driver drop-in compatible with the old one? If not, can existing
systems be upgraded to the new driver via some manual steps, or is it
basically a completely new driver with similar functionality?

On 17 November 2014 07:08, Drew Fisher  wrote:
> We (here at Oracle) have a replacement for this driver which includes
> local ZFS, iSCSI and FC drivers all with ZFS as the underlying driver.
> We're in the process of getting CI set up so we can contribute the
> driver upstream along with our ZFSSA driver (which is already in the
> tree).
>
> If anybody has more questions about this, please let me know. The
> driver is in the open for folks to look at and if anybody wants us to
> start upstream integration for it, we'll be happy to do so.
>
> -Drew
>
>
> On 11/16/14, 8:45 PM, Mike Perez wrote:
>> The Open Solaris ZFS driver [1] is currently missing a lot of the minimum
>> features [2] that the Cinder team requires of all drivers. As a result,
>> it's really broken.
>>
>> I wanted to gauge who is using it, and if anyone was interested in
>> fixing the driver. If there is not any activity with this driver, I
>> would like to propose it to be deprecated for removal.
>>
>> [1] - https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/san/solaris.py
>> [2] - http://docs.openstack.org/developer/cinder/devref/drivers.html#minimum-features
>>
>

-- 
Duncan Thomas


Re: [openstack-dev] [Ironic] Proposing new meeting times

2014-11-17 Thread Jay Faulkner


From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Monday, November 17, 2014 5:00 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] Proposing new meeting times

Hi all,

As discussed in Paris and at today's IRC meeting [1] we are going to be 
alternating the time of the weekly IRC meetings to accommodate our contributors 
in EMEA better. No time will be perfect for everyone, but as it stands, we 
rarely (if ever) see our Indian, Chinese, and Japanese contributors -- and it's 
quite hard for any of the AU / NZ folks to attend.

I'm proposing two sets of times below. Please respond with a "-1" vote to an 
option if that option would cause you to miss ALL meetings, or a "+1" vote if 
you can magically attend ALL the meetings. If you can attend, without 
significant disruption, at least one of the time slots in a proposal, please do 
not vote either for or against it. This way we can identify a proposal which 
allows everyone to attend at a minimum 50% of the meetings, and preferentially 
weight towards one that allows more contributors to attend two meetings.

This link shows the local times in some major countries / timezones around the 
world (and you can customize it to add your own).
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

For reference, the current meeting time is 1900 UTC.

Option #1: alternate between Monday 1900 UTC && Tuesday 0900 UTC.  I like this 
because 1900 UTC spans all of US and western EU, while 0900 combines EU and 
EMEA. Folks in western EU are "in the middle" and can attend all meetings.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


Option #2: alternate between Monday 1700 UTC && Tuesday 0500 UTC. I like this 
because it shifts the current slot two hours earlier, making it easier for 
eastern EU to attend without excluding the western US, and while 0500 UTC is 
not so late that US west coast contributors can't attend (it's 9PM for us), it 
is harder for western EU folks to attend. There's really no one in the middle 
here, but there is at least a chance for US west coast and EMEA to overlap, 
which we don't have at any other time.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


+1, I’d be able to attend both these meetings, I believe.

-Jay Faulkner

I'll collate all the responses to this thread during the week, ahead of next 
week's regularly-scheduled meeting.

-Devananda

[1] 
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html


Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Matt Riedemann



On 11/17/2014 6:57 PM, Louis Taylor wrote:

On Tue, Nov 18, 2014 at 10:46:38AM +1030, Christopher Yeoh wrote:

Maybe a MOTD at the top of http://review.openstack.org could help here?  Have
a button that the QA/infra people can hit when everything is broken that puts
up a message there asking people to stop rechecking/submitting patches.


How about elastic recheck showing a message? If a bug is identified as breaking
the world, it shouldn't give a helpful "feel free to leave a 'recheck' comment
to run the tests again" comment when tests fail. That just encourages people to
keep rechecking.






I think this idea has come up before; the problem is knowing how to 
distinguish the sky-is-falling bugs from the other race bugs we know 
about. Thinking out loud, it could be based on the severity of the bug in 
Launchpad, but we have a lot of high/critical race bugs that have been 
around for a long time and they are obviously not breaking the world. We 
could tag bugs (I'm assuming I could get bug tags from the Launchpad API), 
but we'd have to be pretty strict about not abusing the tag just to get 
attention on a bug.
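Once elastic-recheck has a bug's Launchpad metadata, the tag-based gating reduces to a small predicate. In this sketch the "gate-blocking" tag name is invented for illustration, not an established convention; importance values match Launchpad's, and severity alone is deliberately not enough, since many long-lived High/Critical race bugs don't break the world.

```python
# Sketch: only suppress the "leave a 'recheck' comment" hint for bugs
# explicitly tagged as breaking the gate, not for every high-severity bug.
GATE_BLOCKING_TAG = "gate-blocking"  # invented tag name, for illustration

def suppress_recheck_hint(bug_tags, importance):
    # Require both the explicit tag and a high importance, so that merely
    # tagging a bug (or merely marking it Critical) is not enough.
    return GATE_BLOCKING_TAG in bug_tags and importance in ("High", "Critical")

print(suppress_recheck_hint(["gate-blocking"], "Critical"))  # True
print(suppress_recheck_hint(["race"], "Critical"))           # False
```

Requiring both signals is one way to implement the "be strict about not abusing the tag" concern above.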


Other ideas?

--

Thanks,

Matt Riedemann




[openstack-dev] [gantt] Scheduler sub-group meeting agenda 11/18

2014-11-17 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786




[openstack-dev] [testr] import failure diagnostics about to be much better

2014-11-17 Thread Robert Collins
One of the highly reported issues was that when test discovery failed,
testr showed no useful details about the issue.

The root cause of this was an interaction between test *listing*,
which the standard library doesn't formally support (it's on my list to
add it there) and discovery, which only knew how to report failures as
pseudo-tests.

I recently fixed that in the standard library (so it will be in Python
3.5) - but unittest2 0.8.0 has it backported for older Pythons, and
using testtools 1.2.0 or newer will export the needed data for the
import errors to be identified even when not running the tests. A
subunit point release will be needed in order to consume that data,
which will probably be 1.0.0.
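With the fix in place (unittest2 0.8.0, or the standard library from Python 3.5 on), the import traceback is available to the runner instead of vanishing. A small demonstration using the modern standard library:

```python
import pathlib
import tempfile
import unittest

# A module that fails to import during discovery surfaces as a loader error
# carrying the real traceback, rather than an opaque empty listing.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "test_broken.py").write_text("import does_not_exist_xyz\n")

loader = unittest.TestLoader()
suite = loader.discover(str(tmp))

# loader.errors holds the import tracebacks that old testr runs swallowed;
# the suite also gains a synthetic failing test for each broken module.
for err in loader.errors:
    print(err.strip().splitlines()[-1])
```

The last printed line names the missing module, which is exactly the diagnostic that used to be lost.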

I'll send a followup when everything is in releases.

Cheers,
-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] oslo.db 1.1.0 released

2014-11-17 Thread Matt Riedemann



On 11/17/2014 9:36 AM, Victor Sergeyev wrote:

Hello All!

The Oslo team is pleased to announce a new release of the Oslo database
handling library: oslo.db 1.1.0.

List of changes:
$ git log --oneline --no-merges  1.0.2..master
1b0c2b1 Imported Translations from Transifex
9aa02f4 Updated from global requirements
766ff5e Activate pep8 check that _ is imported
f99e1b5 Assert exceptions based on API, not string messages
490f644 Updated from global requirements
8bb12c0 Updated from global requirements
4e19870 Reorganize DbTestCase to use provisioning completely
2a6dbcd Set utf8 encoding for mysql and postgresql
1b41056 ModelsMigrationsSync: Add check for foreign keys
8fb696e Updated from global requirements
ba4a881 Remove extraneous vim editor configuration comments
33011a5 Remove utils.drop_unique_constraint()
64f6062 Improve error reporting for backend import failures
01a54cc Ensure create_engine() retries the initial connection test
26ec2fc Imported Translations from Transifex
9129545 Use fixture from oslo.config instead of oslo-incubator
2285310 Move begin ping listener to a connect listener
7f9f4f1 Create a nested helper function that will work on py3.x
b42d8f1 Imported Translations from Transifex
4fa3350 Start adding a environment for py34/py33
b09ee9a Explicitly depend on six in requirements file
7a3e091 Unwrap DialectFunctionDispatcher from itself.
0928d73 Updated from global requirements
696f3c1 Use six.wraps instead of functools.wraps
8fac4c7 Update help string to use database
fc8eb62 Use __qualname__ if we can
6a664b9 Add description for test_models_sync function
8bc1fb7 Use the six provided iterator mix-in
436dfdc ModelsMigrationsSync:add correct server_default check for Enum
2075074 Add history/changelog to docs
c9e5fdf Add run_cross_tests.sh script

Thanks Andreas Jaeger, Ann Kamyshnikova, Christian Berendt, Davanum
Srinivas, Doug Hellmann, Ihar Hrachyshka, James Carey, Joshua Harlow,
Mike Bayer, Oleksii Chuprykov, Roman Podoliaka for contributing to this
release.

Please report issues to the bug tracker: https://bugs.launchpad.net/oslo.db





And...the nova postgresql opportunistic DB tests are failing quite 
frequently due to some race introduced by the new library version [1].


[1] https://bugs.launchpad.net/oslo.db/+bug/1393633

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova][Cells] Cells subgroup and meeting times

2014-11-17 Thread Matt Riedemann



On 11/17/2014 9:21 AM, Andrew Laski wrote:

Since there have been no stated conflicts or issues with these times yet
we will go with Wednesdays alternating between 1700 and 2200 UTC in
#openstack-meeting-3 which seems to be open at that time.

This week the meeting will be at 1700.  I'll see you all there!



On 11/11/2014 04:04 PM, Andrew Laski wrote:

We had a great discussion on cells at the summit which is captured at
https://etherpad.openstack.org/p/kilo-nova-cells.  One of the tasks we
agreed upon there was to form a subgroup to co-ordinate this effort
and report progress to the Nova meeting regularly.  To that end I
would like to find a meeting time, or more likely alternating times,
that will work for interested parties.

I am proposing Wednesday as the meeting day since it's more open than
Tues/Thurs so finding a meeting room at almost any time should be
feasible.  My opening bid is alternating between 1700 and 2200 UTC.
That should provide options that aren't too early or too late for most
people.  Is this fine for everyone or should it be adjusted a bit?

A meeting room will be picked once we're settled on times.  And I'm
not planning on a meeting this week since we haven't picked times yet
and I haven't had time to put together specs yet and I would like to
start with a discussion on those.







I was going to update the meetings wiki with the new entry, but I see 
you've beaten me to it [1]. Touché.


[1] https://wiki.openstack.org/wiki/Meetings#Nova_Cellsv2_Meeting

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-17 Thread Jay S. Bryant

Drew,

I would say that it would be good to open a Blueprint for this and push 
the code up for review.  Even if you don't have CI ready yet you can 
post results of a driver cert test [1] and we can start reviewing the 
code while you work on finishing up the process of getting your CI running.


Look forward to seeing your code.

Jay

[1] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

On 11/16/2014 11:08 PM, Drew Fisher wrote:

We (here at Oracle) have a replacement for this driver which includes
local ZFS, iSCSI and FC drivers all with ZFS as the underlying driver.
We're in the process of getting CI set up so we can contribute the
driver upstream along with our ZFSSA driver (which is already in the tree).

If anybody has more questions about this, please let me know.  The
driver is in the open for folks to look at and if anybody wants us to
start upstream integration for it, we'll be happy to do so.

-Drew


On 11/16/14, 8:45 PM, Mike Perez wrote:

The Open Solaris ZFS driver [1] is currently missing a lot of the minimum
features [2] that the Cinder team requires with all drivers. As a result, it's
really broken.

I wanted to gauge who is using it, and if anyone was interested in fixing the
driver. If there is not any activity with this driver, I would like to propose
it to be deprecated for removal.

[1] - 
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/san/solaris.py
[2] - 
http://docs.openstack.org/developer/cinder/devref/drivers.html#minimum-features







[openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting reminder (Nov 19 Wednesday 17:00 UTC-)

2014-11-17 Thread Isaku Yamahata
Hi. This is a reminder mail for the weekly servicevm IRC meeting.
From Nov 19, 2014, the timeslot/channel has been changed.
Please be prepared.

Nov 19, 2014 Wednesday 17:00 UTC-
#openstack-meeting-4 on freenode
https://wiki.openstack.org/wiki/Meetings/ServiceVM
-- 
Isaku Yamahata 



Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Joshua Harlow

Good point, we really need a better dependency resolver/installer...

Jeremy Stanley wrote:

On 2014-11-17 16:41:02 -0800 (-0800), Joshua Harlow wrote:

Robert Collins wrote:
[...]

That said, making requirements be capped and auto adjust upwards would
be extremely useful IMO, but its a chunk of work;
  - we need the transitive dependencies listed, not just direct dependencies

Wouldn't a pip install of the requirements.txt from the requirements repo
itself get this? That would tell pip to download all the things and their
transitive dependencies (aka step #1).


  - we need a thing to find possible upgrades and propose bumps

This is an analysis of the $ pip freeze after installing into that
virtualenv (aka step #2)?

[...]

Something to keep in mind here is that just asking pip to install a
list of 150 packages at particular versions doesn't actually get you
that. You can't ever really cap your transitive dependencies
effectively because they are transitive, so pip will ignore what
you've asked for if some other package you subsequently install
wants a different version of the same. For this reason, the result
is also highly dependent on the order in which you list these
dependencies.
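Setting that caveat aside, the mechanical part of Joshua's two steps is "install, freeze, diff". The diff half could be sketched as below; package names and versions are purely illustrative.

```python
# Sketch of "step #2": diff an existing pin list against a fresh `pip freeze`
# taken from a clean virtualenv, and propose version bumps.
def parse_pins(freeze_text):
    pins = {}
    for line in freeze_text.splitlines():
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip().lower()] = version.strip()
    return pins

def propose_bumps(old_freeze, new_freeze):
    old, new = parse_pins(old_freeze), parse_pins(new_freeze)
    return {name: (old[name], version)
            for name, version in new.items()
            if name in old and old[name] != version}

old = "six==1.7.0\npbr==0.10.0\n"
new = "six==1.8.0\npbr==0.10.0\nBabel==1.3\n"
print(propose_bumps(old, new))  # {'six': ('1.7.0', '1.8.0')}
```

Each proposed bump would then become a review against the requirements repo, where the order-dependence problem above still has to be handled by testing the full resolved set, not the individual pins.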

If your project lists dependencies on A<=X and B, and B in turn requires
A>X, you'll get A>X.



Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Mathieu Gagné

Sean Dague, thanks for bringing up the subject.

This is highly relevant to my interests. =)

On 2014-11-17 7:10 PM, Robert Collins wrote:

Most production systems I know don't run with open ended dependencies.
One of our contributing issues IMO is that we have the requirements
duplicated everywhere - and then ignore them for many of our test runs
(we deliberately override the in-tree ones with global requirements).
Particularly, since the only reason unified requirements matter is for
distro packages, and they ignore our requirements files *anyway*, I'm
not sure our current aggregate system is needed in that light.

That said, making requirements be capped and auto adjust upwards would
be extremely useful IMO, but its a chunk of work;
  - we need the transitive dependencies listed, not just direct dependencies
  - we need a thing to find possible upgrades and propose bumps


I recently found this blog post which suggests using pip-review:
http://nvie.com/posts/pin-your-packages/#pip-review

Could it be run once in a while against global requirements and a change 
proposed to gerrit to review new updates?



  - we would need to very very actively propogate those out from global
requirements

For now I think making 'react to the situation faster and easier' is a
good thing to push on.

-Rob



--
Mathieu



Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Jeremy Stanley
On 2014-11-17 16:41:02 -0800 (-0800), Joshua Harlow wrote:
> Robert Collins wrote:
> [...]
> >That said, making requirements be capped and auto adjust upwards would
> >be extremely useful IMO, but its a chunk of work;
> >  - we need the transitive dependencies listed, not just direct dependencies
> 
> Wouldn't a pip install of the requirements.txt from the requirements repo
> itself get this? That would tell pip to download all the things and their
> transitive dependencies (aka step #1).
> 
> >  - we need a thing to find possible upgrades and propose bumps
> 
> This is an analysis of the $ pip freeze after installing into that
> virtualenv (aka step #2)?
[...]

Something to keep in mind here is that just asking pip to install a
list of 150 packages at particular versions doesn't actually get you
that. You can't ever really cap your transitive dependencies
effectively because they are transitive, so pip will ignore what
you've asked for if some other package you subsequently install
wants a different version of the same. For this reason, the result
is also highly dependent on the order in which you list these
dependencies.

If your project lists dependencies on A<X, you'll get A>X [...]


[openstack-dev] [Ironic] Proposing new meeting times

2014-11-17 Thread Devananda van der Veen
Hi all,

As discussed in Paris and at today's IRC meeting [1] we are going to be
alternating the time of the weekly IRC meetings to accommodate our
contributors in EMEA better. No time will be perfect for everyone, but as
it stands, we rarely (if ever) see our Indian, Chinese, and Japanese
contributors -- and it's quite hard for any of the AU / NZ folks to attend.

I'm proposing two sets of times below. Please respond with a "-1" vote to
an option if that option would cause you to miss ALL meetings, or a "+1"
vote if you can magically attend ALL the meetings. If you can attend,
without significant disruption, at least one of the time slots in a
proposal, please do not vote either for or against it. This way we can
identify a proposal which allows everyone to attend at a minimum 50% of the
meetings, and preferentially weight towards one that allows more
contributors to attend two meetings.

This link shows the local times in some major countries / timezones around
the world (and you can customize it to add your own).
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

For reference, the current meeting time is 1900 UTC.

Option #1: alternate between Monday 1900 UTC && Tuesday 0900 UTC.  I like
this because 1900 UTC spans all of US and western EU, while 0900 combines
EU and EMEA. Folks in western EU are "in the middle" and can attend all
meetings.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


Option #2: alternate between Monday 1700 UTC && Tuesday 0500 UTC. I like
this because it shifts the current slot two hours earlier, making it easier
for eastern EU to attend without excluding the western US, and while 0500
UTC is not so late that US west coast contributors can't attend (it's 9PM
for us), it is harder for western EU folks to attend. There's really no one
in the middle here, but there is at least a chance for US west coast and
EMEA to overlap, which we don't have at any other time.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


I'll collate all the responses to this thread during the week, ahead of
next week's regularly-scheduled meeting.

-Devananda

[1]
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html


Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Louis Taylor
On Tue, Nov 18, 2014 at 10:46:38AM +1030, Christopher Yeoh wrote:
> Maybe a MOTD at the top of http://review.openstack.org could help here?  Have
> a button that the QA/infra people can hit when everything is broken that puts
> up a message there asking people to stop rechecking/submitting patches.

How about elastic recheck showing a message? If a bug is identified as breaking
the world, it shouldn't give a helpful "feel free to leave a 'recheck' comment
to run the tests again" comment when tests fail. That just encourages people to
keep rechecking.
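The check could be as simple as suppressing the hint when a matched bug is flagged as gate-blocking. A sketch, with the bug field name invented (this is not elastic-recheck's real code):

```python
# Illustrative only: pick the comment text based on whether any matched
# bug is marked as blocking the gate.
def recheck_hint(matched_bugs):
    if any(bug.get("gate_blocking") for bug in matched_bugs):
        return ("This failure matches a known gate-blocking bug; "
                "please do not recheck until it is resolved.")
    return "Feel free to leave a 'recheck' comment to run the tests again."

print(recheck_hint([{"gate_blocking": True}]))
print(recheck_hint([]))
```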




Re: [openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan

2014-11-17 Thread Dugger, Donald D
Jay-

Good, detailed summary, it pretty much matches with what I heard.

One thing I want to add: I've set up a wiki page to track our progress (I 
find email/etherpads deficient for this task).  I've tried to include 
everything from the email threads and the summit etherpads at:

https://wiki.openstack.org/wiki/Gantt/kilo

If I've missed anything let me know and I'll update the wiki.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, November 17, 2014 8:59 AM
To: OpenStack Development Mailing List; Michael Still; sgor...@redhat.com; 
Dugger, Donald D
Subject: [nova] RT/Scheduler summit summary and Kilo development plan

Good morning Stackers,

At the summit in Paris, we put together a plan for work on the Nova resource 
tracker and scheduler in the Kilo timeframe. A large number of contributors 
across many companies are all working on this particular part of the Nova code 
base, so it's important that we keep coordinated and updated on the overall 
efforts. I'll work together with Don Dugger this cycle to make sure we make 
steady, measured progress. If you are involved in this effort, please do be 
sure to attend the weekly scheduler IRC meetings [1] (Tuesdays @ 1500UTC on 
#openstack-meeting).

== Decisions from Summit ==

The following decisions were made at the summit session [2]:

1) The patch series for virt CPU pinning [3] and huge page support [4] shall 
not be approved until nova/virt/hardware.py is modified to use nova.objects as 
its serialization/domain object model. Jay is responsible for the conversion 
patches, and this patch series should be fully proposed by end of this week.

2) We agreed on the concepts introduced by the resource-objects blueprint [5], 
with a caveat that child object versioning be discussed in greater depth with 
Jay, Paul, and Dan Smith.

3) We agreed on all concepts and implementation from the 2 isolate-scheduler-db 
blueprints: aggregates [6] and instance groups [7]

4) We agreed on implementation and need for separating compute node object from 
the service object [8]

5) We agreed on concept and implementation for converting the request spec from 
a dict to a versioned object [9] as well as converting
select_destinations() to use said object [10]

6) We agreed on the need for returning a proper object from the virt driver's 
get_available_resource() method [11] but AFAICR, we did not say that this 
object needed to use nova/objects because this is an interface internal to the 
virt layer and resource tracker, and the ComputeNode nova.object will handle 
the setting of resource-related fields properly.

7) We agreed the unit tests for the resource tracker were, well, crappy, and 
are a real source of pain in making changes to the resource tracker itself. So, 
we resolved to fix them up in early Kilo-1

8) We are not interested in adding any additional functionality to the 
scheduler outside already-agreed NUMA blueprint functionality in Kilo. 
The goal is to get the scheduler fully independent of the Nova database, and 
communicating with nova-conductor and nova-compute via fully versioned 
interfaces by the end of Kilo, so that a split of the scheduler can occur at 
the start of the L release cycle.

== Action Items ==

1) Jay to propose patches that objectify the domain objects in 
nova/virt/hardware.py by EOB November 21

2) Paul Murray, Jay, and Alexis Lee to work on refactoring of the unit tests 
around the resource tracker in early Kilo-1

3) Dan Smith, Paul Murray, and Jay to discuss the issues with child object 
versioning

4) Ed Leafe to work on separating the compute node from the service object in 
Kilo-1

5) Sylvain Bauza to work on the request spec and select_destinations() to use 
request spec blueprints to be completed for Kilo-2

6) Paul Murray, Sylvain Bauza to work on the isolate-scheduler-db aggregate and 
instance groups blueprints to be completed by Kilo-3

7) Jay to complete the resource-objects blueprint work by Kilo-2

8) Dan Berrange, Sahid, and Nikola Dipanov to work on completing the CPU 
pinning, huge page support, and get_available_resources() blueprints in
Kilo-1

== Open Items ==

1) We need to figure out who is working on the objectification of the PCI 
tracker stuff (Yunjong maybe or Robert Li?)

2) The child object version thing needs to be thoroughly vetted. 
Basically, the nova.objects.compute_node.ComputeNode object will have a series 
of sub objects for resources (NUMA, PCI, other stuff) and Paul Murray has some 
questions on how to handle the child object versioning properly.

3) Need to coordinate with Steve Gordon, Adrian Hoban, and Ian Wells  on NUMA 
hardware in an external testing lab that the NFV subteam is working on getting 
up and running [12]. We need functional tests (Tempest+Nova) written for all 
NUMA-related functionality in the RT and scheduler by end of Kilo-3, but have 
yet to

Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Joshua Harlow

Robert Collins wrote:

Most production systems I know don't run with open ended dependencies.
One of our contributing issues IMO is that we have the requirements
duplicated everywhere - and then ignore them for many of our test runs
(we deliberately override the in-tree ones with global requirements).
Particularly, since the only reason unified requirements matter is for
distro packages, and they ignore our requirements files *anyway*, I'm
not sure our current aggregate system is needed in that light.

That said, making requirements be capped and auto adjust upwards would
be extremely useful IMO, but it's a chunk of work;
  - we need the transitive dependencies listed, not just direct dependencies


Wouldn't a pip install of the requirements.txt from the requirements 
repo itself get this? That would tell pip to download all the things and 
their transitive dependencies (aka step #1).



  - we need a thing to find possible upgrades and propose bumps


This is an analysis of the $ pip freeze after installing into that 
virtualenv (aka step #2)?



  - we would need to very very actively propagate those out from global
requirements


Sounds like an enhanced updater.py that uses the output from step #2?
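A toy version of that enhanced updater could diff a pip freeze snapshot against the pinned list. The parsing and naming here are illustrative, not updater.py's actual logic:

```python
# Hypothetical sketch: parse 'name==version' pins, then propose bumps
# where a freshly frozen environment differs from the current pins.
def parse_pins(lines):
    """Parse 'name==version' lines into a dict, skipping comments."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins[name.lower()] = version
    return pins

def propose_bumps(current, frozen):
    """Return {name: (old, new)} where the freeze differs from the pin."""
    return {
        name: (current[name], version)
        for name, version in frozen.items()
        if name in current and current[name] != version
    }

current = parse_pins(["testtools==1.1.0", "six==1.7.0"])
frozen = parse_pins(["testtools==1.4.0", "six==1.7.0", "newdep==0.1"])
print(propose_bumps(current, frozen))  # {'testtools': ('1.1.0', '1.4.0')}
```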



For now I think making 'react to the situation faster and easier' is a
good thing to push on.


One question I have: not all things specify all their 
dependencies, since some of them are pluggable. For example, kombu can 
use couchdb, yet kombu 
doesn't list that dependency in its requirements (it does get listed in 
https://github.com/celery/kombu/blob/master/setup.py#L122 under 
'extras_require' though). I'm sure other pluggable libraries 
(sqlalchemy, taskflow, tooz...) are similar in this regard, so I wonder 
how those kinds of libraries would work with this kind of proposal.




-Rob



On 18 November 2014 12:02, Sean Dague  wrote:

As we're dealing with the fact that testtools 1.4.0 apparently broke
something with attribute additions to tests (needed by tempest for
filtering), it raises an interesting problem.

Our current policy on requirements is to leave them open ended, this
lets us take upstream fixes. It also breaks us a lot. But our max
version of dependencies happens with 0 code review or testing.

However, fixing these things takes a bunch of debug, code review, and
test time. Seen by the fact that the testtools 1.2.0 block didn't even
manage to fully merge this weekend.

This is an asymmetric break/fix path, which I think we need a better plan
for. If fixing is more expensive than breaking, then you'll tend to be
in a broken state quite a bit. We really actually want the other
asymmetry if we can get it.

There are a couple of things we could try here:

== Cap all requirements, require code reviews to bump maximums ==

Benefits, protected from upstream breaks.

Down sides, requires active energy to move forward. The SQLA 0.8
transition took forever.

== Provide Requirements core push authority ==

For blocks on bad versions, if we had a fast path to just merge known
breaks, we could right ourselves quicker. It would have reasonably
strict rules, like could only be used to block individual versions.
Probably that should also come with sending email to the dev list any
time such a thing happened.

Benefits, fast to fix

Down sides, bypasses our testing infrastructure. Though realistically
the break bypassed it as well.

...

There are probably other ways to make this more symmetric. I had a grand
vision one time of building a system that kind of automated the
requirements bump, but have other problems I think need to be addressed
in OpenStack.


The reason I think it's important to come up with a better way here is
that making our whole code gating system lock up for 12+ hrs because of
an external dependency that we are pretty sure is the crux of our break
becomes very discouraging for developers. They can't get their code
merged. They can't get accurate test results. It means that once we get
the fix done, everyone is rechecking their code, so now everyone is
waiting extra long for valid test results. People don't realize their
code can't pass and just keep pushing patches up consuming resources
which means that parts of the project that could pass tests, is backed
up behind 100% guaranteed failing parts. All in all, not a great system.

 -Sean

--
Sean Dague
http://dague.net









Re: [openstack-dev] [Neutron] Stale patches

2014-11-17 Thread Carl Baldwin
+1.  I always hesitate to abandon someone's patch because it is so
personal.  The auto-expire is impersonal and procedural.  I agree that
1 week is too soon.  Give it at least a month.

Abandoned patches that have some importance shouldn't ever really be
lost.  They should be linked to bug reports or blueprints to which
they're related.  So, why do we need to keep them around while there
is no activity on them?

Carl

On Mon, Nov 17, 2014 at 11:45 AM, Collins, Sean
 wrote:
> Perhaps we should re-introduce the auto-expiration of patches, albeit on
> a very leisurely timeframe. Before, it was like 1 week to expire a
> patch, which was a bit aggressive. Perhaps we could auto-expire patches
> that haven't been touched in 4 or 6 weeks, to expire patches that have
> truly been abandoned by authors?
>
> --
> Sean M. Collins



Re: [openstack-dev] Quota management and enforcement across projects

2014-11-17 Thread Kevin L. Mitchell
On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:
> I’ve spent a bit of time thinking about the resource ownership issue.
> The challenge there is we don’t currently have any libraries that
> define tables in the schema of an application. I think that’s a good
> pattern to maintain, since it avoids introducing a lot of tricky
> issues like how to manage migrations for the library, how to ensure
> they are run by the application, etc. The fact that this common quota
> thing needs to store some data in a schema that it controls says to me
> that it is really an app and not a library. Making the quota manager
> an app solves the API definition issue, too, since we can describe a
> generic way to configure quotas and other applications can then use
> that API to define specific rules using the quota manager’s API.
> 
> I don’t know if we need a new application or if it would make sense
> to, as with policy, add quota management features to keystone. A
> single well-defined app has some appeal, but there’s also a certain
> amount of extra ramp-up time needed to go that route that we wouldn’t
> need if we added the features directly to keystone.

I'll also point out that it was largely because of the storage needs
that I chose to propose Boson[1] as a separate app, rather than as a
library.  Further, the dimensions over which quota-covered resources
needed to be tracked seemed to me to be complicated enough that it would
be better to define a new app and make it support that one domain well,
which is why I didn't propose it as something to add to Keystone.
Consider: nova has quotas that are applied by user, other quotas that
are applied by tenant, and even some quotas on what could be considered
sub-resources—a limit on the number of security group rules per security
group, for instance.
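A toy model of those dimensions might look like this. Class and method names are invented for illustration; this is neither Boson's nor Nova's actual schema:

```python
# Quotas keyed by (resource, scope type): the same resource can be
# limited per user, per tenant, or per parent resource (e.g. rules
# per security group).
class QuotaRegistry:
    def __init__(self):
        self._limits = {}   # (resource, scope_type) -> limit
        self._usage = {}    # (resource, scope_type, scope_id) -> count

    def set_limit(self, resource, scope_type, limit):
        self._limits[(resource, scope_type)] = limit

    def reserve(self, resource, scope_type, scope_id, amount=1):
        key = (resource, scope_type, scope_id)
        used = self._usage.get(key, 0)
        limit = self._limits.get((resource, scope_type))
        if limit is not None and used + amount > limit:
            raise ValueError(
                "quota exceeded for %s per %s" % (resource, scope_type))
        self._usage[key] = used + amount

quotas = QuotaRegistry()
quotas.set_limit("security_group_rule", "security_group", 2)
quotas.reserve("security_group_rule", "security_group", "sg-1")
quotas.reserve("security_group_rule", "security_group", "sg-1")
quotas.reserve("security_group_rule", "security_group", "sg-2")
# a third rule in the same group "sg-1" would raise ValueError
```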

My current feeling is that, if we can figure out a way to make the quota
problem into an acceptable library, that will work; it would probably
have to maintain its own database separate from the client app and have
features for automatically managing the schema, since we couldn't
necessarily rely on the client app to invoke the proper juju there.  If,
on the other hand, that ends up failing, then the best route is probably
to begin by developing a separate app, like Boson, as a PoC; then, after
we have some idea of just how difficult it is to actually solve the
problem, we can evaluate whether it makes sense to actually fold it into
a service like Keystone, or whether it should stand on its own.

(Personally, I think Boson should be created and should stand on its
own, but I also envision using it for purposes outside of OpenStack…)

Just my $.02…

[1] https://wiki.openstack.org/wiki/Boson
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Christopher Yeoh
On Tue, Nov 18, 2014 at 9:32 AM, Sean Dague  wrote:

> waiting extra long for valid test results. People don't realize their
> code can't pass and just keep pushing patches up consuming resources
> which means that parts of the project that could pass tests, is backed
> up behind 100% guaranteed failing parts. All in all, not a great system.
>
>
Maybe a MOTD at the top of http://review.openstack.org could help here?
Have a button
that the QA/infra people can hit when everything is broken that puts up a
message
there asking people to stop rechecking/submitting patches.


Re: [openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Robert Collins
Most production systems I know don't run with open ended dependencies.
One of our contributing issues IMO is that we have the requirements
duplicated everywhere - and then ignore them for many of our test runs
(we deliberately override the in-tree ones with global requirements).
Particularly, since the only reason unified requirements matter is for
distro packages, and they ignore our requirements files *anyway*, I'm
not sure our current aggregate system is needed in that light.

That said, making requirements be capped and auto adjust upwards would
be extremely useful IMO, but it's a chunk of work;
 - we need the transitive dependencies listed, not just direct dependencies
 - we need a thing to find possible upgrades and propose bumps
 - we would need to very very actively propagate those out from global
requirements

For now I think making 'react to the situation faster and easier' is a
good thing to push on.

-Rob



On 18 November 2014 12:02, Sean Dague  wrote:
> As we're dealing with the fact that testtools 1.4.0 apparently broke
> something with attribute additions to tests (needed by tempest for
> filtering), it raises an interesting problem.
>
> Our current policy on requirements is to leave them open ended, this
> lets us take upstream fixes. It also breaks us a lot. But our max
> version of dependencies happens with 0 code review or testing.
>
> However, fixing these things takes a bunch of debug, code review, and
> test time. Seen by the fact that the testtools 1.2.0 block didn't even
> manage to fully merge this weekend.
>
> This is an asymmetric break/fix path, which I think we need a better plan
> for. If fixing is more expensive than breaking, then you'll tend to be
> in a broken state quite a bit. We really actually want the other
> asymmetry if we can get it.
>
> There are a couple of things we could try here:
>
> == Cap all requirements, require code reviews to bump maximums ==
>
> Benefits, protected from upstream breaks.
>
> Down sides, requires active energy to move forward. The SQLA 0.8
> transition took forever.
>
> == Provide Requirements core push authority ==
>
> For blocks on bad versions, if we had a fast path to just merge known
> breaks, we could right ourselves quicker. It would have reasonably
> strict rules, like could only be used to block individual versions.
> Probably that should also come with sending email to the dev list any
> time such a thing happened.
>
> Benefits, fast to fix
>
> Down sides, bypasses our testing infrastructure. Though realistically
> the break bypassed it as well.
>
> ...
>
> There are probably other ways to make this more symmetric. I had a grand
> vision one time of building a system that kind of automated the
> requirements bump, but have other problems I think need to be addressed
> in OpenStack.
>
>
> The reason I think it's important to come up with a better way here is
> that making our whole code gating system lock up for 12+ hrs because of
> an external dependency that we are pretty sure is the crux of our break
> becomes very discouraging for developers. They can't get their code
> merged. They can't get accurate test results. It means that once we get
> the fix done, everyone is rechecking their code, so now everyone is
> waiting extra long for valid test results. People don't realize their
> code can't pass and just keep pushing patches up consuming resources
> which means that parts of the project that could pass tests, is backed
> up behind 100% guaranteed failing parts. All in all, not a great system.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Nova] v2 or v3 for new api

2014-11-17 Thread Christopher Yeoh
On Tue, Nov 18, 2014 at 2:31 AM, Pasquale Porreca <
pasquale.porr...@dektech.com.au> wrote:

> Thank you very much Christopher
>
> On 11/17/14 12:15, Christopher Yeoh wrote:
>
>> Yes, sorry documentation has been on our todo list for too long. Could I
>> get you to submit a bug report about the lack of developer documentation
>> for api plugins? It might hurry us up :-)
>>
>
> I reported it as a bug and subscribed you to it:
> https://bugs.launchpad.net/nova/+bug/1393455
>
>
Thanks!


>
>> In the meantime, off the top of my head. you'll need to create or
>> modify the following files in a typical plugin:
>>
>> setup.cfg - add an entry in at least the nova.api.v3.extensions section
>>
>> etc/nova/policy.json - an entry for the permissions for you plugin,
>> perhaps one per api method for maximum flexibility. Also will need a
>> discoverable entry (lots of examples in this file)
>>
>> nova/tests/unit/fake_policy.json (similar to policy.json)
>>
>
> I wish I had asked about this before; I had already found these files, but I
> confess it took quite a bit of time to guess that I had to modify them (I
> haven't modified fake_policy yet, but my tests are still not
> completed).
> What about nova/nova.egg-info/entry_points.txt I mentioned earlier?
>
>
The entry points file is automatically updated based on setup.cfg


>
>> nova/api/openstack/compute/plugins/v3/ - please make the
>> alias name something os-scheduler-hints rather than OS-SCH-HNTS. No
>> skimping on vowels. Probably the easiest way at this stage without more
>> doco is look for for a plugin in that directory that does the sort of the
>> thing you want to do.
>>
>
> Following the path of other plugins, I created a module
> nova/api/openstack/compute/plugins/v3/node_uuid.py, while the class is
> NodeUuid(extensions.V3APIExtensionBase) the alias is os-node-uuid and the
> actual json parameter is node_uuid. I hope this is correct...
>
>
>> nova/tests/unit/nova/api/openstack/compute/contrib/test_your_plugin.py -
>> we have been combining the v2 and v2.1(v3) unittests to share as much as
>> possible, so please do the same here for new tests as the v3 directory will
>> be eventually removed. There's quite a few examples now in that directory
>> of sharing unittests between v2.1 and v2 but with a new extension the
>> customisation between the two should be pretty minimal (just a bit of
>> inheritance to call the right controller)
>>
>>
> Very good to know. I put my test in nova/tests/unit/api/openstack/plugins/v3,
> but I was getting confused by the fact that only a few tests were in this
> folder while the tests in nova/tests/unit/api/openstack/compute/contrib/
> covered both v2 and v2.1 cases.
> So I should move my test into the
> nova/tests/unit/api/openstack/compute/contrib/ folder, right?
>

Yes


>
>  nova/tests/unit/integrated/v3/test_your_plugin.py
>> nova/tests/unit/integrated/test_api_samples.py
>>
>> Sorry the api samples tests are not unified yet. So you'll need to create
>> two. All of the v2 api sample tests are in one directory, whilst the
>> v2.1 ones are separated into different files by plugin.
>>
>> There's some rather old documentation on how to generate the api samples
>> themselves (hint: directories aren't made automatically) here:
>>
>> https://blueprints.launchpad.net/nova/+spec/nova-api-samples
>>
>> Personally I wouldn't bother with any xml support if you do decide to
>> support v2 as its deprecated anyway.
>>
>
> After reading your answer I understood I have to work more on this part :)
>
>
>> Hope this helps. Feel free to add me as a reviewer for the api parts of
>> your changesets.
>>
>
> It helps a lot! I will add you for sure as soon as I will upload my code.
> For now the specification has still to be approved, so I think I have to
> wait before to upload it, is that correct?
>
> This is the blueprint link anyway:
> https://blueprints.launchpad.net/nova/+spec/use-uuid-v1
>
>
So it won't hurt to upload the code before the spec is approved if you're
looking for some early feedback, but I'd recommend setting it to Work in
Progress otherwise it's likely to get -2'd pending spec approval

Regards,

Chris
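For reference, the setup.cfg wiring Chris describes might look roughly like the stanza below. This is a hedged sketch: the entry-point group name comes from Chris's note above, while the module path and class name follow Pasquale's node_uuid example and may not match the final code.

```ini
# Hypothetical setup.cfg fragment for a v3 API plugin
[entry_points]
nova.api.v3.extensions =
    node_uuid = nova.api.openstack.compute.plugins.v3.node_uuid:NodeUuid
```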


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-11-17 Thread Sean Dague
On 11/17/2014 06:57 PM, Vishvananda Ishaya wrote:
> 
> On Nov 6, 2014, at 7:56 PM, Ian Wienand  wrote:
> 
>> On 10/29/2014 12:42 AM, Doug Hellmann wrote:
>>> Another way to do this, which has been used in some other projects,
>>> is to define one option for a list of “names” of things, and use
>>> those names to make groups with each field
>>
>> I've proposed that in [1].  I look forward to some -1's :)
>>
>>> OTOH, oslo.config is not the only way we have to support
>>> configuration. This looks like a good example of settings that are
>>> more complex than what oslo.config is meant to handle, and that
>>> might be better served in a separate file with the location of that
>>> file specified in an oslo.config option.
>>
>> My personal opinion is that yet-another-config-file in possibly
>> yet-another-format is just a few lines of code, but has a pretty high
>> cost for packagers, testers, admins, etc.  So I feel like that's
>> probably a last-resort.
> 
> In most discussions I’ve had with deployers, they prefer multiple files, as it
> is easier to create a new file via puppet or chef when a feature is turned
> on than to add a bunch of new sections in the middle of an existing file.

+1

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-17 Thread Richard Jones
On 17 November 2014 21:54, Radomir Dopieralski 
wrote:
>
> On 17/11/14 09:53, Martin Geisler wrote:
>
> [...]
>
> > As Richard said, npm and bower are not competitors. You use npm to
> > install bower, and you use bower to download Angular, jQuery, Bootstrap
> > and other static files. These are the static files that you will want to
> > include when you finally deploy the web app to your server.
> >
> > Before using Bower, people would simply download Angular from the
> > projects homepage and check it into version control. Bower is not doing
> > much, but using it avoids this bad practice.
> >
> > There is often a kind of "compilation step" between bower downloading a
> > dependency and the deployment on the webserver: minification and
> > compression of the JavaScript and CSS. Concatenating and minifying the
> > files serve to reduce the number of HTTP requests -- which can make an
> > app much faster.
> >
> > Finally, you use Grunt/Gulp to execute other tools during development.
> > These tools could be a local web server, it could be running the unit
> > tests. Grunt is only a convenience tool here -- think of it as a kind of
> > Makefile that tells you how to launch various tasks.
>
> Thank you for your explanations.
>
> The way I see it, we would need:
> - Grunt both in the development environment and packaged (to run tests,
> etc.),

I'm increasingly coming to think that we can avoid grunt.

- serve is handled by django runserver
- test running is handled by tox
- livereload could be handled by https://pypi.python.org/pypi/livereload
(it'd be really nice if someone could get support for this rolled in to
django-livereload...)
- watch is not handled by anything and it would be a shame to miss out on
automatic linting/testing, but I think we can live without it

So, the tools we need packaged for Linux are:

- karma
- jasmine (already in Fedora, I believe)
- selenium (I believe this is already done in Fedora and Debian)
- phantomjs (definitely already done!)


> - Bower in the development environment,
> - Bower configuration file in two copies, one for global-requirements,
> and one for the Horizon's local requirements. Plus a gate job that makes
> sure no new library or version gets included in the Horizon's before
> getting into the global-requirements,

Could you perhaps elaborate on this? How do you see the workflow working
here?

Given that Horizon already integrates with xstatic, it would be messy (and
potentially confusing) to include something so it *also* integrated with
bower. I was envisaging us creating a tool which generates xstatic packages
from bower packages. I'm not the first to think along these lines
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031042.html

I will be looking into creating such a tool today.
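Such a tool could start as small as mapping bower.json metadata onto what an xstatic package needs. A hedged sketch follows; the output fields are assumed from existing xstatic packages and are certainly not complete:

```python
import json

def bower_to_xstatic_meta(bower_json_text):
    """Map bower.json fields onto illustrative xstatic package metadata."""
    meta = json.loads(bower_json_text)
    main = meta.get("main", [])
    if isinstance(main, str):
        main = [main]  # bower allows a single string or a list here
    return {
        "package": "XStatic-" + meta["name"],
        "version": meta["version"],
        "main_files": main,
    }

example = '{"name": "angular", "version": "1.3.3", "main": "./angular.js"}'
print(bower_to_xstatic_meta(example))
```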


> - A tool, probably a script, that would help packaging the Bower
> packages into DEB/RPM packages. I suspect the Debian/Fedora packagers
> already have a semi-automatic solution for that.

Yep, that is indeed their problem that they'd have already solved for the
existing xstatic packages.


> - A script that would generate a file with all the paths to those
> packaged libraries, that would get included in Horizon's settings.py

If we stick with xstatic, we don't need this :)


 Richard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-11-17 Thread Vishvananda Ishaya

On Nov 6, 2014, at 7:56 PM, Ian Wienand  wrote:

> On 10/29/2014 12:42 AM, Doug Hellmann wrote:
>> Another way to do this, which has been used in some other projects,
>> is to define one option for a list of “names” of things, and use
>> those names to make groups with each field
> 
> I've proposed that in [1].  I look forward to some -1's :)
> 
>> OTOH, oslo.config is not the only way we have to support
>> configuration. This looks like a good example of settings that are
>> more complex than what oslo.config is meant to handle, and that
>> might be better served in a separate file with the location of that
>> file specified in an oslo.config option.
> 
> My personal opinion is that yet-another-config-file in possibly
> yet-another-format is just a few lines of code, but has a pretty high
> cost for packagers, testers, admins, etc.  So I feel like that's
> probably a last-resort.

In most discussions I’ve had with deployers, they prefer multiple files, as it
is easier to create a new file via puppet or chef when a feature is turned
on than to add a bunch of new sections in the middle of an existing file.

Vish
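For reference, the named-groups pattern Doug describes can be sketched with nothing more than stdlib configparser - one option lists the names, and each name gets its own group of fields. The section and option names below are invented for illustration; oslo.config's real mechanics differ:

```python
import configparser

CONF_TEXT = """
[pci]
# one option holds the list of names...
aliases = gpu, nic

# ...and each name gets its own group of fields
[pci_gpu]
vendor_id = 10de
product_id = 11b4

[pci_nic]
vendor_id = 8086
product_id = 10fb
"""

def load_named_groups(text, section="pci", list_opt="aliases"):
    cp = configparser.ConfigParser()
    cp.read_string(text)
    names = [n.strip() for n in cp.get(section, list_opt).split(",")]
    # resolve each name to its own [<section>_<name>] group
    return {n: dict(cp.items("%s_%s" % (section, n))) for n in names}

devices = load_named_groups(CONF_TEXT)
print(devices["gpu"]["vendor_id"])  # 10de
```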

> 
> -i
> 
> [1] https://review.openstack.org/133138
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] Quota management and enforcement across projects

2014-11-17 Thread Doug Hellmann

On Nov 17, 2014, at 5:49 PM, Salvatore Orlando  wrote:

> Hi all,
> 
> I am resuming this thread following the session we had at the summit in Paris 
> (etherpad here [1])
> 
> While there was some sort of consensus regarding what this library should do, 
> and how it should do it, the session ended with some open questions which we 
> need to address before finalising the specification.
> 
> There was a rather large consensus that the only viable architecture would be 
> one where the quota management library owns resource usage data. However, 
> this raises further concerns around:
> - ownership of resource usage records. As the quota library owns usage data, 
> it becomes the authoritative source of truth for it. This could be 
> problematic in some projects, particularly nova, where the resource tracker 
> currently owns usage data.
> - inserting quota-related db migrations in the consumer project timeline. 
> We'll need to solve this for both sqlalchemy.migrate and alembic (I seem to 
> recall Doug has a solution for this)
> 
> While the proposed interface for quota enforcing was generally well received, 
> another open question is whether this library should also be responsible for 
> quota management. Each project currently exposes its own extension for 
> setting quotas for tenants, and this is rather bad from a usability 
> perspective. Whether this should be the responsibility of the quota 
> library, handled by a separate "quota service", or handled through 
> keystone is debatable. Personally I think the library should not get in 
> the business of defining APIs. However, it can define the business logic 
> for managing tenant quotas which can then be leveraged by the projects 
> which will adopt it. Either way, I reckon that we might be better off by 
> limiting our scope to quota enforcement for the first iteration of this 
> library.
> 
> The last, and possibly most important question, is whether this library 
> belongs in the oslo-incubator or should live in its own repository. I 
> would personally prefer the oslo incubator because its process makes 
> adoption by OpenStack projects fairly simple. On the other hand, I also 
> understand the concerns around a library that comes with its own data 
> models and has the potential of becoming something bigger than a simple 
> library. I would like to defer an answer to this question to the oslo 
> core team, who are surely a lot more experienced than I am on the matter 
> of the incubator.

I’ve spent a bit of time thinking about the resource ownership issue. The 
challenge there is we don’t currently have any libraries that define tables in 
the schema of an application. I think that’s a good pattern to maintain, since 
it avoids introducing a lot of tricky issues like how to manage migrations for 
the library, how to ensure they are run by the application, etc. The fact that 
this common quota thing needs to store some data in a schema that it controls 
says to me that it is really an app and not a library. Making the quota manager 
an app solves the API definition issue, too, since we can describe a generic 
way to configure quotas and other applications can then use that API to define 
specific rules using the quota manager’s API.

I don’t know if we need a new application or if it would make sense to, as with 
policy, add quota management features to keystone. A single well-defined app 
has some appeal, but there’s also a certain amount of extra ramp-up time needed 
to go that route that we wouldn’t need if we added the features directly to 
keystone.

Doug
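As a toy model of the pattern under discussion - the quota code as the single authoritative owner of usage data, checking and recording in one step - something like the sketch below; the class and method names are illustrative, not the proposed API:

```python
class OverQuota(Exception):
    pass

class QuotaTracker:
    """Owns both limits and usage, so the check and the usage update
    cannot drift apart across services."""

    def __init__(self, limits):
        self.limits = dict(limits)            # resource -> allowed amount
        self.usage = {r: 0 for r in limits}   # resource -> consumed amount

    def reserve(self, resource, amount):
        # check-and-record in one step: usage stays authoritative here
        if self.usage[resource] + amount > self.limits[resource]:
            raise OverQuota("%s over quota" % resource)
        self.usage[resource] += amount

    def release(self, resource, amount):
        self.usage[resource] = max(0, self.usage[resource] - amount)

tracker = QuotaTracker({"volumes": 2})
tracker.reserve("volumes", 2)
try:
    tracker.reserve("volumes", 1)
except OverQuota:
    print("rejected")  # rejected
```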

> 
> To this aim, the proposed specification [2] has been temporarily put in WIP.
> 
> A few more notes:
> 
> 1) We agreed that we won't be constrained about saving any part of the 
> current module, since it's not used by any project. We will however reuse 
> whatever can be reused (mostly for not wasting time by writing the same thing 
> again)
> 
> 2) As concerns data backends to support, we agreed that we would start with 
> the simplest possible solution: a single SQLAlchemy-backed data source. 
> Potentially we won't even have a framework for plugging different kinds of 
> data sources. However, if contributors step in with support for multiple 
> backends, we will consider it. 
> 
> 3) The idea of having a quota engine leveraging "data sources" was quite 
> welcomed. Ideally it's not that different from the engine/driver model in the 
> current module, but there will be a cleaner separation of responsibilities.
> 
> Salvatore
> 
> [1] https://etherpad.openstack.org/p/kilo-oslo-common-quota-library
> [2] https://review.openstack.org/#/c/132127/
> 
> On 15 October 2014 20:54, Doug Hellmann  wrote:
> We can make it support whatever we want! :-)
> 
> The existing code almost certainly predates domain support in keystone, so I 
> think this is a case of needing to catch up not a conscious decision to leave 
> it out.
> 
> Doug
> 
> 
> On Oct 15, 2014, at 12:15 PM, Ajaya Agrawal  wrote:
> 
>> H

Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Amit Gandhi
Agreed, it helps with billing.  It also allows the customer to make a
choice based on the features offered in a flavor.  At the end of the day,
the API works the same way regardless of the flavor selected.  The flavor
selection merely gives the customer the experience they are looking for.

I see flavors working this way:
Nova - choosing a flavor that represents the type of compute power you
need.  Many combinations could exist.
Zaqar - choosing a flavor that represents the type of messaging you need
(performance, durability, ha, or any combinations of them)
Poppy - choosing a flavor that represents the type of cdn you need
(performance, regionality, cost, or any combination of them)

…and so on…


Within each ‘feature’ there could be multiple options.

Hence I struggle to see how ‘features’ could be exposed directly by the
APIs if that limits each feature to a single driver (i.e. there could
be multiple drivers that meet the performance feature, for example).

The flavor concept allows operators to package together some of these
features, and offer it to customers - who can then select that flavor via
the API.

In terms of inter-operable clouds, since the API functionality works the
same regardless of flavor, I don’t think it breaks interoperability.

Since operators can define their flavor names themselves and what drives
those flavors, a developer moving to a new operator’s cloud has to update
their code to refer to that operator’s flavors.  But inverting that argument
- will every cloud operator always offer the same flavors?  Will some
operators prefer to offer only certain flavors?  How do you define flavors
(or their features) in a generic way that allows for the gray areas in
between, and multiple permutations of the same benefit?

I am curious to explore the idea of not using flavors and replacing the
concept with something more generic (with concrete examples with some of
the existing API’s).  It sounds great to me, I just can’t (at this time)
think of how it would work.


Thanks.
Amit.







On 11/17/14, 5:38 PM, "Ed Leafe"  wrote:

>On Nov 17, 2014, at 3:46 PM, Amit Gandhi 
>wrote:
>> 
>> I can see where this makes a lot of sense in APIs such as Nova's where
>> flavors represent some combination of memory, disk, and cpu performance.
>
>For Nova, flavors emerged more as a billing convenience than anything
>technical regarding creating a VM with certain characteristics.
>
>> In the case of CDN, the flavor represents a list of CDN providers.
>> 
>> So... you could have a flavor representing a region of the world
>> (America¹s) consisting of one or more CDN providers with a strong
>>presence
>> in those regions.
>> Or a flavor could represent performance or features offered (number of
>> edge nodes, speed, etc).  And it is up to the operator to define those
>> flavors and assign one or more appropriate CDN providers to those
>>flavors.
>
>Again, this seems like the sort of packaging you would need to charge
>customers for different levels of service, and not something that you
>would need to make a working CDN API.
>
>
>-- Ed Leafe
>
>
>
>
>



[openstack-dev] [all] fix latency on requirements breakage

2014-11-17 Thread Sean Dague
As we're dealing with the fact that testtools 1.4.0 apparently broke
something with attribute additions to tests (needed by tempest for
filtering), it raises an interesting problem.

Our current policy on requirements is to leave them open ended; this
lets us take upstream fixes. It also breaks us a lot, because the max
version of our dependencies gets bumped with zero code review or testing.

However, fixing these things takes a bunch of debug, code review, and
test time - as seen by the fact that the testtools 1.2.0 block didn't even
manage to fully merge this weekend.

This is an asymmetric break/fix path, which I think we need a better plan
for. If fixing is more expensive than breaking, then you'll tend to be
in a broken state quite a bit. We really want the opposite
asymmetry if we can get it.

There are a couple of things we could try here:

== Cap all requirements, require code reviews to bump maximums ==

Benefits, protected from upstream breaks.

Down sides, requires active energy to move forward. The SQLA 0.8
transition took forever.
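Enforcing such a cap is mechanical: a gate check only has to flag requirement lines that carry no upper bound. An illustrative sketch, not the actual global-requirements tooling (which uses a real requirement parser):

```python
import re

def uncapped(requirement_lines):
    """Return requirement specs with no upper bound (<, <=, or ==) --
    the lines where a new upstream release lands completely untested."""
    bad = []
    for line in requirement_lines:
        spec = line.split("#")[0].strip()   # drop comments and whitespace
        if spec and not re.search(r"(<=?|==)\s*[\d.]+", spec):
            bad.append(spec)
    return bad

reqs = [
    "testtools>=0.9.36,!=1.2.0",   # open ended: the next release is a gamble
    "SQLAlchemy>=0.7.8,<=0.8.99",  # capped: raising the max takes a review
]
print(uncapped(reqs))  # ['testtools>=0.9.36,!=1.2.0']
```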

== Provide Requirements core push authority ==

For blocks on bad versions, if we had a fast path to just merge known
breaks, we could right ourselves more quickly. It would have reasonably
strict rules, like only being usable to block individual versions.
Probably that should also come with sending email to the dev list any
time such a thing happened.

Benefits, fast to fix

Down sides, bypasses our testing infrastructure. Though realistically
the break bypassed it as well.

...

There are probably other ways to make this more symmetric. I had a grand
vision one time of building a system that kind of automated the
requirements bump, but there are other problems I think need to be
addressed in OpenStack first.


The reason I think it's important to come up with a better way here is
that making our whole code-gating system lock up for 12+ hrs because of
an external dependency that we are pretty sure is the crux of our break
becomes very discouraging for developers. They can't get their code
merged. They can't get accurate test results. It means that once we get
the fix done, everyone is rechecking their code, so now everyone is
waiting extra long for valid test results. People don't realize their
code can't pass and just keep pushing patches up, consuming resources,
which means that parts of the project that could pass tests are backed
up behind 100% guaranteed failing parts. All in all, not a great system.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] Quota management and enforcement across projects

2014-11-17 Thread Salvatore Orlando
Hi all,

I am resuming this thread following the session we had at the summit in
Paris (etherpad here [1])

While there was some sort of consensus regarding what this library should
do, and how it should do it, the session ended with some open questions
which we need to address before finalising the specification.

There was a rather large consensus that the only viable architecture would
be one where the quota management library owns resource usage data.
However, this raises further concerns around:
- ownership of resource usage records. As the quota library owns usage
data, it becomes the authoritative source of truth for it. This could be
problematic in some projects, particularly nova, where the resource tracker
currently owns usage data.
- inserting quota-related db migrations in the consumer project timeline.
We'll need to solve this for both sqlalchemy.migrate and alembic (I seem to
recall Doug has a solution for this)

While the proposed interface for quota enforcing was generally well
received, another open question is whether this library should also be
responsible for quota management. Each project currently exposes its own
extension for setting quotas for tenants, and this is rather bad from a
usability perspective. Whether this should be the responsibility of the
quota library, handled by a separate "quota service", or handled through
keystone is debatable. Personally I think the library should not get in the
business of defining APIs. However, it can define the business logic for
managing tenant quotas which can then be leveraged by the projects which
will adopt it. Either way, I reckon that we might be better off by limiting
our scope to quota enforcement for the first iteration of this library.

The last, and possibly most important question, is whether this library
belongs in the oslo-incubator or should live in its own repository. I
would personally prefer the oslo incubator because its process makes
adoption by OpenStack projects fairly simple. On the other hand, I also
understand the concerns around a library that comes with its own data
models and has the potential of becoming something bigger than a simple
library. I would like to defer an answer to this question to the oslo core
team, who are surely a lot more experienced than I am on the matter of the
incubator.

To this aim, the proposed specification [2] has been temporarily put in WIP.

A few more notes:

1) We agreed that we won't be constrained about saving any part of the
current module, since it's not used by any project. We will however reuse
whatever can be reused (mostly for not wasting time by writing the same
thing again)

2) As concerns data backends to support, we agreed that we would start with
the simplest possible solution: a single SQLAlchemy-backed data source.
Potentially we won't even have a framework for plugging different kinds of
data sources. However, if contributors step in with support for multiple
backends, we will consider it.

3) The idea of having a quota engine leveraging "data sources" was quite
welcomed. Ideally it's not that different from the engine/driver model in
the current module, but there will be a cleaner separation of
responsibilities.
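To make point 3 concrete, the separation could look like the sketch below (illustrative only, not the proposed interface): the engine holds the enforcement logic and only talks to the data source through a narrow get/set surface.

```python
class InMemorySource:
    """Stand-in for the single SQLAlchemy-backed data source."""
    def __init__(self):
        self._usage = {}
    def get_usage(self, project, resource):
        return self._usage.get((project, resource), 0)
    def set_usage(self, project, resource, value):
        self._usage[(project, resource)] = value

class QuotaEngine:
    """Enforcement logic only; where usage lives is the source's concern."""
    def __init__(self, source, limits):
        self.source = source
        self.limits = limits
    def claim(self, project, resource, amount):
        used = self.source.get_usage(project, resource)
        if used + amount > self.limits[resource]:
            return False
        self.source.set_usage(project, resource, used + amount)
        return True

engine = QuotaEngine(InMemorySource(), {"volumes": 3})
print(engine.claim("tenant-a", "volumes", 2),
      engine.claim("tenant-a", "volumes", 2))  # True False
```

Swapping the data source for another backend then never touches the enforcement logic, which is the cleaner separation of responsibilities being described.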

Salvatore

[1] https://etherpad.openstack.org/p/kilo-oslo-common-quota-library
[2] https://review.openstack.org/#/c/132127/

On 15 October 2014 20:54, Doug Hellmann  wrote:

> We can make it support whatever we want! :-)
>
> The existing code almost certainly predates domain support in keystone, so
> I think this is a case of needing to catch up not a conscious decision to
> leave it out.
>
> Doug
>
>
> On Oct 15, 2014, at 12:15 PM, Ajaya Agrawal  wrote:
>
> Hi,
>
> Would the new library support quota on the domain level also? As it stands
> in oslo-incubator, it does quota enforcement on the project level only. The
> use case for this is quota enforcement across multiple projects. E.g., as
> a cloud provider, I would like my customer to create only X volumes
> across all of his projects.
>
> -Ajaya
>
> Cheers,
> Ajaya
>
> On Wed, Oct 15, 2014 at 7:04 PM, Doug Hellmann 
> wrote:
>
>> Sigh. Because I typed the wrong command. Thanks for pointing that out.
>>
>> I don’t see any instances of “quota” in openstack-common.conf files:
>>
>> $ grep quota */openstack-common.conf
>>
>> or in any projects under "openstack/“:
>>
>> $ ls */*/openstack/common/quota.py
>> ls: cannot access */*/openstack/common/quota.py: No such file or directory
>>
>> I don’t know where manila’s copy came from, but if it has been copied
>> from the incubator by hand and then changed we should fix that up.
>>
>> Doug
>>
>> On Oct 15, 2014, at 5:28 AM, Valeriy Ponomaryov 
>> wrote:
>>
>> But why is "policy" being discussed on a "quota" thread?
>>
>> On Wed, Oct 15, 2014 at 11:55 AM, Valeriy Ponomaryov <
>> vponomar...@mirantis.com> wrote:
>>
>>> Manila project does use "policy" common code from incubator.
>>>
>>> Our "small" wrapper for it:
>>> https://github.com/openstack/manila/blob/8203c51081680a7a

Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Ed Leafe
On Nov 17, 2014, at 3:46 PM, Amit Gandhi  wrote:
> 
> I can see where this makes a lot of sense in APIs such as Nova's where
> flavors represent some combination of memory, disk, and cpu performance.

For Nova, flavors emerged more as a billing convenience than anything technical 
regarding creating a VM with certain characteristics.

> In the case of CDN, the flavor represents a list of CDN providers.
> 
> So... you could have a flavor representing a region of the world
> (the Americas) consisting of one or more CDN providers with a strong presence
> in those regions.
> Or a flavor could represent performance or features offered (number of
> edge nodes, speed, etc).  And it is up to the operator to define those
> flavors and assign one or more appropriate CDN providers to those flavors.

Again, this seems like the sort of packaging you would need to charge customers 
for different levels of service, and not something that you would need to make 
a working CDN API.


-- Ed Leafe









Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Joshua Harlow

Ok, it depends on what features are needed in said driver.

Things like http://redis.io/commands/psetex (and others) that might be
used exist only in newer versions (> 2.2.0); if they aren't used then it
doesn't matter (basic support for all the common things exists in 2.2.0).
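That kind of version gating is cheap to do in a driver. A sketch (the helper is hypothetical; PSETEX and SETEX are real redis commands, with PSETEX only appearing in redis 2.6):

```python
def expire_set_command(server_version, key, value, ttl_ms):
    """Pick a set-with-expiry command the connected server understands:
    millisecond PSETEX on newer servers, second-granularity SETEX on
    the older redis shipped in ubuntu."""
    if server_version >= (2, 6, 0):
        return ("PSETEX", key, ttl_ms, value)
    # round the TTL up so the fallback never expires early
    return ("SETEX", key, (ttl_ms + 999) // 1000, value)

print(expire_set_command((2, 2, 0), "lock", "owner1", 1500))
# ('SETEX', 'lock', 2, 'owner1')
```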


Eric Windisch wrote:



On Mon, Nov 17, 2014 at 3:33 PM, Joshua Harlow  wrote:

It should already be running.

Tooz has been testing with it[1]. What's running in ubuntu is an
older redis though, so don't expect some of the new (> 2.2.0) features
to work until the ubuntu version is pushed out to all projects.

The redis (soft) requirement for the ZeroMQ driver shouldn't require a
newer version at all.

Also, since I have a platform, I'll note that the redis "matchmaker"
driver is just a reference implementation I tossed together in a day.
It's convenient because it eliminates the need for a static
configuration, making tempest tests much easier to run and generally
easier for anyone to deploy, but it's intended to be an example of
hooking into an inventory service, not necessarily the de facto solution.

--
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Multi-node testing (for redis and others...)

2014-11-17 Thread Joe Gordon
On Mon, Nov 17, 2014 at 1:06 PM, Joshua Harlow  wrote:

> Hi guys,
>
> A recent question came up about how do we test better with redis for tooz.
> I think this question is also relevant for ceilometer (and other users of
> redis) and in general applies to the whole of openstack as the larger
> system is what people run (I hope not everyone just runs devstack on a
> single-node and that's where they stop, ha).
>

https://review.openstack.org/#/c/106043/23


>
> The basic question is that redis (or zookeeper) have (and typically are)
> ways to be setup with multi-node instances (for example with redis +
> sentinel or zookeeper in multi-node configurations, or the newly released
> redis clustering...). It seems though that our testing infrastructure is
> setup to do the basics of tests (which isn't bad, but does have its
> limits), and this got me thinking on what would be needed to actually test
> these multi-node configurations of things like redis (configured in
> sentinel mode, or redis in clustering mode) in a realistic manner that
> tests 'common' failure patterns (net splits for example).
>
> I guess we can split it up into 3 or 4 (or more questions).
>
> 1. How do we get a multi-node configuration (of say redis) setup in the
> first place, configured so that all nodes are running and sentinel (for
> example) is running as expected?
> 2. How do we then inject failures into this setup to ensure that the
> applications and clients built on top of those systems reliably handle these
> type of injected failures (something like https://github.com/aphyr/jepsen
> or similar?).
> 3. How do we analyze those results (for when #2 doesn't turn out to work
> as expected) in a meaningful manner, so that we can then turn those
> experiments into more reliable software?
>
> Anyone else have any interesting ideas for this?
>
> -Josh
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
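On question 2 in the quoted list, a jepsen-style net split boils down to installing symmetric drop rules on each side of the partition. A sketch of the command construction only (actually running these needs root on every node, and the undo path matters just as much):

```python
def netsplit_rules(partition_a, partition_b):
    """For each host, the iptables commands to run *on that host* to
    drop traffic from the other side of the split. Undo by swapping
    -A (append) for -D (delete) in each rule."""
    rules = {}
    for side, other in ((partition_a, partition_b), (partition_b, partition_a)):
        for host in side:
            rules[host] = [
                ["iptables", "-A", "INPUT", "-s", peer, "-j", "DROP"]
                for peer in other
            ]
    return rules

rules = netsplit_rules(["10.0.0.1"], ["10.0.0.2", "10.0.0.3"])
print(len(rules["10.0.0.1"]))  # 2
```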


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Eric Windisch
On Mon, Nov 17, 2014 at 3:33 PM, Joshua Harlow  wrote:

> It should already be running.
>
> Tooz has been testing with it[1]. What's running in ubuntu is an older
> redis though, so don't expect some of the new (> 2.2.0) features to work
> until the ubuntu version is pushed out to all projects.


The redis (soft) requirement for the ZeroMQ driver shouldn't require a
newer version at all.

Also, since I have a platform, I'll note that the redis "matchmaker" driver
is just a reference implementation I tossed together in a day.  It's
convenient because it eliminates the need for a static configuration,
making tempest tests much easier to run and generally easier for anyone to
deploy, but it's intended to be an example of hooking into an inventory
service, not necessarily the de facto solution.


-- 
Regards,
Eric Windisch


Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Jay Pipes

On 11/17/2014 04:46 PM, Amit Gandhi wrote:

I can see where this makes a lot of sense in APIs such as Nova's where
flavors represent some combination of memory, disk, and cpu performance.

In the case of CDN, the flavor represents a list of CDN providers.


In that case, your API is leaking implementation details.


So... you could have a flavor representing a region of the world
(the Americas) consisting of one or more CDN providers with a strong presence
in those regions.
Or a flavor could represent performance or features offered (number of
edge nodes, speed, etc).  And it is up to the operator to define those
flavors and assign one or more appropriate CDN providers to those flavors.


Nothing above is different from Nova's flavors, other than the lack of 
leaking driver implementation details.



I'm not sure decomposing the flavor in this context makes as much sense.


If you can't decompose the flavor into a set of capabilities that are 
standardized across deployers of the Poppy API, then you have an API 
that cannot be interoperable across deployers.


Best,
-jay


Amit.


On 11/17/14, 3:18 PM, "Jay Pipes"  wrote:


I personally do not think that a "flavor" should be stored in the base
resource. The "flavor" should instead be decomposed into its composite
pieces (the specification for the CDN creation) and those pieces stored
in the database.

That way, you don't inherit the terrible problem that Nova has where
you're never really able to delete a flavor because some instance
somewhere may still be referring to it. If, in Nova, we decomposed the
flavor into all of its requisite pieces -- requested resource amounts,
requested "extra specs" capabilities, requested NUMA topology, etc -- we
wouldn't have this problem at all.

So, therefore my advice would be to not do any of the above and don't
have anything other than a "CDN type" or "CDN template" object that is
just deconstructed into the requested capabilities and resources of the
to-be-created CDN object and send along those things instead of
referring to some "flavor" thing.
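A toy illustration of that decomposition - the flavor's composite pieces are snapshotted onto the resource at create time, so deleting the flavor later leaves no dangling reference. Field names below are invented for illustration:

```python
def create_resource(flavor_catalog, flavor_name, store):
    """Copy the flavor's composite pieces onto the new record rather
    than storing a reference to the flavor itself."""
    spec = dict(flavor_catalog[flavor_name])   # snapshot, not a pointer
    record = {"requested": spec}
    store.append(record)
    return record

catalog = {"performance": {"edge_nodes": 50, "providers": ["cdn-x", "cdn-y"]}}
db = []
res = create_resource(catalog, "performance", db)
del catalog["performance"]   # the flavor can be deleted...
print(res["requested"]["edge_nodes"])  # ...the resource still knows: 50
```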





Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found

2014-11-17 Thread Patrick East
I am also still seeing issues with 'ERROR: openstack The plugin
token_endpoint could not be found', we can work around it by adding in a
list of all the clients to DEVSTACK_PROJECT_FROM_GIT and things go back to
working, for example:

http://ec2-54-69-107-106.us-west-2.compute.amazonaws.com/purestorageci/71/131871/10/check/dsvm-tempest-volume-PureISCSIDriver/8e36386/devstacklog.txt

But without it, attempting to use the released versions from pip:

http://ec2-54-69-107-106.us-west-2.compute.amazonaws.com/purestorageci/95/114395/11/check/dsvm-tempest-volume-PureISCSIDriver/725f580/devstacklog.txt

The strange part is that on my development setup, which is watching
sandbox, I was able to pip uninstall the python-*client packages and it
went back to running without any problems (
http://ec2-54-69-246-234.us-west-2.compute.amazonaws.com/purestorageci/MANUALLY_TRIGGERED_282/devstacklog.txt
), but my "live" setup did not after going through the same process.

For now things seem ok with getting the packages from git, but it does
concern me that our CI is not able to function the same way as the gate
Jenkins does. I would rather have our configuration match the gate's as
closely as possible.

I'm happy to provide more details about the setup or the failures. Any
suggestions on how to go about troubleshooting this would be much
appreciated.


-Patrick

On Mon, Nov 17, 2014 at 3:11 AM, Sean Dague  wrote:

> There needs to be a lot more context than that provided. As seen here -
> https://review.openstack.org/#/c/134379/ this seems to be working fine
> upstream.
>
> -Sean
>
> On 11/16/2014 09:16 PM, Wan, Sam wrote:
> > Hi Sean,
> >
> >   Seems once I unset '
> DEVSTACK_PROJECT_FROM_GIT=python-keystoneclient,python-openstackclient',
> devstack will fail with ' ERROR: openstack The plugin token_endpoint could
> not be found'.
> >   How should I overcome this issue then?
> >
> > Thanks and regards
> > Sam
> >
> > -Original Message-
> > From: Sean Dague [mailto:s...@dague.net]
> > Sent: Saturday, November 15, 2014 12:28 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [infra][devstack] CI failed The plugin
> token_endpoint could not be found
> >
> > On 11/14/2014 09:09 AM, Jeremy Stanley wrote:
> >> On 2014-11-14 00:34:14 -0500 (-0500), Wan, Sam wrote:
> >>> Seems we need to use python-keystoneclient and python-openstackclient
> >>> from git.openstack.org  because those on pip don’t work.
> >>
> >> That's a bug we're (collectively) trying to prevent in the future.
> >> Services, even under development, should not depend on features only
> >> available in unreleased versions of libraries.
> >>
> >>> But in latest update of stack.sh, it’s to use pip by default
> >> [...]
> >>
> >> And this is intentional, implemented specifically so that we can keep
> >> it from happening again.
> >>
> >
> > Patrick actually got to the bottom of a bug we had in devstack around
> this, we merged the fixes this morning.
> >
> > As Jeremy said, installing from pypi released versions is intentional.
> > If something wants to use features in a library, the library needs to
> cut a release.
> >
> >   -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ironic] the possible use of dhcp client id

2014-11-17 Thread Chuck Carlino

On 11/17/2014 12:43 PM, Devananda van der Veen wrote:

Thanks for the reply!

On Wed Nov 12 2014 at 2:41:27 PM Chuck Carlino  wrote:


Hi,

I'm working on the neutron side of a couple of ironic issues, and
I need some help.  Here are the issues.

 1. If a nic on an ironic server fails and is replaced by a nic
with a different mac address, neutron's dhcp service will not
serve it the same ip address.  This can be worked around by
deleting the neutron port and creating a new one, but it
leaves a window wherein the ip address could be lost to an
unrelated port creation happening at the same time.
 2. While performing large deployments, a random nic failure can
cause the failure of the entire deploy.  The ability to retry
a failed boot with a different nic has been requested.

It has been proposed that both issues could be at least partially
addressed by adding the ability to use dhcp client id to neutron. 
In this solution, the dhcp client is configured to use a dhcp
client id, and the server associates this client id (instead of
mac address) with the ip address.  Note that this idea just came
up today, so no code exists yet to try things out.

My questions:

For 1, the mac address of the neutron port will be left different
from the actual nic's mac address.  Is that a problem for ironic? 
It makes me feel uneasy, and might confuse users, but that's all I
got.

I think that's a show-stopper, actually. Not just because it would be 
very confusing for operators to see a fake MAC in Nova and the real 
MAC in Ironic. Neutron's lack of knowledge of the physical MAC(s) 
would seem to prevent it performing physical switch configuration (via 
ml2 plugins) for those who choose to use Ironic in a multi-tenant 
environment (eg, OnMetal).


Good to know.


In general, does using dhcp client id present any issues for
booting an ironic server?  I've done a bit of web searching and
from a protocol perspective it looks feasible, but I don't get a
sense of whether it's a good general solution.

A few things come to mind:

- How does the instance know what DHCP client ID to include in its 
request, before it has an IP by which to contact the metadata service? 
It sounds like this feature would only work if Ironic has a pre-boot 
way to pass in data (eg, configdrive). Not all our drivers support 
that today.


So using dhcp client id may not be a general solution.



- Is it possible / desirable to group multiple NICs under a single 
DHCP client ID? If so, then again, it would seem like neutron would 
need to know the physical MACs. (I recall us chatting about port 
bonding at some point, but I'm not sure if these were related 
conversations.)


I'd rather not confuse the issue with any details around how bonding or 
link aggregation works, so let's just say that in case #2 above, the 
guest may or may not be bonding the interfaces.  Since bonding occurs 
after boot, the bonding itself is not pertinent.  But yes, all NICs 
through which network boot can be attempted must present the same 
dhcp_client_id for this solution to work.  I don't see the connection to 
neutron needing correct mac addresses, though, since the client id 
effectively replaces the mac address for ip address lookup.




- What prevents some other server from spoofing the DHCP client ID in 
a multi-tenant environment? Again, folks using an ML2 plugin today are 
able to do MAC filtering on traffic at the switch. Removing knowledge 
of the node's physical MACs looks like it breaks this.


Googling around, it looks like spoofing can be addressed as in 
https://www.ietf.org/rfc/rfc3046.txt (needs a trusted component).  I 
agree that neutron needs the correct mac address here.


Thanks,
Chuck


If you have any off-the-top 'there's no chance that'll work' or
better things to try kind of feedback, it would be great to hear
it now since I'm about to start a POC to try it out.

Thanks,
Chuck



Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Doug Hellmann

On Nov 17, 2014, at 3:36 PM, Joshua Harlow  wrote:

> Doug Hellmann wrote:
>> On Nov 17, 2014, at 10:01 AM, Denis Makogon  wrote:
>> 
>>> On Mon, Nov 17, 2014 at 4:26 PM, James Page  wrote:
>>> 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
 
 Hi Denis
 
 On 17/11/14 07:43, Denis Makogon wrote:
> During the Paris Design Summit oslo.messaging session a good question
> was raised about maintaining the ZeroMQ driver upstream (see section
> “dropping ZeroMQ support in oslo.messaging” at [1]). As we all
> know, good thoughts always come after. I’d like to propose
> several improvements in the process of maintaining and developing the
> ZeroMQ driver upstream.
> 
> Contribution focus. As we can all see, there are enough patches
> that are trying to address certain problems related to ZeroMQ
> driver.
> 
> A few of them are trying to add functional tests, which is definitely
> good, but … there’s always ‘but’, they are not “gate”-able.
 I'm not sure I understand your statement about them not being
 "gate"-able - the functional/unit tests currently proposed for the zmq
 driver run fine as part of the standard test suite execution - maybe
 the confusion is over what 'functional' actually means, but in my
 opinion until we have some level of testing of this driver, we can't
 effectively make changes and fix bugs.
 
>>> I do agree that there's a confusion what "functional testing" means.
>>> Another thing: what is the best solution? Unit tests are welcome, but they
>>> still remain unit tests (they use mocks, etc.).
>>> I'd try to define what 'functional testing' means for me. Functional testing
>>> in oslo.messaging means that we've been using real service for messaging
>>> (in this case - a deployed 0mq). So, the simple definition: in terms of
>>> OpenStack integration, we should be able to run the full Tempest test suite for
>>> OpenStack services that are using oslo.messaging with enabled zmq driver.
>>> Am i right or not?
>> 
>> That’s a good goal, but that’s not what I had in mind for in-tree functional 
>> tests.
>> 
>> We should build a simple app using the Oslo libraries that we can place in 
>> the oslo.messaging source tree to use for exercising the communication 
>> patterns of the library with different drivers. Ideally that would be a 
>> single app (or set of apps) that could be used to test all drivers, with 
>> tests written against the app rather than the driver. Once we have the app 
>> and tests, we can define a new gate job to run those tests just for 
>> oslo.messaging, with a different configuration for each driver we support.
>> 
>> Doug
> 
> An interesting idea that might be useful, which taskflow has implemented/done...
> 
> The examples @ 
> https://github.com/openstack/taskflow/tree/master/taskflow/examples all get 
> tested during unit test runs to ensure they work as expected. This seems 
> close to your 'simple app' (where app is replaced with example), it might be 
> nice to have a similar approach for oslo.messaging that has 'examples' that 
> are these apps that get run to probe the functionality of oslo.messaging (as 
> well as useful for documentation to show people how to use it, which is the 
> other usage taskflow has for these examples)
> 
> The hacky example tester could likely be shared (or refactored, since it 
> probably needs it), 
> https://github.com/openstack/taskflow/blob/master/taskflow/tests/test_examples.py#L91

Sure, that would be a good way to do it, too.

Doug

> 
>> 
>>> 
> My proposal for this topic is to change contribution focus from
> oslo.messaging by itself to OpenStack/Infra project and DevStack
> (subsequently to devstack-gate too).
> 
> I guess there would be questions “why?”.  I think the answer is
> pretty obvious: we have a driver that is not being tested at all
> within DevStack and project integration.
 This was discussed in the oslo.messaging summit session, and
 re-enabling zeromq support in devstack is definitely on my todo list,
 but I don't think that should block landing of the currently proposed
 unit tests on oslo.messaging.
 
 For example, https://review.openstack.org/#/c/128233/ mentions adding
>>> functional and units. I'm ok with units, but what about functional tests?
>>> Which oslo.messaging gate job runs them?
>>> 
>>> 
> Also I’d say that such focus re-orientation would be very useful
> as a source of use cases and bugs eventually. Here’s a list of what
> we, as team, should do first:
> 
> 1. Ensure that DevStack can successfully:
>    1. Install ZeroMQ.
>    2. Configure each project to work with the zmq driver from
>       oslo.messaging.
> 2. Ensure that we can successfully run a simple test plan for each
>    project (like boot a VM, fill an object store container, spin up a
>    volume, etc.).
>>

Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Amit Gandhi
I can see where this makes a lot of sense in APIs such as Nova's, where
flavors represent some combination of memory, disk, and cpu performance.

In the case of CDN, the flavor represents a list of CDN providers.

So... you could have a flavor representing a region of the world
(the Americas) consisting of one or more CDN providers with a strong presence
in those regions.
Or a flavor could represent performance or features offered (number of
edge nodes, speed, etc).  And it is up to the operator to define those
flavors and assign one or more appropriate CDN providers to those flavors.

I'm not sure decomposing the flavor in this context makes as much sense.


Amit.


On 11/17/14, 3:18 PM, "Jay Pipes"  wrote:

>I personally do not think that a "flavor" should be stored in the base
>resource. The "flavor" should instead be decomposed into its composite
>pieces (the specification for the CDN creation) and those pieces stored
>in the database.
>
>That way, you don't inherit the terrible problem that Nova has where
>you're never really able to delete a flavor because some instance
>somewhere may still be referring to it. If, in Nova, we decomposed the
>flavor into all of its requisite pieces -- requested resource amounts,
>requested "extra specs" capabilities, requested NUMA topology, etc -- we
>wouldn't have this problem at all.
>
>So, therefore my advice would be to not do any of the above and don't
>have anything other than a "CDN type" or "CDN template" object that is
>just deconstructed into the requested capabilities and resources of the
>to-be-created CDN object and send along those things instead of
>referring to some "flavor" thing.
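As a hedged sketch of the decomposition Jay describes (class and field names here are invented, not any project's real models): copy the flavor's constituent pieces onto the resource at creation time, so nothing holds a long-lived reference to the flavor row.

```python
# Sketch of storing a resource's spec by value instead of by flavor id.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Flavor:
    """Operator-defined template; only consulted at creation time."""
    name: str
    providers: tuple                  # e.g. CDN providers backing it
    extra_specs: dict = field(default_factory=dict)

@dataclass
class Service:
    """The created resource, carrying the decomposed flavor data."""
    name: str
    providers: tuple = ()
    extra_specs: dict = field(default_factory=dict)

def create_service(name, flavor):
    # Copy the pieces; the Flavor can now be deleted or edited freely
    # without orphaning existing services.
    return Service(name=name,
                   providers=flavor.providers,
                   extra_specs=dict(flavor.extra_specs))

flavor = Flavor("north-america", providers=("provider-a", "provider-b"))
svc = create_service("my-site", flavor)
del flavor  # deleting the flavor does not break the service
assert svc.providers == ("provider-a", "provider-b")
```

The trade-off is duplication: the resource records what was requested, and the flavor is reduced to a creation-time convenience.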




Re: [openstack-dev] [Ironic] A mascot for Ironic

2014-11-17 Thread Devananda van der Veen
This whole thread is made of awesome. Adding a drum kit that's built out of
servers is just piling on more awesome.

Cheers for the great mascot drawings, Lucas!

Deva

On Mon Nov 17 2014 at 8:20:09 AM Jim Rollenhagen 
wrote:

> On Sun, Nov 16, 2014 at 01:14:13PM +, Lucas Alvares Gomes wrote:
> > Hi Ironickers,
> >
> > I was thinking this weekend: all the cool projects do have a mascot
> > so I thought that we could have one for Ironic too.
> >
> > The idea about what the mascot would be was easy because the RAX guys
> > put "bear metal" in their presentation[1] and that totally rocks! So I
> > drew a bear. It also needed an instrument, at first I thought about a
> > guitar, but drums is actually my favorite instrument so I drew a pair
> > of drumsticks instead.
> >
> > The drawing thing wasn't that hard; the problem was digitizing it.
> > So I scanned the thing and went to youtube to watch some tutorials
> > about GIMP and Inkscape to learn how to vectorize it. Magic, it
> > worked!
> >
> > Attached in the email there's the original drawing, the vectorized
> > version without colors and the final version of it (with colors).
> >
> > Of course, I know some people do have better skills than I do, so I
> > also attached the Inkscape file of the final version in case people
> > want to tweak it :)
> >
> > So, what do you guys think about making this little drummer bear the
> > mascot of the Ironic project?
> >
> > Ahh he also needs a name. So please send some suggestions and we can
> > vote on the best name for him.
> >
> > [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90
> >
> > Lucas
>
> +1000, this is awesome.
>
> A cool variation would be to put a drum set behind the bear, made
> out of servers. :)
>
> // jim
>
>


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-17 Thread Ian Wells
On 12 November 2014 11:11, Steve Gordon  wrote:

NUMA
> 
>
> We still need to identify some hardware to run third party CI for the
> NUMA-related work, and no doubt other things that will come up. It's
> expected that this will be an interim solution until OPNFV resources can be
> used (note cdub jokingly replied 1-2 years when asked for a "rough"
> estimate - I mention this because based on a later discussion some people
> took this as a serious estimate).
>
> Ian, did you have any luck kicking this off? Russell and I are also
> endeavouring to see what we can do on our side w.r.t. this short term
> approach - in particular if you find hardware we still need to find an
> owner to actually setup and manage it as discussed.
>


In theory to get started we need a physical multi-socket box and a virtual
> machine somewhere on the same network to handle job control etc. I believe
> the tests themselves can be run in VMs (just not those exposed by existing
> public clouds) assuming a recent Libvirt and an appropriately crafted
> Libvirt XML that ensures the VM gets a multi-socket topology etc. (we can
> assist with this).
>
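The "appropriately crafted Libvirt XML" mentioned above might look roughly like this sketch (all values illustrative; exact element support depends on the libvirt version in use):

```xml
<!-- Give the test VM a 2-socket topology, one NUMA cell per socket;
     memory is in KiB. All values here are illustrative. -->
<cpu mode='host-passthrough'>
  <topology sockets='2' cores='4' threads='1'/>
  <numa>
    <cell cpus='0-3' memory='4194304'/>
    <cell cpus='4-7' memory='4194304'/>
  </numa>
</cpu>
```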

With apologies for the late reply, but I was off last week.  And because I
was off last week I've not done anything about this so far.

I'm assuming that we'll just set up one physical multisocket box and ensure
that we can do a cleanup-deploy cycle so that we can run whatever
x86-dependent but otherwise relatively hardware agnostic tests we might
need.  Seems easier than worrying about what libvirt and KVM do and don't
support at a given moment in time.

I'll go nag our lab people for the machines.  I'm thinking for the
cleanup-deploy that I might just try booting the physical machine into a
RAM root disk and then running a devstack setup, as it's probably faster
than a clean install, but I'm open to options.  (There's quite a lot of
memory in the servers we have so this is likely to work fine.)

That aside, where are the tests going to live?
-- 
Ian.


[openstack-dev] Multi-node testing (for redis and others...)

2014-11-17 Thread Joshua Harlow

Hi guys,

A recent question came up about how we can test better with redis for 
tooz. I think this question is also relevant for ceilometer (and other 
users of redis) and in general applies to the whole of openstack as the 
larger system is what people run (I hope not everyone just runs devstack 
on a single-node and that's where they stop, ha).


The basic question is that redis (or zookeeper) can be (and typically is) 
set up with multi-node instances (for example with redis + 
sentinel or zookeeper in multi-node configurations, or the newly 
released redis clustering...). It seems though that our testing 
infrastructure is set up to do the basics of testing (which isn't bad, but 
does have its limits), and this got me thinking on what would be needed 
to actually test these multi-node configurations of things like redis 
(configured in sentinel mode, or redis in clustering mode) in a 
realistic manner that tests 'common' failure patterns (net splits for 
example).


I guess we can split it up into 3 or 4 (or more questions).

1. How do we get a multi-node configuration (of, say, redis) set up in the 
first place, configured so that all nodes are running and sentinel (for 
example) is running as expected?
2. How do we then inject failures into this setup to ensure that the 
applications and clients built on top of those systems reliably handle 
these type of injected failures (something like 
https://github.com/aphyr/jepsen or similar?).
3. How do we analyze those results (for when #2 doesn't turn out to work 
as expected) in a meaningful manner, so that we can then turn those 
experiments into more reliable software?
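For question 2, the injection side can be prototyped without real infrastructure. A toy sketch (everything here is invented; a real harness would drive iptables/tc against actual redis and sentinel processes, jepsen-style) of asserting that a client's retry logic survives a transient partition:

```python
# Toy failure-injection harness: a stand-in "store" that refuses writes
# until a simulated partition heals, plus the retry logic under test.
import time

class PartitionedStore:
    """In-memory stand-in for a redis node behind a healing netsplit."""
    def __init__(self, heal_after_failures):
        self.data = {}
        self.failures = 0
        self.heal_after = heal_after_failures

    def set(self, key, value):
        if self.failures < self.heal_after:
            self.failures += 1
            raise ConnectionError("simulated netsplit")
        self.data[key] = value

def set_with_retry(store, key, value, attempts=5, backoff=0.0):
    """Client-side retry logic whose behavior we want to verify."""
    for attempt in range(1, attempts + 1):
        try:
            store.set(key, value)
            return attempt
        except ConnectionError:
            time.sleep(backoff)  # real code would back off exponentially
    raise RuntimeError("store unreachable after %d attempts" % attempts)

store = PartitionedStore(heal_after_failures=2)
assert set_with_retry(store, "k", "v") == 3  # succeeds on the third try
assert store.data["k"] == "v"
```

The same test body could then be pointed at a real cluster where the "partition" is injected at the network layer, which is where question 3's analysis would start.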


Anyone else have any interesting ideas for this?

-Josh



Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-17 Thread Devananda van der Veen
Lucas, Thanks for bringing this up (both in Paris and here). I'm strongly
in favor of these changes as well. We discussed this during the meeting
today [1], to which I offer the following summary:

- move the status reporting to the etherpad [2]
-- take one (1) minute at the start of the meeting for folks to look at the
'pad and raise any concerns. Discuss concerns but cap at ten (10) minutes
-- at the end of the meeting, an email will be sent out containing the
status report from the 'pad, plus discussion notes (if needed)
- meeting agenda to be frozen two (2) days before the meeting. This also
gives time for others to review/research the topic ahead of time. Anyone
who has something on the planned agenda should prepare some notes to
introduce the topic (or type fast enough that we don't notice that you
didn't prepare) so we aren't all waiting on IRC.

I'll update the meeting wiki accordingly.

-Deva

[1]
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html
[2] https://etherpad.openstack.org/IronicWhiteBoard


On Thu Nov 13 2014 at 8:58:18 AM Chris K  wrote:

> +1
> I think the best use of our time is to discuss new features and functions
> that may have an API or functional impact for ironic or projects that depend
> on ironic.
>
> Chris Krelle
>
> On Thu, Nov 13, 2014 at 8:22 AM, Ghe Rivero  wrote:
>
>> I agree that a lot of time is lost on the announcements and status
>> reports, but mostly because IRC is a low-bandwidth communication channel
>> (like waiting several minutes for a 3-line announcement to be written).
>>
>> I propose that any announcement and project status be written in
>> advance on an etherpad, and that during the IRC meeting we just have a
>> slot for people to discuss anything that needs further explanation, only
>> mentioning the topic but not the content.
>>
>> Ghe Rivero
>> On Nov 13, 2014 5:08 PM, "Peeyush Gupta" 
>> wrote:
>>
>>> +1
>>>
>>> I agree with Lucas. Sounds like a good idea. I guess if we could spare
>>> more time for discussing new features and requirements rather than
>>> asking for status, that would be helpful for everyone.
>>>
>>> On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
>>> > This was discussed in the Contributor Meetup on Friday at the Summit
>>> > but I think it's important to share on the mail list too so we can get
>>> > more opnions/suggestions/comments about it.
>>> >
>>> > In the Ironic weekly meeting we dedicate a good time of the meeting to
>>> > do some announcements, reporting bug status, CI status, oslo status,
>>> > specific drivers status, etc... It's all good information, but I
>>> > believe that the mail list would be a better place to report it and
>>> > then we can free some time from our meeting to actually discuss
>>> > things.
>>> >
>>> > Are you guys in favor of it?
>>> >
>>> > If so I'd like to propose a new format based on the discussions we had
>>> > in Paris. For the people doing the status report on the meeting, they
>>> > would start adding the status to an etherpad and then we would have a
>>> > responsible person to get this information and send it to the mail
>>> > list once a week.
>>> >
>>> > For the meeting itself we have a wiki page with an agenda[1] which
>>> > everyone can edit to put the topic they want to discuss in the meeting
>>> > there, I think that's fine and works. The only change about it would
>>> > be that we may want to freeze the agenda 2 days before the meeting so
>>> > people can take a look at the topics that will be discussed and
>>> > prepare for it; With that we can move forward quicker with the
>>> > discussions because people will be familiar with the topics already.
>>> >
>>> > Let me know what you guys think.
>>> >
>>> > [1] https://wiki.openstack.org/wiki/Meetings/Ironic
>>> >
>>> > Lucas
>>> >
>>>
>>> --
>>> Peeyush Gupta
>>> gpeey...@linux.vnet.ibm.com
>>>
>>>
>>>


Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Ed Leafe
On Nov 17, 2014, at 2:18 PM, Jay Pipes  wrote:
> 
> Please, can we as a community kill the term "flavor" in a fire?

+1

One of the first (but certainly not the last) battles I lost in the early days 
of Nova.


-- Ed Leafe









Re: [openstack-dev] [ironic] the possible use of dhcp client id

2014-11-17 Thread Devananda van der Veen
On Wed Nov 12 2014 at 2:41:27 PM Chuck Carlino 
wrote:

>  Hi,
>
> I'm working on the neutron side of a couple of ironic issues, and I need
> some help.  Here are the issues.
>
>1. If a nic on an ironic server fails and is replaced by a nic with a
>different mac address, neutron's dhcp service will not serve it the same ip
>address.  This can be worked around by deleting the neutron port and
>creating a new one, but it leaves a window wherein the ip address could be
>lost to an unrelated port creation happening at the same time.
>    2. While performing large deployments, a random nic failure can cause
>the failure of the entire deploy.  The ability to retry a failed boot with
>a different nic has been requested.
>
> It has been proposed that both issues could be at least partially
> addressed by adding the ability to use dhcp client id to neutron.  In this
> solution, the dhcp client is configured to use a dhcp client id, and the
> server associates this client id (instead of mac address) with the ip
> address.  Note that this idea just came up today, so no code exists yet to
> try things out.
>
> My questions:
>
> For 1, the mac address of the neutron port will be left different from the
> actual nic's mac address.  Is that a problem for ironic?  It makes me feel
> uneasy, and might confuse users, but that's all I got.
>
I think that's a show-stopper, actually. Not just because it would be very
confusing for operators to see a fake MAC in Nova and the real MAC in
Ironic. Neutron's lack of knowledge of the physical MAC(s) would seem to
prevent it performing physical switch configuration (via ml2 plugins) for
those who choose to use Ironic in a multi-tenant environment (eg, OnMetal).

>  In general, does using dhcp client id present any issues for booting an
> ironic server?  I've done a bit of web searching and from a protocol
> perspective it looks feasible, but I don't get a sense of whether it's a
> good general solution.
>
A few things come to mind:

- How does the instance know what DHCP client ID to include in its request,
before it has an IP by which to contact the metadata service? It sounds
like this feature would only work if Ironic has a pre-boot way to pass in
data (eg, configdrive). Not all our drivers support that today.

- Is it possible / desirable to group multiple NICs under a single DHCP
client ID? If so, then again, it would seem like neutron would need to know
the physical MACs. (I recall us chatting about port bonding at some point,
but I'm not sure if these were related conversations.)

- What prevents some other server from spoofing the DHCP client ID in a
multi-tenant environment? Again, folks using an ML2 plugin today are able
to do MAC filtering on traffic at the switch. Removing knowledge of the
node's physical MACs looks like it breaks this.


>  If you have any off-the-top 'there's no chance that'll work' or better
> things to try kind of feedback, it would be great to hear it now since I'm
> about to start a POC to try it out.
>
> Thanks,
> Chuck


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Joshua Harlow

Doug Hellmann wrote:

On Nov 17, 2014, at 10:01 AM, Denis Makogon  wrote:


On Mon, Nov 17, 2014 at 4:26 PM, James Page  wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi Denis

On 17/11/14 07:43, Denis Makogon wrote:

During the Paris Design Summit oslo.messaging session a good question
was raised about maintaining the ZeroMQ driver upstream (see section
“dropping ZeroMQ support in oslo.messaging” at [1]). As we all
know, good thoughts always come after. I’d like to propose
several improvements in the process of maintaining and developing the
ZeroMQ driver upstream.

Contribution focus. As we can all see, there are enough patches
that are trying to address certain problems related to ZeroMQ
driver.

A few of them are trying to add functional tests, which is definitely
good, but … there’s always ‘but’, they are not “gate”-able.

I'm not sure I understand your statement about them not being
"gate"-able - the functional/unit tests currently proposed for the zmq
driver run fine as part of the standard test suite execution - maybe
the confusion is over what 'functional' actually means, but in my
opinion until we have some level of testing of this driver, we can't
effectively make changes and fix bugs.


I do agree that there's a confusion what "functional testing" means.
Another thing: what is the best solution? Unit tests are welcome, but they
still remain unit tests (they use mocks, etc.).
I'd try to define what 'functional testing' means for me. Functional testing
in oslo.messaging means that we've been using real service for messaging
(in this case - a deployed 0mq). So, the simple definition: in terms of
OpenStack integration, we should be able to run the full Tempest test suite for
OpenStack services that are using oslo.messaging with enabled zmq driver.
Am i right or not?


That’s a good goal, but that’s not what I had in mind for in-tree functional 
tests.

We should build a simple app using the Oslo libraries that we can place in the 
oslo.messaging source tree to use for exercising the communication patterns of 
the library with different drivers. Ideally that would be a single app (or set 
of apps) that could be used to test all drivers, with tests written against the 
app rather than the driver. Once we have the app and tests, we can define a new 
gate job to run those tests just for oslo.messaging, with a different 
configuration for each driver we support.

Doug
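As a rough illustration of the "tests written against the app rather than the driver" idea, a toy sketch (all names invented here; a real harness would plug actual oslo.messaging transports in behind the same interface):

```python
# One test body ("exercise") runs unchanged against any driver object
# implementing send(); swapping drivers becomes a configuration change.

class InMemoryDriver:
    """Stand-in transport; a real one would wrap rabbit/zmq/etc."""
    def send(self, topic, payload):
        # Deliver locally and return a reply, as an RPC call would.
        return {"topic": topic, "echo": payload}

class PingApp:
    """The 'simple app' exercising a communication pattern."""
    def __init__(self, driver):
        self.driver = driver

    def ping(self, payload):
        return self.driver.send("ping", payload)

def exercise(driver):
    """Driver-agnostic test body, written against the app."""
    app = PingApp(driver)
    reply = app.ping("hello")
    assert reply["echo"] == "hello"
    return reply

assert exercise(InMemoryDriver()) == {"topic": "ping", "echo": "hello"}
```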


An interesting idea that might be useful, which taskflow has implemented/done...


The examples @ 
https://github.com/openstack/taskflow/tree/master/taskflow/examples all 
get tested during unit test runs to ensure they work as expected. This 
seems close to your 'simple app' (where app is replaced with example), 
it might be nice to have a similar approach for oslo.messaging that has 
'examples' that are these apps that get run to probe the functionality 
of oslo.messaging (as well as useful for documentation to show people 
how to use it, which is the other usage taskflow has for these examples)


The hacky example tester could likely be shared (or refactored, since it 
probably needs it), 
https://github.com/openstack/taskflow/blob/master/taskflow/tests/test_examples.py#L91
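That example-tester pattern boils down to something like this hedged sketch (simplified; taskflow's real test_examples.py discovers example files on disk and compares against stored expected output):

```python
# Run an "example" as a subprocess and compare its stdout against the
# expected output; each example thereby doubles as documentation and
# as a functional test.
import subprocess
import sys

def run_example(code):
    """Execute example code in a fresh interpreter, return its stdout."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, check=True)
    return result.stdout

def check_example(code, expected):
    actual = run_example(code)
    assert actual == expected, "output drifted: %r != %r" % (actual, expected)

check_example("print('hello oslo')", "hello oslo\n")
```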







My proposal for this topic is to change contribution focus from
oslo.messaging by itself to OpenStack/Infra project and DevStack
(subsequently to devstack-gate too).

I guess there would be questions “why?”.  I think the answer is
pretty obvious: we have a driver that is not being tested at all
within DevStack and project integration.

This was discussed in the oslo.messaging summit session, and
re-enabling zeromq support in devstack is definitely on my todo list,
but I don't think that should block landing of the currently proposed
unit tests on oslo.messaging.

For example, https://review.openstack.org/#/c/128233/ mentions adding

functional and units. I'm ok with units, but what about functional tests?
Which oslo.messaging gate job runs them?



Also I’d say that such focus re-orientation would be very useful
as a source of use cases and bugs eventually. Here’s a list of what
we, as team, should do first:

1. Ensure that DevStack can successfully:
   1. Install ZeroMQ.
   2. Configure each project to work with the zmq driver from
      oslo.messaging.
2. Ensure that we can successfully run a simple test plan for each
   project (like boot a VM, fill an object store container, spin up a
   volume, etc.).

A better objective would be to be able to run a full tempest run as
conducted with the RabbitMQ driver, IMHO.



I do agree with this too. But we should define a step-by-step plan for this
type of testing. Since we want to see quick gate feedback, adding the full
test suite would be overhead, at least for now.



ZeroMQ driver maintainers community organization. During the design
session the question was raised about who uses the zmq driver in
production.

I’ve seen folks from Canonical and few other companies. So, here’s
my proposals around improving process of maintaining of given
driver:

1.

Wit

Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Joshua Harlow

It should already be running.

Tooz has been testing with it[1]. What's running in Ubuntu is an older 
redis, though, so don't expect some of the new (> 2.2.0) features to work 
until the Ubuntu version is pushed out to all projects.


https://github.com/stackforge/tooz/blob/master/tooz/drivers/redis.py#L132

Doug Hellmann wrote:

On Nov 17, 2014, at 12:48 PM, Mehdi Abaakouk  wrote:


Signed PGP part
Le 2014-11-17 15:26, James Page a écrit :

This was discussed in the oslo.messaging summit session, and
re-enabling zeromq support in devstack is definately on my todo list,
but I don't think the should block landing of the currently proposed
unit tests on oslo.messaging.

I would like to see these tests landed too, even if we need to install redis
or whatever and start them manually. This will help a lot to review zmq
stuff and ensure fixed things are not broken again.


What’s blocking us from setting that up? Is redis available in the CI 
environment?

Doug


---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht





Re: [openstack-dev] [neutron][lbaas] meeting day/time change

2014-11-17 Thread Stephen Balukoff
Awesome!

On Mon, Nov 10, 2014 at 9:10 AM, Susanne Balle 
wrote:

> Works for me. Susanne
>
> On Mon, Nov 10, 2014 at 10:57 AM, Brandon Logan <
> brandon.lo...@rackspace.com> wrote:
>
>> https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting
>>
>> That is updated for lbaas and advanced services with the new times.
>>
>> Thanks,
>> Brandon
>>
>> On Mon, 2014-11-10 at 11:07 +, Doug Wiegley wrote:
>> > #openstack-meeting-4
>> >
>> >
>> > > On Nov 10, 2014, at 10:33 AM, Evgeny Fedoruk 
>> wrote:
>> > >
>> > > Thanks,
>> > > Evg
>> > >
>> > > -Original Message-
>> > > From: Doug Wiegley [mailto:do...@a10networks.com]
>> > > Sent: Friday, November 07, 2014 9:04 PM
>> > > To: OpenStack Development Mailing List
>> > > Subject: [openstack-dev] [neutron][lbaas] meeting day/time change
>> > >
>> > > Hi all,
>> > >
>> > > Neutron LBaaS meetings are now going to be Tuesdays at 16:00 UTC.
>> > >
>> > > Safe travels.
>> > >
>> > > Thanks,
>> > > Doug
>> > >
>> > >


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [stable] [glance] glance_store scheduled release 0.1.10

2014-11-17 Thread Nikhil Komawar
This is done.

Thanks,
-Nikhil

From: Nikhil Komawar
Sent: Monday, November 17, 2014 1:23 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [stable] [glance] glance_store scheduled release 0.1.10

Hi all,

Following the corrections mentioned last week to bring Glance and the 
glance_store library to a stable state, we've got a few changes [0, 1, 2] 
merged. The library is set for the next release in a couple of hours or so.

Just wanted to give a heads-up and see if there were any concerns.

[0] https://review.openstack.org/#/c/131528/
[1] https://review.openstack.org/#/c/131838/
[2] https://review.openstack.org/#/c/130200/

Thanks,
-Nikhil


Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Jay Pipes

On 11/17/2014 02:47 PM, Shaunak Kashyap wrote:

Hi,

I’ve been working with the Poppy project[1] while they are designing their 
APIs. Poppy has a concept of a flavor. A single flavor resource has the URI, 
{base}/flavors/{flavor_id}.


Please, can we as a community kill the term "flavor" in a fire?

It has a different spelling in non-American English, it doesn't convey 
appropriate semantic details about the resource itself (i.e. non-English 
speakers think "what is tasty about this particular resource?"), and 
doesn't seem to have any benefit over using a term like "resource type", 
or "CDN service type" or "CDN template" or "CDN spec" that actually 
*does* convey some meaning.


If we're going to use a term that has no meaning with relation to the 
thing being described, I'd rather get creative and call it a 
"thingamabob" or a "whatsit" or a "dingoateyourbaby".



Poppy APIs refer to a flavor in their request and/or response representations. 
Some representations do this by using the flavor ID (e.g. 12345) while others 
use the flavor resource URI (e.g. {base}/flavors/12345).


And some APIs use both -- see, e.g. image_ref and image_id used all over 
the place in nova/image/glance.py, much to my chagrin.



In this context, I created a bug[2] to settle on one way of referring to a 
flavor across all API representations in Poppy. So the question to the API 
working group is this:

How should flavors be referred to in Poppy API representations? Some options to 
consider:

a) Using a “flavor_id” (or similar) property whose value is the flavor ID (e.g. 
12345),
b) Using a “flavor_ref” (or similar) property whose value is the flavor 
resource URI (e.g. {base}/flavors/12345),
c) Using a “links” property whose value is an array of links, one of which has 
an object like { “rel”: “flavor”, “href”: “{base}/flavors/12345” },


No, this isn't really what links (or _links in JSON+HAL) is intended 
for, IMO.



d) Similar to c) but using HAL[3] instead,
e) A combination of a) and c),
f) A combination of a) and d),
g) Something else?


I personally do not think that a "flavor" should be stored in the base 
resource. The "flavor" should instead be decomposed into its composite 
pieces (the specification for the CDN creation) and those pieces stored 
in the database.


That way, you don't inherit the terrible problem that Nova has where 
you're never really able to delete a flavor because some instance 
somewhere may still be referring to it. If, in Nova, we decomposed the 
flavor into all of its requisite pieces -- requested resource amounts, 
requested "extra specs" capabilities, requested NUMA topology, etc -- we 
wouldn't have this problem at all.


So, therefore my advice would be to not do any of the above and don't 
have anything other than a "CDN type" or "CDN template" object that is 
just deconstructed into the requested capabilities and resources of the 
to-be-created CDN object and send along those things instead of 
referring to some "flavor" thing.


Best,
-jay
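Jay's decomposition idea can be sketched in a few lines. This is a hedged illustration only: the field names (provider, cache_ttl, edge_locations) are invented for the example and are not Poppy's actual schema.

```python
def decompose_flavor(flavor):
    """Copy the flavor's constituent spec into the new resource at
    creation time, instead of storing a reference to the flavor."""
    return {
        "provider": flavor["provider"],
        "cache_ttl": flavor["cache_ttl"],
        "edge_locations": list(flavor["edge_locations"]),
    }

# A hypothetical "gold" flavor (made-up values).
gold = {"provider": "some-cdn", "cache_ttl": 3600,
        "edge_locations": ["us-east", "eu-west"]}

# The created service carries its own spec ...
service = {"name": "my-service"}
service.update(decompose_flavor(gold))

# ... so the flavor can be deleted without orphaning the service.
del gold
assert service["cache_ttl"] == 3600
```

Nothing in `service` points back at the deleted flavor, which is exactly the property the Nova-style flavor reference lacks.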



[openstack-dev] [NFV][Telco] Meeting time and agenda (incl. proposal for future alternate time)

2014-11-17 Thread Steve Gordon
Hi all,

The OpenStack Telco Working Group (formerly NFV subteam) will be meeting for 
the first time post-summit this Wednesday @ 1400 UTC [1] in 
#openstack-meeting-alt on Freenode [2]. I have started to draft the agenda here:

https://etherpad.openstack.org/p/nfv-meeting-agenda

Please add additional items that you would like to discuss during the meeting.  
If you received this email directly via BCC then you put your name down on the 
sign up sheet passed around at the recent telco working group ops summit 
session [3]. Please sign up to the openstack-operators list [4] and filter 
emails for the [Telco] tag in the subject to receive further updates from this 
group.

Please also note that the group Wiki page has been moved to reflect the 
naming and scope discussed at the summit, with further edits still to be done:

https://wiki.openstack.org/wiki/TelcoWorkingGroup

Finally, I would like to propose that we replace the Thursday 1600 UTC 
alternate time with Wednesday 2200 UTC [5] in future, to provide a more 
suitable option for those attending from other time zones. Please let me know 
of any objections; we will also discuss this in the meeting.

Thanks,

Steve

[1] 
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Telco+Working+Group&iso=20141119T14&ah=1
[2] https://wiki.openstack.org/wiki/IRC
[3] https://etherpad.openstack.org/p/kilo-summit-ops-telco
[4] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
[5] 
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Telco+Working+Group+%28alternate%29&iso=20141117T22&ah=1



Re: [openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Kevin L. Mitchell
On Mon, 2014-11-17 at 19:47 +, Shaunak Kashyap wrote:
> Poppy APIs refer to a flavor in their request and/or response
> representations. Some representations do this by using the flavor ID
> (e.g. 12345) while others use the flavor resource URI (e.g.
> {base}/flavors/12345).
> 
> In this context, I created a bug[2] to settle on one way of referring
> to a flavor across all API representations in Poppy. So the question
> to the API working group is this:
> 
> How should flavors be referred to in Poppy API representations? Some
> options to consider:
> 
> a) Using a “flavor_id” (or similar) property whose value is the flavor
> ID (e.g. 12345),
> b) Using a “flavor_ref” (or similar) property whose value is the
> flavor resource URI (e.g. {base}/flavors/12345),
> c) Using a “links” property whose value is an array of links, one of
> which has an object like { “rel”: “flavor”, “href”:
> “{base}/flavors/12345” },
> d) Similar to c) but using HAL[3] instead,
> e) A combination of a) and c),
> f) A combination of a) and d),
> g) Something else?

I can't think of any API in OpenStack that uses a URI for referencing
anything.  We may have *links* in the payloads, but those are provided
for convenience; anytime nova refers to a flavor, it returns the flavor
UUID.

I would, by the way, suggest using UUIDs rather than plain IDs, for
consistency with the rest of the APIs…
-- 
Kevin L. Mitchell 
Rackspace
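Kevin's UUID-over-plain-ID suggestion is easy to illustrate with the standard library; nothing below is Poppy- or Nova-specific.

```python
import uuid

# A UUID is opaque and globally unique, unlike a sequential ID like 12345.
flavor_id = str(uuid.uuid4())

# Canonical form: 32 hex digits plus 4 hyphens = 36 characters.
assert len(flavor_id) == 36

# Independently generated UUIDs do not collide in practice.
assert flavor_id != str(uuid.uuid4())
```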




[openstack-dev] Hyper-V CI missing logs

2014-11-17 Thread Octavian Ciuhandu
Hi all,

We have experienced a severe hardware failure, and the storage for the logs of 
the CI runs was also affected. We have recovered all that could be saved 
on the new dedicated server.

If you find any patchset that has the logs missing, please issue a “check 
hyper-v” command to generate a new run and ensure that the latest logs will be 
present. 

Please let us know of any other issues you might have regarding the Hyper-V CI.

Thanks,

Octavian.


[openstack-dev] [api] Requesting opinion/guideline on IDs vs hrefs

2014-11-17 Thread Shaunak Kashyap
Hi,

I’ve been working with the Poppy project[1] while they are designing their 
APIs. Poppy has a concept of a flavor. A single flavor resource has the URI, 
{base}/flavors/{flavor_id}. 

Poppy APIs refer to a flavor in their request and/or response representations. 
Some representations do this by using the flavor ID (e.g. 12345) while others 
use the flavor resource URI (e.g. {base}/flavors/12345).

In this context, I created a bug[2] to settle on one way of referring to a 
flavor across all API representations in Poppy. So the question to the API 
working group is this:

How should flavors be referred to in Poppy API representations? Some options to 
consider:

a) Using a “flavor_id” (or similar) property whose value is the flavor ID (e.g. 
12345),
b) Using a “flavor_ref” (or similar) property whose value is the flavor 
resource URI (e.g. {base}/flavors/12345),
c) Using a “links” property whose value is an array of links, one of which has 
an object like { “rel”: “flavor”, “href”: “{base}/flavors/12345” },
d) Similar to c) but using HAL[3] instead,
e) A combination of a) and c),
f) A combination of a) and d),
g) Something else?

Thanks,

Shaunak

References:
[1] https://wiki.openstack.org/wiki/Poppy
[2] https://bugs.launchpad.net/poppy/+bug/1392573
[3] 
https://github.com/mikekelly/hal_specification/blob/master/hal_specification.md
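For concreteness, the options above might look like the following payloads. This is a sketch only: the base URI is invented, and none of these bodies are taken from the actual Poppy API.

```python
base = "https://poppy.example.com/v1"   # invented base URI
flavor_id = "12345"
flavor_uri = "%s/flavors/%s" % (base, flavor_id)

# a) plain ID property
rep_a = {"name": "my-service", "flavor_id": flavor_id}

# b) full resource URI
rep_b = {"name": "my-service", "flavor_ref": flavor_uri}

# c) generic links array
rep_c = {"name": "my-service",
         "links": [{"rel": "flavor", "href": flavor_uri}]}

# d) HAL-style _links object
rep_d = {"name": "my-service",
         "_links": {"flavor": {"href": flavor_uri}}}

assert rep_b["flavor_ref"].endswith("/flavors/12345")
```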


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Doug Hellmann

On Nov 17, 2014, at 12:48 PM, Mehdi Abaakouk  wrote:

> 
> On 2014-11-17 15:26, James Page wrote:
> > This was discussed in the oslo.messaging summit session, and
> > re-enabling zeromq support in devstack is definitely on my todo list,
> > but I don't think that should block landing of the currently proposed
> > unit tests on oslo.messaging.
> 
> I would like to see these tests landed too, even if we need to install redis
> or whatever and start them manually. This will help a lot with reviewing the
> zmq work and ensure fixed things are not broken again.

What’s blocking us from setting that up? Is redis available in the CI 
environment?

Doug

> 
> ---
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 
> 
> 




Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Doug Hellmann

On Nov 17, 2014, at 10:01 AM, Denis Makogon  wrote:

> On Mon, Nov 17, 2014 at 4:26 PM, James Page  wrote:
> 
>> 
>> Hi Denis
>> 
>> On 17/11/14 07:43, Denis Makogon wrote:
>>> During the Paris Design Summit oslo.messaging session, a good
>>> question was raised about maintaining the ZeroMQ driver upstream (see
>>> the section “dropping ZeroMQ support in oslo.messaging” at [1]). As we
>>> all know, good thoughts always come afterwards. I’d like to propose
>>> several improvements in the process of maintaining and developing the
>>> ZeroMQ driver upstream.
>>> 
>>> Contribution focus. As we all see, there are enough patches
>>> that are trying to address certain problems related to the ZeroMQ
>>> driver.
>>> 
>>> A few of them are trying to add functional tests, which is definitely
>>> good, but … there’s always a ‘but’: they are not “gate”-able.
>> 
>> I'm not sure I understand your statement about them not being
>> "gate"-able - the functional/unit tests currently proposed for the zmq
>> driver run fine as part of the standard test suite execution - maybe
>> the confusion is over what 'functional' actually means, but in my
>> opinion until we have some level of testing of this driver, we can't
>> effectively make changes and fix bugs.
>> 
> 
> I do agree that there's confusion about what "functional testing" means.
> Another thing: what is the best solution? Unit tests are welcome, but they
> still remain unit tests (they are using mocks, etc.).
> I'd try to define what 'functional testing' means for me. Functional testing
> in oslo.messaging means that we've been using a real service for messaging
> (in this case - a deployed 0mq). So, the simple definition, in terms of
> OpenStack integration: we should be able to run the full Tempest test suite
> for OpenStack services that are using oslo.messaging with the zmq driver
> enabled. Am I right or not?

That’s a good goal, but that’s not what I had in mind for in-tree functional 
tests.

We should build a simple app using the Oslo libraries that we can place in the 
oslo.messaging source tree to use for exercising the communication patterns of 
the library with different drivers. Ideally that would be a single app (or set 
of apps) that could be used to test all drivers, with tests written against the 
app rather than the driver. Once we have the app and tests, we can define a new 
gate job to run those tests just for oslo.messaging, with a different 
configuration for each driver we support.

Doug
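The shape Doug describes — one small app exercised by the tests, with drivers swapped underneath per gate job — might look roughly like this. Everything here is invented for illustration; it is not the oslo.messaging API.

```python
import queue

class FakeDriver:
    """In-memory stand-in for a driver. Any real driver exposing
    send()/poll() could be configured here for a per-driver gate job."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        self._q.put(msg)

    def poll(self, timeout=1):
        return self._q.get(timeout=timeout)

class EchoApp:
    """The single test app: exercises a send/receive pattern through
    whatever driver it is handed. Tests target the app, not the driver."""
    def __init__(self, driver):
        self.driver = driver

    def round_trip(self, payload):
        self.driver.send(payload)
        return self.driver.poll()

app = EchoApp(FakeDriver())
assert app.round_trip("ping") == "ping"
```

The point of the design is that the same `EchoApp` tests run unchanged against every driver; only the driver object passed in differs per job.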

> 
> 
>>> My proposal for this topic is to change contribution focus from
>>> oslo.messaging by itself to OpenStack/Infra project and DevStack
>>> (subsequently to devstack-gate too).
>>> 
>>> I guess there would be questions of “why?”.  I think the answer is
>>> pretty obvious: we have a driver that is not being tested at all
>>> within DevStack and project integration.
>> 
>> This was discussed in the oslo.messaging summit session, and
>> re-enabling zeromq support in devstack is definately on my todo list,
>> but I don't think the should block landing of the currently proposed
>> unit tests on oslo.messaging.
>> 
> For example, https://review.openstack.org/#/c/128233/ talks about adding
> functional and unit tests. I'm OK with the unit tests, but what about the
> functional tests? Which oslo.messaging gate job runs them?
> 
> 
>>> Also I’d say that such a focus re-orientation would be very useful
>>> as a source of use cases and bugs eventually. Here’s a list of what
>>> we, as a team, should do first:
>>> 
>>> 1. Ensure that DevStack can successfully:
>>>    1. Install ZeroMQ.
>>>    2. Configure each project to work with the zmq driver from
>>>       oslo.messaging.
>>> 2. Ensure that we can successfully run a simple test plan for each
>>>    project (like boot VM, fill object store container, spin up volume,
>>>    etc.).
>> 
>> A better objective would be to be able to run a full tempest test as
>> conducted with the RabbitMQ driver, IMHO.
>> 
>> 
> I do agree with this too. But we should define a step-by-step plan for this
> type of testing. Since we want to see quick gate feedback, adding the full
> test suite would be overhead, at least for now.
> 
> 
>>> ZeroMQ driver maintainers community organization. During the design
>>> session a question was raised about who uses the zmq driver in
>>> production.
>>> 
>>> I’ve seen folks from Canonical and a few other companies. So, here are
>>> my proposals around improving the process of maintaining the given
>>> driver:
>>> 
>>> 1. With respect to best practices of the driver-maintenance procedure,
>>>    we might need to set up a community sub-group. What would it give us
>>>    and the project subsequently? It’s not entirely obvious, at least
>>>    for now, but I’d try to highlight a couple of points:
>>>    1. continuous driver stability
>>>    2. continuous community support (across all OpenStack projects that
>>>       are using the same model: a driver should have a maintaining
>>>       team, would it be a company or c

Re: [openstack-dev] [oslo.messaging] status of rabbitmq heartbeat support

2014-11-17 Thread Doug Hellmann

On Nov 17, 2014, at 12:54 PM, Mehdi Abaakouk  wrote:

> 
> 
> Hi,
> 
> Many people want to have heartbeat support in the rabbit driver of
> oslo.messaging (https://launchpad.net/bugs/856764)
> 
> We have different approaches to add this feature:
> 
> - Putting all the heartbeat logic into the rabbitmq driver.
> (https://review.openstack.org/#/c/126330/)
>But the patch uses a Python thread to do the work, and doesn't care about
> which oslo.messaging executor is used.
>But this patch is also the only already-written patch that adds
> heartbeat for all kinds of connections and that works.
> 
> - Like we quickly talked about at the summit, we can make the oslo.messaging
> executor responsible for triggering the
>heartbeat method of the driver
> (https://review.openstack.org/#/c/132979/,
> https://review.openstack.org/#/c/134542/)
>At first glance, this sounds good, but the executor is only used
> for the server side of oslo.messaging.
>And for the client side, this doesn't solve the issue.
> 
> Or just an other thought:
> 
> - Moving the executor setup/start/stop from the
> "MessageHandlingServer" object to the "Transport" object (note: 1
> transport instance == 1 driver instance),
>the 'transport' becomes responsible for registering 'tasks' with the
> 'executor',
>and tasks will be 'polling and dispatch' (one for each
> rpc/notification server created, like we have now) and a background task
> for the driver (the heartbeat in this case).
>So when we set up a new transport, it will automatically schedule the
> work to do on the underlying driver without needing to know if this is
> a client or server thing.
>This will also help a driver do background tasks within messaging
> (I know that the AMQP 1.0 driver has the same kind of need,
>currently met with a Python thread inside the driver too).
>This looks like a bigger piece of work.

I like the idea of fixing this in the transport. I wonder if we can combine 
options 1 and 3 so that the transport starts a heartbeat thread? That seems 
like it would take less rewriting, although I do like the design changes you 
propose in option 3.

Doug
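A transport-owned heartbeat thread, as in the combined option 1 + 3 above, can be sketched as follows. All names are invented; `send` stands in for whatever per-connection heartbeat the driver actually performs.

```python
import threading
import time

class Heartbeat:
    """Background heartbeat loop, roughly what option 1 does with a
    Python thread inside the rabbit driver (names are invented)."""

    def __init__(self, send, interval=0.05):
        self._send = send              # callable performing one heartbeat
        self._interval = interval      # seconds between heartbeats
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run)
        self._thread.daemon = True     # never block interpreter exit

    def start(self):
        self._thread.start()

    def _run(self):
        # Event.wait() doubles as an interruptible sleep: it returns True
        # (and ends the loop) as soon as stop() sets the event.
        while not self._stop.wait(self._interval):
            self._send()

    def stop(self):
        self._stop.set()
        self._thread.join()

beats = []
hb = Heartbeat(lambda: beats.append(time.time()))
hb.start()
time.sleep(0.3)
hb.stop()
assert len(beats) >= 2   # several heartbeats fired while running
```

Hanging this off the transport rather than the executor means both clients and servers get heartbeats, which is the gap options 2 leaves open.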

> 
> 
> So I think if we want a quick resolution of the heartbeat issue,
> we need to land the first solution when it's ready
> (https://review.openstack.org/#/c/126330/)
> 
> Otherwise any thoughts/comments are welcome.
> 
> Regards,
> 
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 
> 
> 
> 




Re: [openstack-dev] [all] Scale out bug-triage by making it easier for people to contribute

2014-11-17 Thread Dolph Mathews
As someone who has spent quite a bit of time triaging bugs, I'd be hugely
in favor of this. I'd probably be willing to pitch in on additional
projects, as well.

Is there already tooling for this built around Launchpad, or do we have to
roll our own?

With storyboard.openstack.org looming on the horizon, should such an effort
be put on the back burner for now?

On Mon, Nov 17, 2014 at 10:46 AM, Flavio Percoco  wrote:

> Greetings,
>
> Regardless of how big or small the bug backlog is for each project, I
> believe this is a common, annoying and difficult problem. At the oslo
> meeting today, we talked about how to address our bug triage
> process and I proposed something that I've seen done in other
> communities (rust-language [0]) that I consider useful and a good
> option for OpenStack too.
>
> The process consists of a bot that sends an email to every *volunteer*
> with 10 bugs to review/triage for the week. Each volunteer follows the
> triage standards, applies tags and provides information on whether the
> bug is still valid or not. The volunteer doesn't have to fix the bug,
> just triage it.
>
> In openstack, we could have a job that does this and then have people
> from each team volunteer to help with triage. The benefits I see are:
>
> * Interested folks don't have to go through the list and filter the
> bugs they want to triage. The bot should be smart enough to pick the
> oldest, most critical, etc.
>
> * It's a totally opt-in process and volunteers can obviously ignore
> emails if they don't have time that week.
>
> * It helps scale out the triage process without poking people around
> and without having to do a "call for volunteers" every meeting/cycle/etc.
>
> The above doesn't solve the problem completely but, just like reviews,
> it'd be an optional, completely opt-in process that people can sign up
> for.
>
> Thoughts?
> Flavio
>
> [0] https://mail.mozilla.org/pipermail/rust-dev/2013-April/003668.html
>
> --
> @flaper87
> Flavio Percoco
>
>
>
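The core of the bot Flavio describes is just a round-robin assignment over the oldest open bugs. A sketch with made-up data follows; this is not Launchpad tooling, and the field names are invented.

```python
import itertools

def assign_triage(bugs, volunteers, per_volunteer=10):
    """Round-robin the oldest bugs across volunteers, giving each at
    most per_volunteer bugs (a sketch of the proposed weekly bot)."""
    bugs = sorted(bugs, key=lambda b: b["created"])  # oldest first
    batches = dict((v, []) for v in volunteers)
    cycle = itertools.cycle(volunteers)
    for bug in bugs:
        for _ in range(len(volunteers)):
            v = next(cycle)
            if len(batches[v]) < per_volunteer:
                batches[v].append(bug["id"])
                break
        else:
            break  # every volunteer already has a full batch
    return batches

# 25 fake bugs; smaller "created" value means older.
bugs = [{"id": n, "created": 100 - n} for n in range(25)]
out = assign_triage(bugs, ["alice", "bob"])
assert len(out["alice"]) == 10 and len(out["bob"]) == 10
```

Each batch would then be mailed to its volunteer; bugs left unassigned simply wait for next week's run.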


Re: [openstack-dev] Removing Nova V2 API xml support

2014-11-17 Thread Davanum Srinivas
Thanks Matt, will pester Nova-core when the time is right :)

-- dims

On Mon, Nov 17, 2014 at 1:57 PM, Matthew Treinish  wrote:
> On Mon, Nov 17, 2014 at 12:40:12PM -0500, Davanum Srinivas wrote:
>> Matt,
>>
>> Thanks for the help/reviews! Looks like the sequence is:
>>
>> DevStack / Icehouse - https://review.openstack.org/#/c/134972/
>> DevStack / Juno - https://review.openstack.org/#/c/134975/
>> Tempest - https://review.openstack.org/#/c/134924/
>> Tempest - https://review.openstack.org/#/c/134985/
>> Nova - https://review.openstack.org/#/c/134332/
>>
>> Right?
>
> Yeah, that looks right.
>
> The only caveat is that before 134985 can land the procedure is that a Nova 
> core
> has to +2 the 134332, and just leave a comment on the Tempest review saying 
> it's
> got Nova core sign off with the link. (it's just procedural, I know that
> everyone wants to kill the XML API, myself included)
>
> -Matt Treinish
>
>> On Mon, Nov 17, 2014 at 8:48 AM, Matthew Treinish  
>> wrote:
>> > On Mon, Nov 17, 2014 at 08:24:47AM -0500, Davanum Srinivas wrote:
>> >> Sean, Joe, Dean, all,
>> >>
>> >> Here's the Nova change to disable the V2 XML support:
>> >> https://review.openstack.org/#/c/134332/
>> >>
>> >> To keep all the jobs happy, we'll need changes in tempest, devstack,
>> >> devstack-gate as well:
>> >>
>> >> Tempest : https://review.openstack.org/#/c/134924/
>> >> Devstack : https://review.openstack.org/#/c/134685/
>> >> Devstack-gate : https://review.openstack.org/#/c/134714/
>> >>
>> >> Please see if i am on the right track.
>> >>
>> >
>> > So this approach will work, but the direction that neutron took and that
>> > keystone is in the process of undertaking for doing the same was basically 
>> > the
>> > opposite. Instead of just overriding the default tempest value on master
>> > devstack to disable xml testing, the devstack stable branches are updated to
>> > ensure xml_api is True when running tempest from the stable branches, and 
>> > then
>> > the default in tempest is switched to False. The advantage of this 
>> > approach is
>> > that the default value for tempest running against any cloud will always 
>> > work.
>> >
>> > The patches which landed for neutron doing this:
>> >
>> > Devstack:
>> > https://review.openstack.org/#/c/130368/
>> > https://review.openstack.org/#/c/130367/
>> >
>> > Tempest:
>> > https://review.openstack.org/#/c/127667/
>> >
>> > Neutron:
>> > https://review.openstack.org/#/c/128095/
>> >
>> >
>> > Ping me on IRC and we can work through the process, because things need to 
>> > land
>> > in a particular order to make this approach work. But, basically the 
>> > approach
>> > is first the stable devstack changes need to land which enable the testing 
>> > on
>> > stable, followed by a +2 on the failing Nova patch saying the approach is
>> > approved, and then we can land the tempest patch which switches the 
>> > default,
>> > which will let the Nova change get through the gate.
>> >
>> > -Matt Treinish
>> >
>> >
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [swift] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .

2014-11-17 Thread Tim Bell
> -Original Message-
> From: Tim Bell [mailto:tim.b...@cern.ch]
> Sent: 17 November 2014 19:44
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [swift] LTFS integration with OpenStack Swift for
> scenario like - Data Archival as a Service .
> 
> > -Original Message-
> > From: Christian Schwede [mailto:christian.schw...@enovance.com]
> > Sent: 17 November 2014 14:36
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [swift] LTFS integration with OpenStack
> > Swift for scenario like - Data Archival as a Service .
> >
...
> 
> Certainly not impossible and lots of prior art in the various HSM systems 
> such as
> HPSS or CERN's CASTOR.
> 

One additional thought would be to use an open source archive/retrieve solution 
such as Bacula (http://blog.bacula.org/) which has many of the above features 
in it.

> Tim
> 
> > Christian
> >


Re: [openstack-dev] Removing Nova V2 API xml support

2014-11-17 Thread Matthew Treinish
On Mon, Nov 17, 2014 at 12:40:12PM -0500, Davanum Srinivas wrote:
> Matt,
> 
> Thanks for the help/reviews! Looks like the sequence is:
> 
> DevStack / Icehouse - https://review.openstack.org/#/c/134972/
> DevStack / Juno - https://review.openstack.org/#/c/134975/
> Tempest - https://review.openstack.org/#/c/134924/
> Tempest - https://review.openstack.org/#/c/134985/
> Nova - https://review.openstack.org/#/c/134332/
> 
> Right?

Yeah, that looks right. 

The only caveat is that before 134985 can land the procedure is that a Nova core
has to +2 the 134332, and just leave a comment on the Tempest review saying it's
got Nova core sign off with the link. (it's just procedural, I know that
everyone wants to kill the XML API, myself included)

-Matt Treinish

> On Mon, Nov 17, 2014 at 8:48 AM, Matthew Treinish  
> wrote:
> > On Mon, Nov 17, 2014 at 08:24:47AM -0500, Davanum Srinivas wrote:
> >> Sean, Joe, Dean, all,
> >>
> >> Here's the Nova change to disable the V2 XML support:
> >> https://review.openstack.org/#/c/134332/
> >>
> >> To keep all the jobs happy, we'll need changes in tempest, devstack,
> >> devstack-gate as well:
> >>
> >> Tempest : https://review.openstack.org/#/c/134924/
> >> Devstack : https://review.openstack.org/#/c/134685/
> >> Devstack-gate : https://review.openstack.org/#/c/134714/
> >>
> >> Please see if i am on the right track.
> >>
> >
> > So this approach will work, but the direction that neutron took and that
> > keystone is in the process of undertaking for doing the same was basically 
> > the
> > opposite. Instead of just overriding the default tempest value on master
> > devstack to disable xml testing, the devstack stable branches are updated to
> > ensure xml_api is True when running tempest from the stable branches, and 
> > then
> > the default in tempest is switched to False. The advantage of this approach 
> > is
> > that the default value for tempest running against any cloud will always 
> > work.
> >
> > The patches which landed for neutron doing this:
> >
> > Devstack:
> > https://review.openstack.org/#/c/130368/
> > https://review.openstack.org/#/c/130367/
> >
> > Tempest:
> > https://review.openstack.org/#/c/127667/
> >
> > Neutron:
> > https://review.openstack.org/#/c/128095/
> >
> >
> > Ping me on IRC and we can work through the process, because things need to 
> > land
> > in a particular order to make this approach work. But, basically the 
> > approach
> > is first the stable devstack changes need to land which enable the testing 
> > on
> > stable, followed by a +2 on the failing Nova patch saying the approach is
> > approved, and then we can land the tempest patch which switches the default,
> > which will let the Nova change get through the gate.
> >
> > -Matt Treinish
> >
> >
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 




Re: [openstack-dev] [Neutron] Stale patches

2014-11-17 Thread Collins, Sean
Perhaps we should re-introduce the auto-expiration of patches, albeit on
a very leisurely timeframe. Before, it was like 1 week to expire a
patch, which was a bit aggressive. Perhaps we could auto-expire patches
that haven't been touched in 4 or 6 weeks, to expire patches that have
truly been abandoned by authors?

-- 
Sean M. Collins
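The 4-6 week expiry Sean suggests boils down to a date cutoff. A sketch follows; real auto-abandon would query Gerrit's REST API, and the patch records here are made up.

```python
import datetime

def stale_patches(patches, now, weeks=4):
    """Return IDs of patches untouched for `weeks` weeks -- candidates
    for auto-expiration (a sketch, not actual Gerrit tooling)."""
    cutoff = now - datetime.timedelta(weeks=weeks)
    return [p["id"] for p in patches if p["updated"] < cutoff]

now = datetime.datetime(2014, 11, 17)
patches = [
    {"id": "I1", "updated": datetime.datetime(2014, 11, 10)},  # 1 week old
    {"id": "I2", "updated": datetime.datetime(2014, 9, 1)},    # ~11 weeks old
]
assert stale_patches(patches, now) == ["I2"]
```

Widening `weeks` is how the "leisurely timeframe" knob would be turned.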


Re: [openstack-dev] [swift] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .

2014-11-17 Thread Tim Bell
> -Original Message-
> From: Christian Schwede [mailto:christian.schw...@enovance.com]
> Sent: 17 November 2014 14:36
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [swift] LTFS integration with OpenStack Swift for
> scenario like - Data Archival as a Service .
> 
> On 14.11.14 20:43, Tim Bell wrote:
> > It would need to be tiered (i.e. migrate whole collections rather than
> > files) and a local catalog would be needed to map containers to tapes.
> > Timeouts would be an issue since we are often waiting hours for recall
> > (to ensure that multiple recalls for the same tape are grouped).
> >
> > It is not an insolvable problem but it is not just a 'use LTFS' answer.
> 
> There were some ad-hoc discussions during the last summit about using Swift
> (API) to access data that is stored on tape. At the same time we talked about
> possible data migrations from one storage policy to another, and this might be
> an option to think about.
> 
> Something like this:
> 
> 1. Data is stored in a container with a Storage Policy (SP) that defines a
>    time-based data migration to some other place.
> 2. After some time, data is migrated to tape, and only some stubs
>    (zero-byte objects) are left on disk.
> 3. If a client requests such an object, the client gets an error stating
>    that the object is temporarily not available (unfortunately there is no
>    suitable HTTP response code for this yet).
> 4. At this time the object is scheduled to be restored from tape.
> 5. Finally the object is read from tape and stored on disk again. It will
>    be deleted from disk again after some time.
> 
> Using this approach, only minor modifications are required inside Swift,
> for example to send a notification to an external consumer to migrate data
> back and forth and to handle requests for empty stub files.
> The migration itself should be done by an external worker that works with
> existing solutions from tape vendors.
> 
> Just an idea, but it might be worth investigating further (because more and
> more people seem to be interested in this, especially from the science
> community).
> 
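As a toy model of the stub-object flow described above (not Swift code; all class and attribute names are invented for illustration):

```python
class TemporarilyUnavailable(Exception):
    """Stand-in for the missing 'object is on tape' HTTP response."""

class TapeTieredStore:
    """Toy model of disk + tape tiers with zero-byte stubs on disk."""

    def __init__(self):
        self.on_disk = {}        # object name -> data
        self.stubs = {}          # object name -> tape id (zero-byte stub)
        self.restore_queue = []  # work list for an external tape worker

    def migrate_to_tape(self, name, tape_id):
        # Step 2: move data off disk, leave only a stub behind.
        self.stubs[name] = tape_id
        del self.on_disk[name]

    def get(self, name):
        if name in self.on_disk:
            return self.on_disk[name]
        if name in self.stubs:
            # Steps 3-4: schedule a restore and tell the client to retry.
            self.restore_queue.append(name)
            raise TemporarilyUnavailable(name)
        raise KeyError(name)

    def restore_from_tape(self, name, data):
        # Step 5: the external worker puts the object back on disk.
        self.on_disk[name] = data
        del self.stubs[name]
```

The point of the sketch is that Swift itself only needs stub handling and a notification hook; the actual tape I/O lives entirely in the external worker.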

This sounds something like DMAPI (http://en.wikipedia.org/wiki/DMAPI);
there may be some concepts from that which would help in constructing an API
definition for the driver.

If you work on the basis that a container is either online or offline, you 
would need a basic data store which told you which robot/tape held that 
container and some method for handling containers spanning multiple tapes or 
multiple containers on a tape.

The semantics when a new object was added to a container would also be needed 
for different scenarios (such as container offline already and container being 
archived/recalled).

Other operations needed would be to move a container to new media (such as when 
recycling old tapes), initialising tape media as being empty but defined in the 
system, handling container deletion on offline media (with associated garbage 
collection), validation of an offline tape, ...
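A minimal catalog of the kind sketched above could start as a bidirectional many-to-many map (class and method names are invented for illustration):

```python
class TapeCatalog:
    """Toy catalog mapping containers to the tapes that hold them.

    Containers may span multiple tapes and a tape may hold multiple
    containers, so both directions of the relation are kept.
    """

    def __init__(self):
        self.container_tapes = {}  # container -> set of tape ids
        self.tape_containers = {}  # tape id -> set of containers

    def record(self, container, tape):
        self.container_tapes.setdefault(container, set()).add(tape)
        self.tape_containers.setdefault(tape, set()).add(container)

    def tapes_for(self, container):
        """Which tapes must be mounted to recall this container?"""
        return self.container_tapes.get(container, set())

    def containers_on(self, tape):
        """Which containers must be migrated before recycling this tape?"""
        return self.tape_containers.get(tape, set())
```

The inverse lookup is what the media-recycling operation above needs: before wiping an old tape, list the containers still on it and move them first.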
 
Certainly not impossible and lots of prior art in the various HSM systems such 
as HPSS or CERN's CASTOR.

Tim

> Christian
> 


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Monday, 17 November 2014, James Page wrote:

> On 17/11/14 09:01, Denis Makogon wrote:
> > I'm not sure I understand your statement about them not being
> > "gate"-able - the functional/unit tests currently proposed for the
> > zmq driver run fine as part of the standard test suite execution -
> > maybe the confusion is over what 'functional' actually means, but
> > in my opinion until we have some level of testing of this driver,
> > we can't effectively make changes and fix bugs.
> >
> > I do agree that there's confusion about what "functional testing"
> > means. Another thing: what is the best solution? Unit tests are
> > welcome, but they still remain unit tests (they are using
> > mocks, etc.). I'd try to define what 'functional testing' means for
> > me. Functional testing in oslo.messaging means that we've been
> > using a real service for messaging (in this case - deployed 0mq). So,
> > the simple definition, in terms of OpenStack integration: we should
> > be able to run the full Tempest test suite for OpenStack services that
> > are using oslo.messaging with the zmq driver enabled. Am I right or
> > not?
>
> 0mq is just a set of messaging semantics on top of tcp/ipc sockets; so
> its possible to test the entire tcp/ipc messaging flow standalone i.e.
> without involving any openstack services.  That's what the current
> test proposal includes - unit tests which mock out most things, and
> base functional tests that validate the tcp/ipc messaging flows via
> the zmq-receiver proxy.  These are things that *just work* under a tox
> environment and don't require any special setup.
>
Hm, I see what you've been trying to say. But unfortunately it breaks the
whole idea of TDD. Why can't we just spend some time on getting non-voting
gates? OK, let me describe what would satisfy all of us: let's write up docs
that describe how to set up an environment manually to test incoming
patches.

Anyway, this topic is not about disagreement; it's about building team
relationships.
I'd like to discuss next steps on developing zmq driver.

Kind regards,
Denis M.

> This will be supplemented with good devstack support and full tempest
> gate, but lets start from the bottom up please!  The work that's been
> done so far allows us to move forward with bug fixing and re-factoring
> that's backed up on having a base level of unit/functional tests.
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.p...@ubuntu.com 
> jamesp...@debian.org 
>


Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world

2014-11-17 Thread Robert Collins
On 18 November 2014 01:46, Sean Dague  wrote:
> On 11/17/2014 07:26 AM, Alan Pevec wrote:
 We don't support 2.6 any more in OpenStack. If we decide to pin
 testtools on stable/*, we could just let this be.
>>>
>>> We still support 2.6 on the python clients and oslo libraries - but
>>> indeed not for trove itself with master.
>>
>> What Andreas said, also testtools claims "testtools gives you the very
>> latest in unit testing technology in a way that will work with Python
>> 2.6, 2.7, 3.1 and 3.2." so it should be fixed, OpenStack or not.
>
> Well, the python 2.6 support was only added for OpenStack. And I think
> it's fine to drop that burden now that we don't need it (as long as we
> pin appropriately).

Huh? No :) - testtools had Python 2.6 support long before OpenStack
existed :) - testtools has kept 2.6 support because a) it's easy and b)
there are still groups (like, but not limited to, OpenStack) that care
about it.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] [glance] glance_store scheduled release 0.1.10

2014-11-17 Thread Nikhil Komawar
Hi all,

Following the corrections mentioned last week to bring Glance and the
glance_store library to a stable state, we've got a few changes [0, 1, 2]
merged. The library is set for the next release in a couple of hours or so.

Just wanted to give a heads-up and see if there were any concerns.

[0] https://review.openstack.org/#/c/131528/
[1] https://review.openstack.org/#/c/131838/
[2] https://review.openstack.org/#/c/130200/

Thanks,
-Nikhil


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Monday, 17 November 2014, Mehdi Abaakouk wrote:

>
> On 2014-11-17 15:26, James Page wrote:
>
>> This was discussed in the oslo.messaging summit session, and
>> re-enabling zeromq support in devstack is definitely on my todo list,
>> but I don't think this should block landing of the currently proposed
>> unit tests on oslo.messaging.
>>
>
> I would like to see these tests landed too, even if we need to install
> redis or whatever and start them manually. This will help a lot to review
> zmq stuff and ensure fixed things are not broken again.
>
>
I do agree that we need to find a way to avoid blocking zmq development.
But I don't think that such a way of testing will eventually lead us to
failure. Why not just focus on setting up a testing environment that can
be used for gating? As another option, we could consider getting at least
third-party CI for the zmq driver until we have an infra gating environment.


> - ---
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
>
>

Kind regards,
Denis M.


Re: [openstack-dev] [Fuel] CentOS falls into interactive mode: Unsupported hardware

2014-11-17 Thread Jesse Pretorius
>
> On Mon, Nov 17, 2014 at 4:41 PM, Matthew Mosesohn 
> wrote:
>
>> I actually reported this to CentOS back in May:
>> https://bugs.centos.org/view.php?id=7136
>> It's a bug/feature in Anaconda. It can be worked around quite easily
>> by adding "unsupported_hardware" to kernel params or to the kickstart
>> file.
>>
>> I reported the bug because there's no support for CentOS (except from
>> the community), so this error message has no true value in a
>> non-commercial OS.
>>
>> On Mon, Nov 17, 2014 at 4:30 PM, Mike Scherbakov
>>  wrote:
>> > Hi all,
>> > I was skimming through a nicely written blogpost about Fuel experience
>> [1],
>> > and noticed "This hardware ... not supported by CentOS" [2] on one of
>> the
>> > screenshots. Looks like CentOS goes into interactive mode and complains
>> > about unsupported hardware.
>>
>
This was resolved for the Fuel Master:
https://bugs.launchpad.net/fuel/+bug/1322502

It would appear that it wasn't resolved for the deployment images though.
There's a doc bug: https://bugs.launchpad.net/fuel/+bug/1359494

It would be best to get it fixed up for any and all deployments in a Fuel
build.
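The workaround Matthew mentions amounts to one directive in the kickstart file, or the same token on the installer's kernel command line:

```
# kickstart file (Anaconda, CentOS 6.x) - suppress the unsupported
# hardware prompt so installs stay non-interactive:
unsupported_hardware
```

The same `unsupported_hardware` token can instead be appended to the kernel boot parameters, as noted in the bug report above; verify the spelling against the Anaconda version being deployed.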


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-17 Thread Armando M.
On 17 November 2014 01:13, Mathieu Rohon  wrote:

> Hi
>
> On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
> > Last Friday I recall we had two discussions around this topic. One in the
> > morning, which I think led to Maruti to push [1]. The way I understood
> [1]
> > was that it is an attempt at unifying [2] and [3], by choosing the API
> > approach of one and the architectural approach of the other.
> >
> > [1] https://review.openstack.org/#/c/134179/
> > [2] https://review.openstack.org/#/c/100278/
> > [3] https://review.openstack.org/#/c/93613/
> >
> > Then there was another discussion in the afternoon, but I am not 100% of
> the
> > outcome.
>
> Me neither, that's why I'd like ian, who led this discussion, to sum
> up the outcome from its point of view.
>
> > All this churn makes me believe that we probably just need to stop
> > pretending we can achieve any sort of consensus on the approach and let
> the
> > different alternatives develop independently, assumed they can all
> develop
> > independently, and then let natural evolution take its course :)
>
> I tend to agree, but I think that one of the reasons why we are looking
> for a consensus is that API evolutions proposed through
> Neutron specs are rejected by core devs, because they rely on external
> components (SDN controllers, proprietary hardware...) or because they
> are not a high priority for Neutron core devs.
>

I am not sure I agree with this statement. I am not aware of any proposal
here being dependent on external components as you suggested, but even if
it were, an API can be implemented in multiple ways, just like the (core)
Neutron API can be implemented using a fully open source solution or an
external party like an SDN controller.


> By finding a consensus, we show that several players are interested in
> such an API, and it helps to convince core-dev that this use-case, and
> its API, is missing in neutron.
>

Right, but it seems we are struggling to find this consensus. In this
particular instance, where we are trying to address the use case of L2
Gateway (i.e. allow Neutron logical networks to be extended with physical
ones), it seems that everyone has a different opinion as to what
abstraction we should adopt in order to express and configure the L2
gateway entity, and at the same time I see no convergence in sight.

Now if the specific L2 Gateway case were to be considered part of the core
Neutron API, then such a consensus would be mandatory IMO, but if it isn't,
is there any value in striving for that consensus at all costs? Perhaps
not, and we can let multiple attempts experiment and innovate
independently.

So far, all my data points seem to imply that such an abstraction need not
be part of the core API.


> Now, if there is room to easily propose new APIs in Neutron, it makes
> sense to let new APIs appear and evolve, and then "let natural
> evolution take its course", as you said.
> To me, this is in the scope of the "advanced services" project.
>

Advanced Services may be a misnomer, but as an incubation feature, sure,
why not?


>
> > Ultimately the biggest debate is on what the API model needs to be for
> these
> > abstractions. We can judge on which one is the best API of all, but
> > sometimes this ends up being a religious fight. A good API for me might
> not
> > be a good API for you, even though I strongly believe that a good API is
> one
> > that can:
> >
> > - be hard to use incorrectly
> > - clear to understand
> > - does one thing, and one thing well
> >
> > So far I have been unable to be convinced why we'd need to cram more than
> > one abstraction in one single API, as it does violate the above mentioned
> > principles. Ultimately I like the L2 GW API proposed by 1 and 2 because
> it's
> > in line with those principles. I'd rather start from there and iterate.
> >
> > My 2c,
> > Armando
> >
> > On 14 November 2014 08:47, Salvatore Orlando 
> wrote:
> >>
> >> Thanks guys.
> >>
> >> I think you've answered my initial question. Probably not in the way I
> was
> >> hoping it to be answered, but it's ok.
> >>
> >> So now we have potentially 4 different blueprints describing more or less
> >> overlapping use cases that we need to reconcile into one?
> >> If the above is correct, then I suggest we go back to the use case and
> >> make an effort to abstract a bit from thinking about how those use cases
> >> should be implemented.
> >>
> >> Salvatore
> >>
> >> On 14 November 2014 15:42, Igor Cardoso  wrote:
> >>>
> >>> Hello all,
> >>> Also, what about Kevin's https://review.openstack.org/#/c/87825/? One
> of
> >>> its use cases is exactly the L2 gateway. These proposals could
> probably be
> >>> inserted in a more generic work for moving existing datacenter L2
> resources
> >>> to Neutron.
> >>> Cheers,
> >>>
> >>> On 14 November 2014 15:28, Mathieu Rohon 
> wrote:
> 
>  Hi,
> 
>  As far as I understood last friday afternoon dicussions during the
>  design summit, this use case is in the scope o

Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread James Page

On 17/11/14 09:01, Denis Makogon wrote:
> I'm not sure I understand your statement about them not being
> "gate"-able - the functional/unit tests currently proposed for the
> zmq driver run fine as part of the standard test suite execution -
> maybe the confusion is over what 'functional' actually means, but
> in my opinion until we have some level of testing of this driver,
> we can't effectively make changes and fix bugs.
> 
> I do agree that there's confusion about what "functional testing"
> means. Another thing: what is the best solution? Unit tests are
> welcome, but they still remain unit tests (they are using
> mocks, etc.). I'd try to define what 'functional testing' means for
> me. Functional testing in oslo.messaging means that we've been
> using a real service for messaging (in this case - deployed 0mq). So,
> the simple definition, in terms of OpenStack integration: we should
> be able to run the full Tempest test suite for OpenStack services that
> are using oslo.messaging with the zmq driver enabled. Am I right or
> not?

0mq is just a set of messaging semantics on top of tcp/ipc sockets; so
its possible to test the entire tcp/ipc messaging flow standalone i.e.
without involving any openstack services.  That's what the current
test proposal includes - unit tests which mock out most things, and
base functional tests that validate the tcp/ipc messaging flows via
the zmq-receiver proxy.  These are things that *just work* under a tox
environment and don't require any special setup.

This will be supplemented with good devstack support and full tempest
gate, but lets start from the bottom up please!  The work that's been
done so far allows us to move forward with bug fixing and re-factoring
that's backed up on having a base level of unit/functional tests.

-- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org



[openstack-dev] [oslo.messaging] status of rabbitmq heartbeat support

2014-11-17 Thread Mehdi Abaakouk




Hi,

Many people want to have heartbeat support in the rabbit driver of
oslo.messaging (https://launchpad.net/bugs/856764).


We have different approaches to add this feature:

- Putting all the heartbeat logic into the rabbitmq driver
  (https://review.openstack.org/#/c/126330/).
  The patch uses a Python thread to do the work and doesn't care about
  which oslo.messaging executor is used.
  On the other hand, this is also the only already-written patch that adds
  heartbeat support for all kinds of connections, and it works.


- Like we quickly talked about at the summit, we can make the oslo.messaging
  executor responsible for triggering the heartbeat method of the driver
  (https://review.openstack.org/#/c/132979/,
  https://review.openstack.org/#/c/134542/).
  At first glance, this sounds good, but the executor is only used on the
  server side of oslo.messaging, so for the client side this doesn't solve
  the issue.

Or just another thought:

- Moving the executor setup/start/stop from the "MessageHandlingServer"
  object to the "Transport" object (note: 1 transport instance == 1 driver
  instance). The transport becomes responsible for registering 'tasks' into
  the executor, where tasks are 'poll and dispatch' loops (one for each
  rpc/notification server created, like we have now) plus a background task
  for the driver (the heartbeat in this case).
  So when we set up a new transport, it will automatically schedule the
  work to do on the underlying driver without needing to know whether this
  is a client or a server.
  This would help a driver do background tasks within oslo.messaging as
  well (I know the AMQP 1.0 driver has the same kind of need, currently met
  with a Python thread inside the driver too).
  This looks like bigger work.
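As a rough illustration of the first approach, a driver-owned heartbeat thread might look like this (a sketch, not the code under review; `heartbeat_check` is modeled on kombu's connection API, and the class name is invented):

```python
import threading

class HeartbeatThread(threading.Thread):
    """Option 1 in miniature: a plain Python thread that pings the broker,
    independent of whichever oslo.messaging executor is in use.

    `connection` is any object exposing a heartbeat_check() method; the
    real patch works against the rabbit driver's connection object.
    """

    def __init__(self, connection, interval=15.0):
        super(HeartbeatThread, self).__init__()
        self.daemon = True               # don't keep the process alive
        self.connection = connection
        self.interval = interval
        self._stop_event = threading.Event()

    def run(self):
        # Fire heartbeats until asked to stop; a failed check would
        # normally trigger the driver's reconnect logic (omitted here).
        while not self._stop_event.wait(self.interval):
            self.connection.heartbeat_check()

    def stop(self):
        self._stop_event.set()
        self.join()
```

This is exactly the property being debated above: the thread works for both client- and server-side connections, but it lives outside the executor's control.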


So I think if we want a quick resolution of the heartbeat issue,
we need to land the first solution when it's ready 
(https://review.openstack.org/#/c/126330/)


Otherwise any thoughts/comments are welcome.

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht






Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-17 Thread Lin Hua Cheng
On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote:
> Le 17/11/2014 14:19, Matthias Runge a écrit :
>
>> There is already horizon on pypi[1]
>>
>> IMHO this will lead only to more confusion.
>>
>> Matthias
>>
>>
>> [1] https://pypi.python.org/pypi/horizon/2012.2
>
> Well the current "horizon" on Pypi is "The OpenStack Dashboard" +
> horizon(_lib) included
>
> If the future "horizon" on pypi is "openstack_dashboard" alone, it would
> still pull "horizon_lib" as a dependency, so it would not break the
> existing setup.
>
> So indeed the "horizon" package itself in Pypi would not have
> horizon(_lib) in it anymore, but the "pip install horizon" would pull
> everything due to the dependency horizon will have on horizon_lib.
>
> I find this the least confusing option, and the "horizon" package on Pypi
> would still be seen as "The OpenStack Dashboard" like it is now. We
> would only add a "horizon_lib" package on Pypi.
> Therefore existing third-party "requirements.txt" files would not break,
> because they would pull horizon_lib with horizon, and they would still
> import the proper module. All backwards compatibility (requirements
> and modules) is therefore preserved.
>

+1 on this proposal as well

On Mon, Nov 17, 2014 at 6:00 PM, Jason Rist  wrote:

> On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote:
> > Le 17/11/2014 14:19, Matthias Runge a écrit :
> >
> >> There is already horizon on pypi[1]
> >>
> >> IMHO this will lead only to more confusion.
> >>
> >> Matthias
> >>
> >>
> >> [1] https://pypi.python.org/pypi/horizon/2012.2
> >
> > Well the current "horizon" on Pypi is "The OpenStack Dashboard" +
> > horizon(_lib) included
> >
> > If the future "horizon" on pypi is "openstack_dashboard" alone, it would
> > still pull "horizon_lib" as a dependency, so it would not break the
> > existing setup.
> >
> > So indeed the "horizon" package itself in Pypi would not have
> > horizon(_lib) in it anymore, but the "pip install horizon" would pull
> > everything due to the dependency horizon will have on horizon_lib.
> >
> > I find this the least confusing option, and the "horizon" package on Pypi
> > would still be seen as "The OpenStack Dashboard" like it is now. We
> > would only add a "horizon_lib" package on Pypi.
> > Therefore existing third-party "requirements.txt" files would not break,
> > because they would pull horizon_lib with horizon, and they would still
> > import the proper module. All backwards compatibility (requirements
> > and modules) is therefore preserved.
> >
> >
>
> +1 to this solution
>
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack Management UI
> Red Hat, Inc.
> openuc: +1.972.707.6408
> mobile: +1.720.256.3933
> Freenode: jrist
> github/identi.ca: knowncitizen
>


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Mehdi Abaakouk



On 2014-11-17 15:26, James Page wrote:

> This was discussed in the oslo.messaging summit session, and
> re-enabling zeromq support in devstack is definitely on my todo list,
> but I don't think this should block landing of the currently proposed
> unit tests on oslo.messaging.


I would like to see these tests landed too, even if we need to install
redis or whatever and start them manually. This will help a lot to review
zmq stuff and ensure fixed things are not broken again.


---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht





Re: [openstack-dev] Process for program without lead

2014-11-17 Thread Kevin L. Mitchell
On Mon, 2014-11-17 at 17:25 +, Hayes, Graham wrote:
> Quite often people will come forward in a vacuum - people who thought
> they were not right for the job, or felt that someone else would suit
> the role better can come forward in a by-election. (I only have
> anecdotal evidence for this, but it is first hand, based on other
> voluntary, self organising groups I have been part of, and run elections
> for over the years).
> 
> I would suggest that when nominations close with no candidates, they re-open
> immediately for one week, at which point, if there are still no candidates,
> it goes to the TC.

While I think the point is valid, an alternate process would be for the
election coordinator(s) to point out the lack of candidates and issue a
reminder for the procedure a certain amount of time prior to the end of
the nomination period.  Say, if no candidates have been put forward with
3 days left in the nomination period, then the election coordinator
would send out the appropriate reminder email.  I think this would have
the same effect as the one week re-open period without delaying the
election process.
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] Removing Nova V2 API xml support

2014-11-17 Thread Davanum Srinivas
Matt,

Thanks for the help/reviews! Looks like the sequence seems to be:

DevStack / Icehouse - https://review.openstack.org/#/c/134972/
DevStack / Juno - https://review.openstack.org/#/c/134975/
Tempest - https://review.openstack.org/#/c/134924/
Tempest - https://review.openstack.org/#/c/134985/
Nova - https://review.openstack.org/#/c/134332/

Right?

thanks,
dims

On Mon, Nov 17, 2014 at 8:48 AM, Matthew Treinish  wrote:
> On Mon, Nov 17, 2014 at 08:24:47AM -0500, Davanum Srinivas wrote:
>> Sean, Joe, Dean, all,
>>
>> Here's the Nova change to disable the V2 XML support:
>> https://review.openstack.org/#/c/134332/
>>
>> To keep all the jobs happy, we'll need changes in tempest, devstack,
>> devstack-gate as well:
>>
>> Tempest : https://review.openstack.org/#/c/134924/
>> Devstack : https://review.openstack.org/#/c/134685/
>> Devstack-gate : https://review.openstack.org/#/c/134714/
>>
>> Please see if i am on the right track.
>>
>
> So this approach will work, but the direction that neutron took and that
> keystone is in the process of undertaking for doing the same was basically the
> opposite. Instead of just overriding the default tempest value on master
> devstack to disable xml testing, the devstack stable branches are updated to
> ensure xml_api is True when running tempest from the stable branches, and then
> the default in tempest is switched to False. The advantage of this approach is
> that the default value for tempest running against any cloud will always work.
>
> The patches which landed for neutron doing this:
>
> Devstack:
> https://review.openstack.org/#/c/130368/
> https://review.openstack.org/#/c/130367/
>
> Tempest:
> https://review.openstack.org/#/c/127667/
>
> Neutron:
> https://review.openstack.org/#/c/128095/
>
>
> Ping me on IRC and we can work through the process, because things need to
> land in a particular order to make this approach work. But, basically the approach
> is first the stable devstack changes need to land which enable the testing on
> stable, followed by a +2 on the failing Nova patch saying the approach is
> approved, and then we can land the tempest patch which switches the default,
> which will let the Nova change get through the gate.
>
> -Matt Treinish
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] Process for program without lead

2014-11-17 Thread Hayes, Graham
On 17/11/14 16:46, Anita Kuno wrote:
> On 11/17/2014 11:35 AM, Hayes, Graham wrote:
>> On 17/11/14 16:18, Anita Kuno wrote:
>>> In the last two elections there was a program that was in the last hours
>>> of the nomination period before someone stepped up to lead. Currently
>>> there is no process for how to address leadership for a program should
>>> the nomination period expire without someone stepping forward. I would
>>> like to discuss this with the goal of having a process should this
>>> situation arise.
>>>
>>> By way of kicking things off, I would like to propose the following process:
>>> Should the nomination period expire and no PTL candidate has stepped
>>> forward for the program in question, the program will be identified to
>>> the TC by the election officials.
>>> The TC can appoint a leadership candidate by mutual agreement of the
>>> TC and the candidate in question.
>>> The appointed candidate has all the same responsibilities and
>>> obligations as a self-nominated, elected PTL.
>>>
>>> I welcome ideas and discussion on the above situation and proposed solution.
>>>
>>> Thank you,
>>> Anita.
>>>
>>>
>>
>> Would a by-election be an option - with the TC appointment being a last
>> resort? I personally think having as many options for a vote as possible is a
>> good idea. Are there technical / administrative barriers to a by-election?
> A by-election with whom?
> 
> If no one came forward during the nomination period, why would there be
> an expectation that someone would come forward during a second nomination
> period held immediately after the first?

Quite often people will come forward in a vacuum - people who thought
they were not right for the job, or felt that someone else would suit
the role better, can come forward in a by-election. (I only have
anecdotal evidence for this, but it is first hand, based on other
voluntary, self-organising groups I have been part of and run elections
for over the years.)

I would suggest that when nominations close with no candidates, they re-open
immediately for one week, at which point, if there are still no candidates, it
goes to the TC.

If the TC / Election Officials want to do extra outreach / promotion in
that week, that might avoid the need for appointment.

>>
>> Also, something to consider - would the TC nomination be required to be
>> an ATC in that Program? I assume that the TC would try to find
>> someone within the program, but if a program did not have anyone willing
>> to be the PTL, should outside candidates be considered?
> That is a good question. I have no proposal here.
> 
>>
>> I am not advocating yes or no on the second point, just putting it out
>> for discussion.
>>
>> Graham
> 
> Thanks for your thoughts Graham,
> Anita.
>>
>>
>>
> 
> 
> 




Re: [openstack-dev] [horizon] Changing "Host marked for maintenance" BP target milestone

2014-11-17 Thread David Lyle
The first review was actually the implementation for a separate blueprint.
https://blueprints.launchpad.net/horizon/+spec/evacuate-host

The content for this blueprint should follow the horizon blueprint process
for Kilo. See: https://blueprints.launchpad.net/horizon/+spec/template

Once the blueprint contains the appropriate information, I'd be happy to
consider it for Kilo.

David

On Mon, Nov 17, 2014 at 8:46 AM, Fic, Bartosz  wrote:

>  Hi guys,
>
> I've started working on this BP:
> https://blueprints.launchpad.net/horizon/+spec/mark-host-down-for-maintenance
>
> One of the reviews from this BP has already been merged (in Juno).
>
> Another one has to be finalized.
>
>
>
> So I have a question: is it possible to change the milestone target for this
> BP feature to the Kilo release?
>
>
>
> - Bart
>
>
>


[openstack-dev] [FUEL]Re-thinking Fuel Client

2014-11-17 Thread Roman Prykhodchenko
Hi folks!

I’ve had several internal discussions with Łukasz Oleś and Igor Kalnitsky, and
we decided that the existing Fuel Client has to be redesigned.
The current implementation of the client does not seem to be compliant with
most of the use cases people have in production, and it cannot be used as a
library wrapper for FUEL’s API.

We’ve come up with a draft of our plan for redesigning the Fuel Client, which
you can see here: https://etherpad.openstack.org/p/fuelclient-redesign

Everyone is welcome to add their notes and suggestions based on their needs
and use cases.

The next step is to create a detailed spec and put it up for everyone’s review.



- romcheg



Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-17 Thread Wood, Matthew David (HP Cloud - Horizon)
On 11/17/14, 10:00 AM, "Jason Rist"  wrote:


>On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote:
>> Le 17/11/2014 14:19, Matthias Runge a écrit :
>> 
>>> There is already horizon on pypi[1]
>>>
>>> IMHO this will lead only to more confusion.
>>>
>>> Matthias
>>>
>>>
>>> [1] https://pypi.python.org/pypi/horizon/2012.2
>> 
>> Well the current "horizon" on Pypi is "The OpenStack Dashboard" +
>> horizon(_lib) included
>> 
>> If the future "horizon" on pypi is "openstack_dashboard" alone, it would
>> still pull "horizon_lib" as a dependency, so it would not break
>> existing installations.
>> 
>> So indeed the "horizon" package itself on PyPI would not have
>> horizon(_lib) in it anymore, but "pip install horizon" would pull
>> everything due to the dependency horizon will have on horizon_lib.
>> 
>> I find this the least confusing option, and the "horizon" package on PyPI
>> would still be seen as "The OpenStack Dashboard" like it is now. We
>> would only add a "horizon_lib" package on PyPI.
>> Therefore existing third-party "requirements.txt" files would not break,
>> because they would pull horizon_lib with horizon, and they would still
>> import the proper module. Backwards compatibility (requirements
>> and modules) is therefore preserved.
>> 
>> 
>
>+1 to this solution

+1 from me as well.




Re: [openstack-dev] [Neutron] Stale patches

2014-11-17 Thread Damon Wang
good job :-)

2014-11-14 18:34 GMT+08:00 Miguel Ángel Ajo :

> Thanks for cleaning up the house!,
>
> Best regards,
>
> Miguel Ángel Ajo
>
> On Friday, 14 de November de 2014 at 00:46, Salvatore Orlando wrote:
>
> There are a lot of neutron patches which, for different reasons, have not
> been updated in a while.
> In order to ensure reviewers focus on active patches, I have set a few
> patches (about 75) as 'abandoned'.
>
> No patch with an update in the past month, either patchset or review, has
> been abandoned. Moreover, only a part of the patches not updated for over a
> month have been abandoned. I took extra care in identifying which ones
> could safely be abandoned, and which ones were instead still valuable;
> nevertheless, if you find out I abandoned a change you're actively working
> on, please restore it.
>
> If you are the owner of one of these patches, you can use the 'restore
> change' button in gerrit to resurrect the change. If you're not the owner
> and wish to resume work on these patches, either contact any member of the
> neutron-core team in IRC or push a new patch.
>
> Salvatore
>
>
>
>
>


[openstack-dev] [Keystone] Weekly meeting to resume tomorrow Nov. 18 at 1800 UTC

2014-11-17 Thread Morgan Fainberg
This is just a friendly reminder that the Keystone weekly meeting will resume 
this week at the normal time. I hope everyone has had a good summit (and 
potentially break post summit). Welcome back and see everyone tomorrow!

Cheers,
Morgan Fainberg


Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-17 Thread Jason Rist
On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote:
> Le 17/11/2014 14:19, Matthias Runge a écrit :
> 
>> There is already horizon on pypi[1]
>>
>> IMHO this will lead only to more confusion.
>>
>> Matthias
>>
>>
>> [1] https://pypi.python.org/pypi/horizon/2012.2
> 
> Well the current "horizon" on Pypi is "The OpenStack Dashboard" +
> horizon(_lib) included
> 
> If the future "horizon" on pypi is "openstack_dashboard" alone, it would
> still pull "horizon_lib" as a dependency, so it would not break
> existing installations.
> 
> So indeed the "horizon" package itself on PyPI would not have
> horizon(_lib) in it anymore, but "pip install horizon" would pull
> everything due to the dependency horizon will have on horizon_lib.
> 
> I find this the least confusing option, and the "horizon" package on PyPI
> would still be seen as "The OpenStack Dashboard" like it is now. We
> would only add a "horizon_lib" package on PyPI.
> Therefore existing third-party "requirements.txt" files would not break,
> because they would pull horizon_lib with horizon, and they would still
> import the proper module. Backwards compatibility (requirements
> and modules) is therefore preserved.
> 
> 

+1 to this solution
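
For readers following along, the dependency chain being proposed could be sketched roughly like this. These package fragments are purely illustrative (the actual split, file layout, and the "horizon_lib" name were still under discussion at the time):

```ini
# Illustrative sketch only -- not the actual Horizon packaging.

# horizon_lib/setup.cfg -- the reusable framework, newly published to PyPI
[metadata]
name = horizon_lib

# horizon/setup.cfg -- the OpenStack Dashboard, keeping the existing PyPI
# name so "pip install horizon" keeps working and pulls in horizon_lib
[metadata]
name = horizon

[options]
install_requires =
    horizon_lib
```

The key point is the `install_requires` line: because "horizon" depends on "horizon_lib", existing third-party requirements files that only list "horizon" keep resolving everything they need.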

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen



Re: [openstack-dev] Process for program without lead

2014-11-17 Thread Andreas Jaeger
On 11/17/2014 11:45 AM, Anita Kuno wrote:
> On 11/17/2014 11:35 AM, Hayes, Graham wrote:
>> On 17/11/14 16:18, Anita Kuno wrote:
>>> In the last two elections there was a program that was in the last hours
>>> of the nomination period before someone stepped up to lead. Currently
>>> there is no process for how to address leadership for a program should
>>> the nomination period expire without someone stepping forward. I would
>>> like to discuss this with the goal of having a process should this
>>> situation arise.
>>>
>>> By way of kicking things off, I would like to propose the following process:
>>> Should the nomination period expire and no PTL candidate has stepped
>>> forward for the program in question, the program will be identified to
>>> the TC by the election officials.
>>> The TC can appoint a leadership candidate by mutual agreement of the
>>> TC and the candidate in question.
>>> The appointed candidate has all the same responsibilities and
>>> obligations as a self-nominated, elected PTL.
>>>
>>> I welcome ideas and discussion on the above situation and proposed solution.
>>>
>>> Thank you,
>>> Anita.
>>>
>>>
>>
>> Would a by-election be an option - with the TC appointment being a last
>> resort? I personally think having as many options for a vote as possible is a
>> good idea. Are there technical / administrative barriers to a by-election?
> A by-election with whom?
> 
> If no one came forward during the nomination period, why would there be
> an expectation that someone would come forward during a second nomination
> period held immediately after the first?
>>
>> Also, something to consider - would the TC nomination be required to be
>> an ATC in that Program? I assume that the TC would try to find
>> someone within the program, but if a program did not have anyone willing
>> to be the PTL, should outside candidates be considered?
> That is a good question. I have no proposal here.

Let's not put any restrictions on who can be PTL in that case. This is
an exceptional situation that the TC has to take care of. Choosing
somebody from the program itself sounds like the first step, but I
suggest letting the TC discuss the exact way forward in the case of
"PTL-orphaned programs".

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



[openstack-dev] [mistral] Team meeting minutes/log - 11/17/2014

2014-11-17 Thread Renat Akhmerov
Thanks for joining us today,

Here are the links to the meeting minutes and full log:
Minutes - 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-11-17-16.00.html
 

Full log - 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-11-17-16.00.log.html
 


The next meeting will be next Monday Nov 24 at 16.00 UTC.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] URLs

2014-11-17 Thread John Dickinson
Adam,

I'm not sure why you've marked Swift URLs as having their own scheme. It's true 
that Swift doesn't have the concept of "admin" URLs, but in general if Swift 
were to assume some URL path prefix, I'm not sure why it wouldn't work (for 
some definition of work).

Another issue is the extra complexity of a broker layer in front of all the
OpenStack components: i.e., instead of clients accessing Swift directly and the
operator scaling that, the new scheme would require the operator to manage and
scale both the broker layer and the Swift layer.

For the record, Swift would need to be updated since it assumes it's the only 
service running on the domain at that port (Swift does a lot of path parsing).
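
For context, the single-port, path-prefix scheme under discussion amounts to a front end along these lines. This is a hedged sketch only: the hostname, path prefixes, and backend ports are illustrative assumptions, not the contents of the wiki page being referenced:

```apache
# Illustrative sketch -- service paths and backend ports are assumptions.
<VirtualHost *:443>
    ServerName cloud.example.com
    SSLEngine on

    # Each service is mounted under a path prefix on one host/port,
    # instead of listening on its own dedicated port.
    ProxyPass /identity  http://127.0.0.1:5000/
    ProxyPass /compute   http://127.0.0.1:8774/
    ProxyPass /object    http://127.0.0.1:8080/
</VirtualHost>
```

Swift's path-parsing assumption mentioned above is exactly what such a prefix would collide with: requests would arrive as `/object/v1/AUTH_acct/...` rather than `/v1/AUTH_acct/...`.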

--John






> On Nov 11, 2014, at 2:35 PM, Adam Young  wrote:
> 
> Recent recurrence of the "Why is everything on its own port" question
> triggered my desire to take this pattern and put it to rest.
> 
> My suggestion, from a while ago, was to have a naming scheme that deconflicts 
> putting all of the services onto a single server, on port 443.
> 
> I've removed a lot of the cruft, but not added in entries for all the new 
> *aaS services.
> 
> 
> https://wiki.openstack.org/wiki/URLs
> 
> Please add in anything that should be part of OpenStack.  Let's make this a 
> reality, and remove the specific ports.
> 
> If you are worried about debugging, look into rpdb.  It is a valuable tool 
> for debugging a mod_wsgi based application.
> 





[openstack-dev] [all] Scale out bug-triage by making it easier for people to contribute

2014-11-17 Thread Flavio Percoco

Greetings,

Regardless of how big or small the bug backlog is for each project, I
believe this is a common, annoying and difficult problem. At the oslo
meeting today, we're talking about how to address our bug triage
process and I proposed something that I've seen done in other
communities (rust-language [0]) that I consider useful and a good
option for OpenStack too.

The process consists of a bot that sends an email to every *volunteer*
with 10 bugs to review/triage for the week. Each volunteer follows the
triage standards, applies tags and provides information on whether the
bug is still valid or not. The volunteer doesn't have to fix the bug,
just triage it.

In openstack, we could have a job that does this and then have people
from each team volunteer to help with triage. The benefits I see are:

* Interested folks don't have to go through the list and filter the
bugs they want to triage. The bot should be smart enough to pick the
oldest, most critical, etc.

* It's a totally opt-in process and volunteers can obviously ignore
emails if they don't have time that week.

* It helps scaling out the triage process without poking people around
and without having to do a "call for volunteers" every meeting/cycle/etc

The above doesn't solve the problem completely, but just like reviews,
it'd be an optional, completely opt-in process that people can sign up
for.
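
The bot described above could be sketched roughly like this. Everything here is an assumption for illustration (field names, the priority ranking, the selection policy); a real implementation would pull bug data from the Launchpad API and send actual mail:

```python
from datetime import datetime

# Assumed priority ranking so the "oldest, most critical" bugs go out first.
PRIORITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "undecided": 4}


def assign_triage_batches(bugs, volunteers, batch_size=10):
    """Round-robin the most urgent untriaged bugs across volunteers.

    bugs: list of dicts with 'id', 'priority', and 'created' (datetime).
    Returns a dict mapping each volunteer to their list of bugs.
    """
    ordered = sorted(
        bugs,
        key=lambda b: (PRIORITY_RANK.get(b["priority"], 99), b["created"]),
    )
    batches = {v: [] for v in volunteers}
    for i, bug in enumerate(ordered[: batch_size * len(volunteers)]):
        batches[volunteers[i % len(volunteers)]].append(bug)
    return batches


def format_email(volunteer, batch):
    """Build the weekly opt-in triage email body for one volunteer."""
    lines = ["Hi %s, your triage batch for this week:" % volunteer]
    lines += ["  * bug #%d (%s)" % (b["id"], b["priority"]) for b in batch]
    lines.append("Triage only -- no need to fix anything. Ignore if busy.")
    return "\n".join(lines)


if __name__ == "__main__":
    bugs = [
        {"id": 1, "priority": "low", "created": datetime(2014, 1, 1)},
        {"id": 2, "priority": "critical", "created": datetime(2014, 6, 1)},
        {"id": 3, "priority": "critical", "created": datetime(2014, 3, 1)},
    ]
    batches = assign_triage_batches(bugs, ["alice", "bob"], batch_size=2)
    print(batches["alice"][0]["id"])  # prints 3: the oldest critical bug
```

Because assignment is a pure function of the bug list and the volunteer list, ignoring a week's email costs nothing: those bugs simply stay in the pool for the next run.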

Thoughts?
Flavio

[0] https://mail.mozilla.org/pipermail/rust-dev/2013-April/003668.html

--
@flaper87
Flavio Percoco




Re: [openstack-dev] Process for program without lead

2014-11-17 Thread Anita Kuno
On 11/17/2014 11:35 AM, Hayes, Graham wrote:
> On 17/11/14 16:18, Anita Kuno wrote:
>> In the last two elections there was a program that was in the last hours
>> of the nomination period before someone stepped up to lead. Currently
>> there is no process for how to address leadership for a program should
>> the nomination period expire without someone stepping forward. I would
>> like to discuss this with the goal of having a process should this
>> situation arise.
>>
>> By way of kicking things off, I would like to propose the following process:
>> Should the nomination period expire and no PTL candidate has stepped
>> forward for the program in question, the program will be identified to
>> the TC by the election officials.
>> The TC can appoint a leadership candidate by mutual agreement of the
>> TC and the candidate in question.
>> The appointed candidate has all the same responsibilities and
>> obligations as a self-nominated, elected PTL.
>>
>> I welcome ideas and discussion on the above situation and proposed solution.
>>
>> Thank you,
>> Anita.
>>
>>
> 
> Would a by-election be an option - with the TC appointment being a last
> resort? I personally think having as many options for a vote as possible is a
> good idea. Are there technical / administrative barriers to a by-election?
A by-election with whom?

If no one came forward during the nomination period, why would there be
an expectation that someone would come forward during a second nomination
period held immediately after the first?
> 
> Also, something to consider - would the TC nomination be required to be
> an ATC in that Program? I assume that the TC would try to find
> someone within the program, but if a program did not have anyone willing
> to be the PTL, should outside candidates be considered?
That is a good question. I have no proposal here.

> 
> I am not advocating yes or no on the second point, just putting it out
> for discussion.
> 
> Graham

Thanks for your thoughts Graham,
Anita.
> 
> 
> 




Re: [openstack-dev] URLs

2014-11-17 Thread Jay Pipes

++ from me.

On 11/11/2014 05:35 PM, Adam Young wrote:

Recent recurrence of the "Why is everything on its own port" question
triggered my desire to take this pattern and put it to rest.

My suggestion, from a while ago, was to have a naming scheme that
deconflicts putting all of the services onto a single server, on port 443.

I've removed a lot of the cruft, but not added in entries for all the
new *aaS services.


https://wiki.openstack.org/wiki/URLs

Please add in anything that should be part of OpenStack.  Let's make
this a reality, and remove the specific ports.

If you are worried about debugging, look into rpdb.  It is a valuable
tool for debugging a mod_wsgi based application.





Re: [openstack-dev] Process for program without lead

2014-11-17 Thread Hayes, Graham
On 17/11/14 16:18, Anita Kuno wrote:
> In the last two elections there was a program that was in the last hours
> of the nomination period before someone stepped up to lead. Currently
> there is no process for how to address leadership for a program should
> the nomination period expire without someone stepping forward. I would
> like to discuss this with the goal of having a process should this
> situation arise.
> 
> By way of kicking things off, I would like to propose the following process:
> Should the nomination period expire and no PTL candidate has stepped
> forward for the program in question, the program will be identified to
> the TC by the election officials.
> The TC can appoint a leadership candidate by mutual agreement of the
> TC and the candidate in question.
> The appointed candidate has all the same responsibilities and
> obligations as a self-nominated, elected PTL.
> 
> I welcome ideas and discussion on the above situation and proposed solution.
> 
> Thank you,
> Anita.
> 
> 

Would a by-election be an option - with the TC appointment being a last
resort? I personally think having as many options for a vote as possible is a
good idea. Are there technical / administrative barriers to a by-election?

Also, something to consider - would the TC nomination be required to be
an ATC in that Program? I assume that the TC would try to find
someone within the program, but if a program did not have anyone willing
to be the PTL, should outside candidates be considered?

I am not advocating yes or no on the second point, just putting it out
for discussion.

Graham




Re: [openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan

2014-11-17 Thread Sylvain Bauza


Le 17/11/2014 16:58, Jay Pipes a écrit :

Good morning Stackers,

At the summit in Paris, we put together a plan for work on the Nova 
resource tracker and scheduler in the Kilo timeframe. A large number 
of contributors across many companies are all working on this 
particular part of the Nova code base, so it's important that we keep 
coordinated and updated on the overall efforts. I'll work together 
with Don Dugger this cycle to make sure we make steady, measured 
progress. If you are involved in this effort, please do be sure to 
attend the weekly scheduler IRC meetings [1] (Tuesdays @ 1500UTC on 
#openstack-meeting).


== Decisions from Summit ==

The following decisions were made at the summit session [2]:

1) The patch series for virt CPU pinning [3] and huge page support [4] 
shall not be approved until nova/virt/hardware.py is modified to use 
nova.objects as its serialization/domain object model. Jay is 
responsible for the conversion patches, and this patch series should 
be fully proposed by end of this week.


2) We agreed on the concepts introduced by the resource-objects 
blueprint [5], with a caveat that child object versioning be discussed 
in greater depth with Jay, Paul, and Dan Smith.


3) We agreed on all concepts and implementation from the 2 
isolate-scheduler-db blueprints: aggregates [6] and instance groups [7]




Well, [7] is no longer needed, as a previous merge fixed the problem by
moving the instance group setup to the conductor layer. I was on PTO while
this spec was created, so I had no time to say it was not necessary; my bad.


4) We agreed on implementation and need for separating compute node 
object from the service object [8]


5) We agreed on concept and implementation for converting the request 
spec from a dict to a versioned object [9] as well as converting 
select_destinations() to use said object [10]


[6] We agreed on the need for returning a proper object from the virt 
driver's get_available_resource() method [11] but AFAICR, we did not 
say that this object needed to use nova/objects because this is an 
interface internal to the virt layer and resource tracker, and the 
ComputeNode nova.object will handle the setting of resource-related 
fields properly.


[7] We agreed the unit tests for the resource tracker were, well, 
crappy, and are a real source of pain in making changes to the 
resource tracker itself. So, we resolved to fix them up in early Kilo-1


[8] We are not interested in adding any additional functionality to 
the scheduler outside already-agreed NUMA blueprint functionality in 
Kilo. The goal is to get the scheduler fully independent of the Nova 
database, and communicating with nova-conductor and nova-compute via 
fully versioned interfaces by the end of Kilo, so that a split of the 
scheduler can occur at the start of the L release cycle.


== Action Items ==

1) Jay to propose patches that objectify the domain objects in 
nova/virt/hardware.py by EOB November 21


2) Paul Murray, Jay, and Alexis Lee to work on refactoring of the unit 
tests around the resource tracker in early Kilo-1


3) Dan Smith, Paul Murray, and Jay to discuss the issues with child 
object versioning


4) Ed Leafe to work on separating the compute node from the service 
object in Kilo-1




That's actually managed by me; you can find both the spec and the
implementation in the patch series, waiting for reviews [13].



5) Sylvain Bauza to work on the request spec and select_destinations() 
to use request spec blueprints to be completed for Kilo-2


6) Paul Murray, Sylvain Bauza to work on the isolate-scheduler-db 
aggregate and instance groups blueprints to be completed by Kilo-3



As said above, there is only one spec to validate, i.e. [6].


7) Jay to complete the resource-objects blueprint work by Kilo-2

8) Dan Berrange, Sahid, and Nikola Dipanov to work on completing the 
CPU pinning, huge page support, and get_available_resources() 
blueprints in Kilo-1


== Open Items ==

1) We need to figure out who is working on the objectification of the 
PCI tracker stuff (Yunjong maybe or Robert Li?)


2) The child object version thing needs to be thoroughly vetted. 
Basically, the nova.objects.compute_node.ComputeNode object will have 
a series of sub objects for resources (NUMA, PCI, other stuff) and 
Paul Murray has some questions on how to handle the child object 
versioning properly.


3) Need to coordinate with Steve Gordon, Adrian Hoban, and Ian Wells  
on NUMA hardware in an external testing lab that the NFV subteam is 
working on getting up and running [12]. We need functional tests 
(Tempest+Nova) written for all NUMA-related functionality in the RT 
and scheduler by end of Kilo-3, but have yet to divvy up the work to 
make this a reality.


== Conclusion ==

Please everyone read the above thoroughly and respond if I have missed 
anything or left anyone out of the conversation. Really appreciate 
everyone coming together to get this work done over 

Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit

2014-11-17 Thread Ilya Pekelny
 Flavio Percoco wrote:

> Still, I'd like us to learn from
> previous experiences and have a better plan for this driver (and
> future cases like this one).


Hi, all!

As one of the recently joined ZeroMQ maintainers, I have a growing plan for
ZeroMQ refactoring and development. At the most abstract level, our plan is to
remove the single broker and implement a peer-to-peer model in the messaging
driver. A blueprint with this goal exists:
https://blueprints.launchpad.net/oslo.messaging/+spec/reduce-central-broker.
I maintain a patch and a spec which I inherited from Aleksey Kornienko.
For now this blueprint is the first step in the planning process. I believe
we can split this big work into a set of specs and, if needed, into several
related blueprints. With these specs and BPs our plan should become
clear. I wrote a mail to the dev mailing list with a short overview of the
coming work.

Please feel free to discuss all of this with me and correct me along the way.
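
Since "remove the central broker" can sound abstract, here is a toy illustration of the core idea: each sender resolves a topic to peer addresses locally and connects directly, instead of routing everything through one broker process. All names, methods, and address formats below are illustrative assumptions, not the actual oslo.messaging ZeroMQ driver:

```python
import itertools


class LocalMatchmaker:
    """Toy topic -> peer-address registry for a brokerless design.

    A sender consults a registry like this to find the hosts serving a
    topic, then opens a direct (e.g. ZeroMQ) connection to one of them,
    removing the central broker as a bottleneck and single point of failure.
    """

    def __init__(self):
        self._topics = {}   # topic -> list of peer addresses
        self._cycles = {}   # topic -> round-robin iterator over peers

    def register(self, topic, address):
        """Record that a server at `address` consumes `topic`."""
        self._topics.setdefault(topic, []).append(address)
        # Rebuild the round-robin cycle whenever membership changes.
        self._cycles[topic] = itertools.cycle(self._topics[topic])

    def addresses(self, topic):
        """Fanout/notification case: every peer registered for the topic."""
        return list(self._topics.get(topic, []))

    def next_address(self, topic):
        """RPC-call case: round-robin across the topic's servers."""
        if topic not in self._cycles:
            raise LookupError("no peers registered for topic %r" % topic)
        return next(self._cycles[topic])


if __name__ == "__main__":
    mm = LocalMatchmaker()
    mm.register("scheduler", "tcp://10.0.0.1:9501")
    mm.register("scheduler", "tcp://10.0.0.2:9501")
    print(mm.next_address("scheduler"))  # prints tcp://10.0.0.1:9501
```

The hard parts the blueprint has to solve are exactly what this toy omits: how peers discover and update the registry, and what happens when a registered peer goes away.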

On Mon, Nov 17, 2014 at 4:45 PM, Doug Hellmann 
wrote:

> Thanks, Josh, I’ll subscribe to the issue to keep up to date.
>
> On Nov 16, 2014, at 6:58 PM, Joshua Harlow  wrote:
>
> > I started the following issue on kombu's github page (to see if there is
> any interest on there side to such an effort):
> >
> > https://github.com/celery/kombu/issues/430
> >
> > It's about seeing if the kombu folks would be ok with a 'rpc' subfolder
> in their repository that can start to contain 'rpc'-like functionality that
> now exists in oslo.messaging (I don't see why they would be against this
> kind of idea, since it seems to make sense IMHO).
> >
> > Let's see what happens,
> >
> > -Josh
> >
> > Doug Hellmann wrote:
> >>
> >> On Nov 13, 2014, at 7:02 PM, Joshua Harlow  >> > wrote:
> >>
> >>> Don't forget my executor, which isn't dependent on a larger set of
> >>> changes for asyncio/trollius...
> >>>
> >>> https://review.openstack.org/#/c/70914/
> >>>
> >>> The above will/should just 'work', although I'm unsure what thread
> >>> count should be by default (the number of green threads that is set at
> >>> like 200 shouldn't be the same number used in that executor which uses
> >>> real python/system threads). The neat thing about that executor is
> >>> that it can also replace the eventlet one, since when eventlet is
> >>> monkey patching the threading module (which it should be) then it
> >>> should behave just as the existing eventlet one; which IMHO is pretty
> >>> cool (and could be one way to completely remove the eventlet usage in
> >>> oslo.messaging).
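
A minimal sketch of the idea above (a toy illustration, not the executor
under review): code written against the stdlib threading/futures interface
keeps working under eventlet, because monkey patching swaps green-thread
primitives in beneath the same API.

```python
# Toy illustration only: stands in for the thread-based executor discussed
# above, not the actual oslo.messaging code under review.
from concurrent.futures import ThreadPoolExecutor

# In an eventlet deployment one would first run:
#   import eventlet
#   eventlet.monkey_patch()
# after which the pool below transparently uses green threads instead of
# real OS threads, with no change to this code.

def handle(msg):
    # Pretend message dispatch.
    return 'handled:%s' % msg

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, ['a', 'b']))
print(results)
```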
> >>
> >> Good point, thanks for reminding me.
> >>
> >>>
> >>> As for the kombu discussions, maybe its time to jump on the #celery
> >>> channel (where the kombu folks hang out) and start talking to them
> >>> about how we can work better together to move some of our features
> >>> into kombu (and also deprecate/remove some of the oslo.messaging
> >>> features that now are in kombu). I believe
> >>> https://launchpad.net/~asksol is the main guy there (and also the main
> >>> maintainer of celery/kombu?). It'd be nice to have these
> >>> cross-community talks and at least come up with some kind of game
> >>> plan; hopefully one that benefits both communities…
> >>
> >> I would like that, but won’t have time to do it myself this cycle. Maybe
> >> we can find another volunteer from the team?
> >>
> >> Doug
> >>
> >>>
> >>> -Josh
> >>>
> >>> 
> >>>
> 
> >>> *From:* Doug Hellmann
> >>> *To:* OpenStack Development Mailing List (not for usage questions)
> >>> *Sent:* Wednesday, November 12, 2014 12:22 PM
> >>> *Subject:* [openstack-dev] [oslo] oslo.messaging outcome from the
> summit
> >>>
> >>> The oslo.messaging session at the summit [1] resulted in some plans to
> >>> evolve how oslo.messaging works, but probably not during this cycle.
> >>>
> >>> First, we talked about what to do about the various drivers like
> >>> ZeroMQ and the new AMQP 1.0 driver. We decided that rather than moving
> >>> those out of the main tree and packaging them separately, we would
> >>> keep them all in the main repository to encourage the driver authors
> >>> to help out with the core library (oslo.messaging is a critical
> >>> component of OpenStack, and we’ve lost several of our core reviewers
> >>> for the library to other priorities recently).
> >>>
> >>> There is a new set of contributors interested in maintaining the
> >>> ZeroMQ driver, and they are going to work together to review each
> >>> other’s patches. We will re-evaluate keeping ZeroMQ at the end of
> >>> Kilo, based on how things go this cycle.
> >>>
> >>> We also talked about the fact that the new version of Kombu includes
> >>> some of the features we have implemented in our own driver, like
> >>> heartbeats and connection management. Kombu does not inc

Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-17 Thread Yves-Gwenaël Bourhis
On 17/11/2014 14:43, Yves-Gwenaël Bourhis wrote:
> Well the current "horizon" on Pypi is "The OpenStack Dashboard" +
> horizon(_lib) included
> 
> If the future "horizon" on pypi is "openstack_dashboard" alone, it would
> still pull "horizon_lib" as a dependency, so it would not brake the
> existing.
> 
> So indeed the "horizon" package itself in Pypi would not have
> horizon(_lib) in it anymore, but the "pip install horizon" would pull
> everything due to the dependency horizon will have with horizon_lib.
> 
> I find this the least confusing issue and the "horizon" package on Pypi
> would still be seen as "The OpenStack Dashboard" like it is now. We
> would only add an "horizon_lib" package on Pypi.
> Therefore existing third-party "requirements.txt" would not brake
> because they would pull horizon_lib with horizon, and they would still
> import the proper module. All backwards compatibility (requirements
> and module) is therefore preserved.
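
The dependency chain being described can be sketched as follows (package
names per the proposal above; hypothetical until the split actually lands):

```python
# Illustrative only: models the proposed packaging, where the "horizon"
# package on PyPI (the dashboard) declares horizon_lib as a dependency.
DASHBOARD_SETUP = {
    'name': 'horizon',                     # still "The OpenStack Dashboard"
    'install_requires': ['horizon_lib'],   # pip pulls this automatically
}

# An existing third-party requirements.txt listing just "horizon" keeps
# working: installing it also installs horizon_lib, so both the package
# and the importable module remain available.
requirements = ['horizon']
installed = set(requirements) | set(DASHBOARD_SETUP['install_requires'])
print(sorted(installed))
```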

s/brake/break/g
Sorry for the typo.

-- 
Yves-Gwenaël Bourhis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Process for program without lead

2014-11-17 Thread Anita Kuno
In the last two elections there was a program that went until the last hours
of the nomination period before someone stepped up to lead. Currently
there is no process for addressing leadership for a program should
the nomination period expire without someone stepping forward. I would
like to discuss this with the goal of having a process in place should this
situation arise.

By way of kicking things off, I would like to propose the following process:
Should the nomination period expire and no PTL candidate has stepped
forward for the program in question, the program will be identified to
the TC by the election officials.
The TC can appoint a leadership candidate by mutual agreement of the
TC and the candidate in question.
The appointed candidate has all the same responsibilities and
obligations as a self-nominated, elected PTL.

I welcome ideas and discussion on the above situation and proposed solution.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A mascot for Ironic

2014-11-17 Thread Jim Rollenhagen
On Sun, Nov 16, 2014 at 01:14:13PM +, Lucas Alvares Gomes wrote:
> Hi Ironickers,
> 
> I was thinking this weekend: all the cool projects have a mascot,
> so I thought that we could have one for Ironic too.
> 
> The idea about what the mascot should be was easy, because the RAX guys
> put "bear metal" in their presentation[1] and that totally rocks! So I
> drew a bear. It also needed an instrument; at first I thought about a
> guitar, but drums are actually my favorite instrument so I drew a pair
> of drumsticks instead.
> 
> The drawing thing wasn't that hard; the problem was to digitalize it.
> So I scanned the thing and went to YouTube to watch some tutorials
> about GIMP and Inkscape to learn how to vectorize it. Magic, it
> worked!
> 
> Attached to the email there's the original drawing, the vectorized
> version without colors, and the final version of it (with colors).
> 
> Of course, I know some people do have better skills than I do, so I
> also attached the Inkscape file of the final version in case people
> want to tweak it :)
> 
> So, what do you guys think about making this little drummer bear the
> mascot of the Ironic project?
> 
> Ahh he also needs a name. So please send some suggestions and we can
> vote on the best name for him.
> 
> [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90
> 
> Lucas

+1000, this is awesome.

A cool variation would be to put a drum set behind the bear, made
out of servers. :)

// jim


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan

2014-11-17 Thread Daniel P. Berrange
On Mon, Nov 17, 2014 at 10:58:52AM -0500, Jay Pipes wrote:
> Good morning Stackers,
> 
> At the summit in Paris, we put together a plan for work on the Nova resource
> tracker and scheduler in the Kilo timeframe. A large number of contributors
> across many companies are all working on this particular part of the Nova
> code base, so it's important that we keep coordinated and updated on the
> overall efforts. I'll work together with Don Dugger this cycle to make sure
> we make steady, measured progress. If you are involved in this effort,
> please do be sure to attend the weekly scheduler IRC meetings [1] (Tuesdays
> @ 1500UTC on #openstack-meeting).
> 
> == Decisions from Summit ==
> 
> The following decisions were made at the summit session [2]:
> 
> 1) The patch series for virt CPU pinning [3] and huge page support [4] shall
> not be approved until nova/virt/hardware.py is modified to use nova.objects
> as its serialization/domain object model. Jay is responsible for the
> conversion patches, and this patch series should be fully proposed by end of
> this week.
> 
> 2) We agreed on the concepts introduced by the resource-objects blueprint
> [5], with a caveat that child object versioning be discussed in greater
> depth with Jay, Paul, and Dan Smith.
> 
> 3) We agreed on all concepts and implementation from the 2
> isolate-scheduler-db blueprints: aggregates [6] and instance groups [7]
> 
> 4) We agreed on implementation and need for separating compute node object
> from the service object [8]
> 
> 5) We agreed on concept and implementation for converting the request spec
> from a dict to a versioned object [9] as well as converting
> select_destinations() to use said object [10]
> 
> 6) We agreed on the need for returning a proper object from the virt
> driver's get_available_resource() method [11] but AFAICR, we did not say
> that this object needed to use nova/objects because this is an interface
> internal to the virt layer and resource tracker, and the ComputeNode
> nova.object will handle the setting of resource-related fields properly.

IIRC the consensus was that people didn't see the point in
get_available_resource using different objects from the Nova
objects used by the RT / scheduler. To that end
I wrote up a spec that describes an idea for just using a single
set of nova objects end-to-end.

  https://review.openstack.org/#/c/133728/

Presumably this would have to dovetail with your resource object
models spec.

  https://review.openstack.org/#/c/127609/

Perhaps we should consider your spec as the place where we define
what all the objects look like, and have my blueprint just focus
on the actual conversion of get_available_resource() method impls
in the virt drivers ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v2 or v3 for new api

2014-11-17 Thread Pasquale Porreca

Thank you very much Christopher

On 11/17/14 12:15, Christopher Yeoh wrote:
Yes, sorry documentation has been on our todo list for too long. Could 
I get you to submit a bug report about the lack of developer 
documentation for api plugins? It might hurry us up :-)


I reported it as a bug and subscribed you to it. 
https://bugs.launchpad.net/nova/+bug/1393455




In the meantime, off the top of my head, you'll need to create or 
modify the following files in a typical plugin:


setup.cfg - add an entry in at least the nova.api.v3.extensions section

etc/nova/policy.json - an entry for the permissions for your plugin, 
perhaps one per api method for maximum flexibility. Also will need a 
discoverable entry (lots of examples in this file)


nova/tests/unit/fake_policy.json (similar to policy.json)


I wish I had asked about this before; I had already found these files, but I 
confess it took quite a bit of time to work out that I had to modify them (I 
actually haven't modified fake_policy yet, but my tests are still not 
completed).

What about nova/nova.egg-info/entry_points.txt I mentioned earlier?



nova/api/openstack/compute/plugins/v3/ - please make 
the alias name something like os-scheduler-hints rather than OS-SCH-HNTS. 
No skimping on vowels. Probably the easiest way at this stage, without 
more doco, is to look for a plugin in that directory that does the 
sort of thing you want to do.


Following the path of other plugins, I created a module 
nova/api/openstack/compute/plugins/v3/node_uuid.py; the class is 
NodeUuid(extensions.V3APIExtensionBase), the alias is os-node-uuid, and 
the actual json parameter is node_uuid. I hope this is correct...
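
For reference, the shape of such a plugin can be sketched like this. The
stub base class below only mimics the structure of Nova's
V3APIExtensionBase so the sketch is self-contained and runnable; the real
class lives in nova.api.openstack.extensions, and the method set shown is
illustrative rather than the authoritative interface.

```python
# Self-contained sketch of a minimal v2.1 (v3) API plugin. The stub base
# class stands in for nova.api.openstack.extensions.V3APIExtensionBase;
# method names here are illustrative, not the authoritative interface.

class V3APIExtensionBase(object):
    """Stand-in for the real Nova base class."""

ALIAS = 'os-node-uuid'  # full words in the alias -- no skimping on vowels

class NodeUuid(V3APIExtensionBase):
    """Accepts a node_uuid parameter on server create (illustrative)."""
    name = 'NodeUuid'
    alias = ALIAS
    version = 1

    def get_resources(self):
        # No new top-level resource: this extension only adds a request
        # parameter to an existing action.
        return []

    def get_controller_extensions(self):
        return []

plugin = NodeUuid()
print(plugin.alias)
```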




nova/tests/unit/nova/api/openstack/compute/contrib/test_your_plugin.py 
- we have been combining the v2 and v2.1(v3) unittests to share as 
much as possible, so please do the same here for new tests, as the v3 
directory will eventually be removed. There are quite a few examples now 
in that directory of sharing unittests between v2.1 and v2, but with a 
new extension the customisation between the two should be pretty 
minimal (just a bit of inheritance to call the right controller)




Very good to know. I put my test in 
nova/tests/unit/api/openstack/plugins/v3 , but I was getting confused by 
the fact that only a few tests were in this folder while the tests in 
nova/tests/unit/api/openstack/compute/contrib/ covered both v2 and v2.1 
cases.
So I should move my test into the 
nova/tests/unit/api/openstack/compute/contrib/ folder, right?



nova/tests/unit/integrated/v3/test_your_plugin.py
nova/tests/unit/integrated/test_api_samples.py

Sorry, the api samples tests are not unified yet, so you'll need to 
create two. All of the v2 api sample tests are in one directory, 
whilst the v2.1 ones are separated into different files by plugin.


There's some rather old documentation on how to generate the api 
samples themselves (hint: directories aren't made automatically) here:


https://blueprints.launchpad.net/nova/+spec/nova-api-samples

Personally I wouldn't bother with any xml support if you do decide to 
support v2, as it's deprecated anyway.


After reading your answer I understood I have to work more on this part :)



Hope this helps. Feel free to add me as a reviewer for the api parts 
of your changesets.


It helps a lot! I will add you for sure as soon as I upload my 
code. For now the specification still has to be approved, so I think I 
have to wait before uploading it, is that correct?


This is the blueprint link anyway: 
https://blueprints.launchpad.net/nova/+spec/use-uuid-v1




Regards,

Chris


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan

2014-11-17 Thread Jay Pipes

Good morning Stackers,

At the summit in Paris, we put together a plan for work on the Nova 
resource tracker and scheduler in the Kilo timeframe. A large number of 
contributors across many companies are all working on this particular 
part of the Nova code base, so it's important that we keep coordinated 
and updated on the overall efforts. I'll work together with Don Dugger 
this cycle to make sure we make steady, measured progress. If you are 
involved in this effort, please do be sure to attend the weekly 
scheduler IRC meetings [1] (Tuesdays @ 1500UTC on #openstack-meeting).


== Decisions from Summit ==

The following decisions were made at the summit session [2]:

1) The patch series for virt CPU pinning [3] and huge page support [4] 
shall not be approved until nova/virt/hardware.py is modified to use 
nova.objects as its serialization/domain object model. Jay is 
responsible for the conversion patches, and this patch series should be 
fully proposed by end of this week.


2) We agreed on the concepts introduced by the resource-objects 
blueprint [5], with a caveat that child object versioning be discussed 
in greater depth with Jay, Paul, and Dan Smith.


3) We agreed on all concepts and implementation from the 2 
isolate-scheduler-db blueprints: aggregates [6] and instance groups [7]


4) We agreed on implementation and need for separating compute node 
object from the service object [8]


5) We agreed on concept and implementation for converting the request 
spec from a dict to a versioned object [9] as well as converting 
select_destinations() to use said object [10]


6) We agreed on the need for returning a proper object from the virt 
driver's get_available_resource() method [11] but AFAICR, we did not say 
that this object needed to use nova/objects because this is an interface 
internal to the virt layer and resource tracker, and the ComputeNode 
nova.object will handle the setting of resource-related fields properly.


7) We agreed the unit tests for the resource tracker were, well, 
crappy, and are a real source of pain in making changes to the resource 
tracker itself. So, we resolved to fix them up in early Kilo-1


8) We are not interested in adding any additional functionality to the 
scheduler outside already-agreed NUMA blueprint functionality in Kilo. 
The goal is to get the scheduler fully independent of the Nova database, 
and communicating with nova-conductor and nova-compute via fully 
versioned interfaces by the end of Kilo, so that a split of the 
scheduler can occur at the start of the L release cycle.
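
To make decision 5 concrete, here is a toy sketch of what a request spec
gains by moving from a bare dict to a versioned object. This is not Nova
code (nova.objects provides the real machinery), and the field names below
are hypothetical:

```python
# Toy sketch only -- not Nova code. A versioned object adds two things a
# bare dict lacks: a declared field set and a VERSION for RPC compatibility.

class RequestSpec(object):
    VERSION = '1.0'  # bumped when fields change, so RPC peers can negotiate
    fields = ('instance_uuid', 'num_instances', 'flavor_id')  # hypothetical

    def __init__(self, **kwargs):
        unknown = set(kwargs) - set(self.fields)
        if unknown:
            # A raw dict would silently accept a typo'd key; an object won't.
            raise ValueError('unknown fields: %s' % sorted(unknown))
        for name in self.fields:
            setattr(self, name, kwargs.get(name))

spec = RequestSpec(instance_uuid='abc', num_instances=2, flavor_id='m1.small')
print(spec.VERSION, spec.num_instances)
```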


== Action Items ==

1) Jay to propose patches that objectify the domain objects in 
nova/virt/hardware.py by EOB November 21


2) Paul Murray, Jay, and Alexis Lee to work on refactoring of the unit 
tests around the resource tracker in early Kilo-1


3) Dan Smith, Paul Murray, and Jay to discuss the issues with child 
object versioning


4) Ed Leafe to work on separating the compute node from the service 
object in Kilo-1


5) Sylvain Bauza to work on the request spec and select_destinations() 
to use request spec blueprints to be completed for Kilo-2


6) Paul Murray, Sylvain Bauza to work on the isolate-scheduler-db 
aggregate and instance groups blueprints to be completed by Kilo-3


7) Jay to complete the resource-objects blueprint work by Kilo-2

8) Dan Berrange, Sahid, and Nikola Dipanov to work on completing the CPU 
pinning, huge page support, and get_available_resources() blueprints in 
Kilo-1


== Open Items ==

1) We need to figure out who is working on the objectification of the 
PCI tracker stuff (Yunjong maybe or Robert Li?)


2) The child object version thing needs to be thoroughly vetted. 
Basically, the nova.objects.compute_node.ComputeNode object will have a 
series of sub objects for resources (NUMA, PCI, other stuff) and Paul 
Murray has some questions on how to handle the child object versioning 
properly.


3) Need to coordinate with Steve Gordon, Adrian Hoban, and Ian Wells  on 
NUMA hardware in an external testing lab that the NFV subteam is working 
on getting up and running [12]. We need functional tests (Tempest+Nova) 
written for all NUMA-related functionality in the RT and scheduler by 
end of Kilo-3, but have yet to divvy up the work to make this a reality.


== Conclusion ==

Please everyone read the above thoroughly and respond if I have missed 
anything or left anyone out of the conversation. Really appreciate 
everyone coming together to get this work done over the next 4-5 months.


Best,
-jay

[1] 
https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting

[2] https://etherpad.openstack.org/p/kilo-nova-scheduler-rt
[3] https://review.openstack.org/#/c/129606/
[4] https://review.openstack.org/#/c/129608/
[5] https://review.openstack.org/#/c/127609/
[6] https://review.openstack.org/#/c/89893/
[7] https://review.openstack.org/#/c/131553/
[8] https://review.openstack.org/#/c/126895/
[9] https://review.openstack.org/#/c/

  1   2   >