Re: [openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Miguel Ángel Ajo
+1 from me! good catch.

Miguel Ángel Ajo


On Tuesday, 25 November 2014 at 16:57, Kyle Mestery wrote:

> On Tue, Nov 25, 2014 at 9:28 AM, Sean Dague <s...@dague.net> wrote:
> > So as I was waiting for other tests to return, I started looking through
> > our existing test lists.
> >  
> > gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
> > no longer convinced that it does anything useful (and just burns test
> > nodes).
> >  
> > The entire output of the job is currently as follows:
> >  
> > 2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
> > tools/pretty_tox.sh
> > (?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
> > --concurrency=4
> > 2014-11-25 14:43:21.313 | {1}
> > tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
> > ... SKIPPED: Skipped until Bug: 1374175 is resolved.
> > 2014-11-25 14:47:36.271 | {0}
> > tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
> > [0.169736s] ... ok
> > 2014-11-25 14:47:36.271 | {0}
> > tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
> > [0.000508s] ... ok
> > 2014-11-25 14:47:36.291 | {0}
> > tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
> > [0.019679s] ... ok
> > 2014-11-25 14:47:36.313 | {0}
> > tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
> > [0.020597s] ... ok
> > 2014-11-25 14:47:36.564 | {0}
> > tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
> > [0.250788s] ... ok
> > 2014-11-25 14:47:36.580 | {0}
> > tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
> > [0.015410s] ... ok
> > 2014-11-25 14:47:44.113 |
> > 2014-11-25 14:47:44.113 | ==
> > 2014-11-25 14:47:44.113 | Totals
> > 2014-11-25 14:47:44.113 | ==
> > 2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
> > 2014-11-25 14:47:44.114 | - Passed: 6
> > 2014-11-25 14:47:44.114 | - Skipped: 1
> > 2014-11-25 14:47:44.115 | - Failed: 0
> > 2014-11-25 14:47:44.115 |
> > 2014-11-25 14:47:44.116 | ==
> > 2014-11-25 14:47:44.116 | Worker Balance
> > 2014-11-25 14:47:44.116 | ==
> > 2014-11-25 14:47:44.117 | - Worker 0 (6 tests) => 0:00:00.480677s
> > 2014-11-25 14:47:44.117 | - Worker 1 (1 tests) => 0:00:00.001455s
> >  
> > So we are running about 1s worth of work, no longer in parallel (as
> > there aren't enough classes to even do parallel runs).
> >  
> > Given the emergence of the heat functional job, and the fact that this
> > is really not testing anything any more, I'd like to propose we just
> > remove it entirely at this stage and get the test nodes back.
> >  
>  
> +1 from me.
>  
> > -Sean
> >  
> > --
> > Sean Dague
> > http://dague.net
> >  
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >  
>  
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for Tempest Core

2014-11-25 Thread Marc Koderer
+1

On 22.11.2014 at 15:51, Andrea Frittoli wrote:

> +1
> 
> On 21 Nov 2014 18:25, "Ken1 Ohmichi"  wrote:
> +1 :-)
> 
> Sent from my iPod
> 
> On 2014/11/22, at 7:56, Christopher Yeoh  wrote:
> 
> > +1
> >
> > Sent from my iPad
> >
> >> On 22 Nov 2014, at 4:56 am, Matthew Treinish  wrote:
> >>
> >>
> >> Hi Everyone,
> >>
> >> I'd like to propose we add Ghanshyam Mann (gmann) to the tempest core 
> >> team. Over
> >> the past couple of cycles Ghanshyam has been actively engaged in the 
> >> Tempest
> >> community. Ghanshyam has had one of the highest review counts on Tempest 
> >> for
> >> the past cycle, and he has been providing reviews of consistently 
> >> high quality that show insight into both the project 
> >> internals
> >> and its future direction. I feel that Ghanshyam will make an excellent 
> >> addition
> >> to the core team.
> >>
> >> As per usual, current Tempest core team members, please 
> >> vote +1
> >> or -1 (veto) on the nomination when you get a chance. We'll keep the poll 
> >> open
> >> for 5 days or until everyone has voted.
> >>
> >> Thanks,
> >>
> >> Matt Treinish
> >>
> >> References:
> >>
> >> https://review.openstack.org/#/q/reviewer:%22Ghanshyam+Mann+%253Cghanshyam.mann%2540nectechnologies.in%253E%22,n,z
> >>
> >> http://stackalytics.com/?user_id=ghanshyammann&metric=marks
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] - puka dependency

2014-11-25 Thread raghavendra.lad


Hi All,

I am new to Murano and am trying to deploy it on OpenStack Juno with Ubuntu 14.04 
LTS. However, it needs puka <0.7 as a dependency.
I would appreciate any help with the build guides. I also find that GitHub prompts 
for a username and password while I try to install packages.



Warm Regards,
Raghavendra Lad





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Meeting time update

2014-11-25 Thread James Polley
On Wed, Sep 3, 2014 at 2:27 PM, Thierry Carrez wrote:

> Tomas Sedovic wrote:
> > As you all (hopefully) know, our meetings alternate between Tuesdays
> > 19:00 UTC and Wednesdays 7:00 UTC.
> >
> > Because of the whining^W weekly-expressed preferences[1] of the
> > Europe-based folks, the latter meetings are going to be moved by +1 hour.
> >
> > So the new meeting times are:
> >
> > * Tuesdays at 19:00 UTC (unchanged)
> > * Wednesdays at 8:00 UTC (1 hour later)
> >
> > The first new EU-friendly meeting will take place on Wednesday 17th
> > September.
> >
> > The wiki page has been updated accordingly:
> >
> > https://wiki.openstack.org/wiki/Meetings/TripleO


I've now also updated
https://wiki.openstack.org/wiki/Meetings#TripleO_team_meeting to list the
same time.

>
> >
> > but I don't know how to reflect the change in the iCal feed. Anyone
> > willing to do that, please?
>
> Time updated. For some reason I don't get notified on that page change
> anymore. Sigh.
>
> Also:
>
> http://lists.openstack.org/pipermail/openstack-infra/2013-December/000517.html
>


>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-25 Thread Deepak Shetty
Hi stackers,
   I was having this thought which I believe applies to all projects of
OpenStack (hence the "All" in the subject tag).

My proposal is to have an examples or use-case folder in each project which has
info on how to use the feature/enhancement (which was submitted as part of
a gerrit patch).
In short, a description with screen shots (CLI, not GUI) which should be
submitted (optionally or mandatorily) along with the patch (like how test cases
are now enforced).

I would like to take an example to explain. Take this patch @
https://review.openstack.org/#/c/127587/ which adds a default volume type
in Manila

Now it would have been good if we could have a .txt or .md file along with
the patch that explains:

1) What changes are needed in manila.conf to make this work

2) How to use the cli with this change incorporated

3) Some screen shots of actual usage (the author/submitter would have
tested in devstack before sending the patch, so just copying those CLI screen
shots wouldn't be too big of a deal)

4) Any caution/caveats that one has to keep in mind while using this

It can be argued that some of the above is satisfied via the commit msg and
looking at test cases.
But I personally feel that those still don't give a good visualization of
how a feature patch works in reality.

Adding such an example/use-case file along with the patch helps in multiple ways:

1) It helps the reviewer get a good picture of how/which CLIs are affected
and how this patch fits into the flow

2) It helps documentor get a good view of how this patch adds value, hence
can document it better

3) It may help the author or anyone else write a good detailed blog post
using the examples/usecase as a reference

4) Since this becomes part of the patch and hence the git log, if the
feature/cli/flow changes in the future, we can always refer to how the feature
was designed and how it worked when it was first posted by looking at the example
use-case

5) It helps add a lot of clarity to the patch, since we know how the author
tested it and someone can point out missing flows or issues (which otherwise
have to be visualized)

6) I feel this will help attract more reviewers to the patch, since it is now
clearer what this patch affects, how it affects it, and how flows are
changing; even a novice reviewer can feel more comfortable and be confident
enough to provide comments.

Thoughts ?

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-25 Thread Zhidong Yu
If 6am works for people in US west, then I'm fine with Matt's suggestion
(UTC14:00).

Thanks, Zhidong

On Tue, Nov 25, 2014 at 11:26 PM, Matthew Farrellee  wrote:

> On 11/25/2014 02:37 AM, Zhidong Yu wrote:
>
>  Current meeting time:
>>  18:00UTC: Moscow (9pm)China(2am) US West(10am)
>>
>> My proposal:
>>  18:00UTC: Moscow (9pm)China(2am) US West(10am)
>>  00:00UTC: Moscow (3am)China(8am) US West(4pm)
>>
>
> fyi, a number of us are US East (US West + 3 hours), so...
>
> current meeting time:
>  18:00UTC: Moscow (9pm)  China(2am)  US West(10am)/US East (1pm)
>
> and during daylight savings it's US West(11am)/US East(2pm)
>
> so the proposal is:
>  18:00UTC: Moscow (9pm)  China(2am)  US (W 10am / E 1pm)
>  00:00UTC: Moscow (3am)  China(8am)  US (W 4pm / E 7pm)
>
> given it's literally impossible to schedule a meeting during business
> hours across saratov, china and the us, that's a pretty reasonable
> proposal. my concern is that 00:00UTC may be thin on saratov & US
> participants.
>
> also consider alternating the existing schedule w/ something that's ~4
> hours earlier...
>  14:00UTC: Moscow (5pm)  China(10pm)  US (W 6am / E 9am)
>
> best,
>
>
> matt
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] which core team members are diving into - http://status.openstack.org/elastic-recheck/#1373513

2014-11-25 Thread John Griffith
On Tue, Nov 25, 2014 at 2:22 PM, Vishvananda Ishaya
 wrote:
>
> On Nov 25, 2014, at 7:29 AM, Matt Riedemann  
> wrote:
>
>>
>>
>> On 11/25/2014 9:03 AM, Matt Riedemann wrote:
>>>
>>>
>>> On 11/25/2014 8:11 AM, Sean Dague wrote:
 There is currently a review stream coming into Tempest to add Cinder v2
 tests in addition to the Cinder v1 tests. At the same time the currently
 biggest race fail in the gate related to the projects is
 http://status.openstack.org/elastic-recheck/#1373513 - which is cinder
 related.

 I believe these 2 facts are coupled. The number of volume tests we have
 in tempest is somewhat small, and as such the likelihood of them running
 simultaneously is also small. However the fact that as the # of tests
 with volumes goes up we are getting more of these race fails typically
 means that what's actually happening is 2 vol ops that aren't safe to
 run at the same time, are.

 This remains critical - https://bugs.launchpad.net/cinder/+bug/1373513 -
 with no assignee.

 So we really needs dedicated diving on this (last bug update with any
 code was a month ago), otherwise we need to stop adding these tests to
 Tempest, and honestly start skipping the volume tests if we can't have a
 repeatable success.

-Sean

>>>
>>> I just put up an e-r query for a newly opened bug
>>> https://bugs.launchpad.net/cinder/+bug/1396186 this morning, it looks
>>> similar to bug 1373513 but without the blocked task error in syslog.
>>>
>>> There is a three minute gap between when the volume is being deleted in
>>> c-vol logs and when we see the volume uuid logged again, at which point
>>> tempest has already timed out waiting for the delete to complete.
>>>
>>> We should at least get some patches to add diagnostic logging in these
>>> delete flows (or periodic tasks that use the same locks/low-level i/o
>>> bound commands?) to try and pinpoint these failures.
>>>
>>> I think I'm going to propose a skip patch for test_volume_boot_pattern
>>> since that just seems to be a never ending cause of pain until these
>>> root issues get fixed.
>>>
>>
>> I marked 1396186 as a duplicate of 1373513 since the e-r query for 1373513 
>> had an OR message which was the same as 1396186.
>>
>> I went ahead and proposed a skip for test_volume_boot_pattern due to bug 
>> 1373513 [1] until people get on top of debugging it.
>>
>> I added some notes to bug 1396186, the 3 minute hang seems to be due to a 
>> vgs call taking ~1 minute and an lvs call taking ~2 minutes.
>>
>> I'm not sure if those are hit in the volume delete flow or in some periodic 
>> task, but if there are multiple concurrent worker processes that could be 
>> hitting those commands at the same time can we look at off-loading one of 
>> them to a separate thread or something?
>
> Do we set up devstack to not zero volumes on delete 
> (CINDER_SECURE_DELETE=False) ? If not, the dd process could be hanging the 
> system due to io load. This would get significantly worse with multiple 
> deletes occurring simultaneously.
>
> Vish
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I'm trying to dig into this; so far I believe it might be related to
the new activate/deactivate semantics (a new bug, similar to the old bug
Vish referenced re secure delete).  Also, we are still setting
secure_delete=False, so that isn't the issue (although the
underlying root cause may be the same?).

There are two things that I'm seeing so far:
1. lvdisplay taking up to 5 minutes to respond (but it does respond)
2. lvremove on snapshots failing due to suspended state in dm mapper
and the activation command apparently failing (need to look at this
some more still)

I've been trying to duplicate issue 1, which shows up very frequently
in the gate, but haven't had much luck.  I've been focused on just
running volume tests, however, and am now adding in the full devstack
gate tests.  I suspect there might be something load-related here.

Haven't duplicated issue 2 so far either, but have some ideas of
things that might help here with dmsetup.

Anyway, Eric looked at it for a bit before Paris; I'll chat
with him tomorrow and continue looking at it myself as well.
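
(For reference, the kind of thread off-loading Matt suggests above might look
roughly like the sketch below using eventlet's thread pool; the command and
helper names here are illustrative, not actual Cinder code.)

    # Sketch: run a slow, blocking LVM query in a native OS thread so it
    # does not stall the eventlet event loop (illustrative, not Cinder code).
    import subprocess

    from eventlet import tpool

    def _run_lvs_blocking():
        # Blocking call; under heavy I/O load 'lvs' can take minutes.
        return subprocess.check_output(
            ['lvs', '--noheadings', '-o', 'lv_name']).decode()

    def list_volumes():
        # tpool.execute() runs the callable in a real thread and yields,
        # so other green threads keep being serviced while it waits.
        output = tpool.execute(_run_lvs_blocking)
        return [line.strip() for line in output.splitlines() if line.strip()]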

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-11-25 Thread Ian Wienand

Hi,

My change [1] to enable a consistent tracing mechanism for the many
scripts diskimage-builder runs during its build seems to have hit a
stalemate.

I hope we can agree that the current situation is not good.  When
trying to develop with diskimage-builder, I find myself constantly
going and fiddling with "set -x" in various scripts, requiring me
to re-run things needlessly as I try to trace what's happening.
Conversely, some scripts set -x all the time and give output when you
don't want it.

Now that nodepool is using d-i-b more, it would be even nicer to have
consistency in the tracing so relevant info is captured in the image
build logs.

The crux of the issue seems to be some disagreement between reviewers
over having a single "trace everything" flag or a more fine-grained
approach, as currently implemented after it was asked for in reviews.

I must be honest, I feel a bit silly calling out essentially a
four-line patch here.  But it's been sitting around for months and
I've rebased it countless times.  Please diskimage-builder +2ers, can
we please decide on *something* and I'll implement it.

-i

[1] https://review.openstack.org/#/c/119023/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-25 Thread John Griffith
On Mon, Nov 24, 2014 at 10:53 AM, Monty Taylor  wrote:
> On 11/24/2014 10:14 AM, Drew Fisher wrote:
>>
>>
>> On 11/17/14 10:27 PM, Duncan Thomas wrote:
>>> Is the new driver drop-in compatible with the old one? IF not, can
>>> existing systems be upgraded to the new driver via some manual steps, or
>>> is it basically a completely new driver with similar functionality?
>
> Possibly none of my business, but if the current driver is actually just
> flat broken, then upgrading from it to the new Solaris ZFS driver seems
> unlikely to be possible, simply because the "from" case is broken.

It most certainly is your business as much as anybody else's, and a completely
valid point.

IMO upgrade is a complete non-issue; drivers that are no longer
maintained and obviously don't work should be marked as such in Kilo
and probably removed as well.  The removal question etc. is up to the PTL and
core team, but my two cents is that they're useless anyway for the most part.

>
>> The driver in san/solaris.py focuses entirely on iSCSI.  I don't think
>> existing systems can be upgraded manually but I've never really tried.
>> We started with a clean slate for Solaris 11 and Cinder and added local
>> ZFS support for single-system and demo rigs along with fibre channel
>> and iSCSI drivers.
>>
>> The driver is publicly viewable here:
>>
>> https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/solaris/zfs.py
>>
>> Please note that this driver is based on Havana.  We know it's old and
>> we're working to get it updated to Juno right now.  I can try to work
>> with my team to get a blueprint filed and start working on getting it
>> integrated into trunk.
>>
>> -Drew
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-25 Thread Armando M.
Hi Henry,

Thanks for your input.


> No intention to argue about agent vs. agentless, built-in reference vs.
> external controller; OpenStack is an open community. But I just want
> to say that modularized agent re-factoring does make a lot of sense,
> while forcing customers to piggyback an extra SDN controller on their
> cloud solution is not the only future direction of Neutron.
>

My main point was made with the Kilo timeframe in mind. Once we make
a clear demarcation between the various components, they can develop
in isolation, which is very valuable IMO.
If enough critical mass gets behind an agent-based solution to control the
networking layer, whichever it may be, I would definitely be okay with that,
but I don't think that is what Neutron should be concerned about, but
rather enabling these extra capabilities, the same way Nova does so with
hypervisors.

Cheers,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-25 Thread Mike Bayer

> On Nov 25, 2014, at 8:15 PM, Ahmed RAHAL  wrote:
> 
> Hi,
> 
> On 2014-11-24 at 17:20, Michael Still wrote:
>> Heya,
>> 
>> This is a new database, so its our big chance to get this right. So,
>> ideas welcome...
>> 
>> Some initial proposals:
>> 
>>  - we do what we do in the current nova database -- we have a deleted
>> column, and we set it to true when we delete the instance.
>> 
>>  - we have shadow tables and we move delete rows to a shadow table.
>> 
>>  - something else super clever I haven't thought of.
> 
> Some random thoughts that came to mind ...
> 
> 1/ as far as I remember, you rarely want to delete a row
> - it's usually a heavy DB operation (well, was back then)
> - it's destructive (but we may want that)
> - it creates fragmentation (less of a problem depending on db engine)
> - it can break foreign key relations if not done the proper way

deleting records with foreign key dependencies is a known quantity.  Those 
items are all related, and being able to delete everything related is a 
well-solved problem, via both ON DELETE cascades and standard ORM 
features.


> 
> 2/ updating a row to 'deleted=1'
> - gives an opportunity to set a useful deletion time-stamp
> I would even argue that setting the deleted_at field would suffice to declare 
> a row 'deleted' (as in 'not NULL'). I know, "explicit is better than 
> implicit" …

the logic that’s used is that “deleted” is set to the primary key of the 
record, this is to allow UNIQUE constraints to be set up that serve on the 
non-deleted rows only (e.g. UNIQUE on “x” + “deleted” is possible when there 
are multiple “deleted” rows with “x”).
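
(For anyone unfamiliar with that pattern, a minimal self-contained SQLAlchemy
sketch of it follows; the table and column names are made up for illustration
and are not the actual Nova schema.)

    # Illustrative only: 'deleted' defaults to 0 for live rows and is set to
    # the row's own primary key on soft delete, so the UNIQUE constraint
    # applies to live rows while still allowing many deleted rows per value.
    from sqlalchemy import (Column, Integer, String, UniqueConstraint,
                            create_engine)
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instance'
        id = Column(Integer, primary_key=True)
        hostname = Column(String(255), nullable=False)
        deleted = Column(Integer, nullable=False, default=0)
        __table_args__ = (UniqueConstraint('hostname', 'deleted'),)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    row = Instance(hostname='web-1')
    session.add(row)
    session.commit()

    row.deleted = row.id                     # soft delete: mark with the PK
    session.commit()

    session.add(Instance(hostname='web-1'))  # a new live 'web-1' is allowed
    session.commit()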

> - the update operation is not destructive
> - an admin/DBA can decide when and how to purge/archive rows
> 
> 3/ moving the row at deletion
> - you want to avoid additional steps to complete an operation, thus avoid 
> creating a new record while deleting one
> - even if you wrap things into a transaction, not being able to create a row 
> somewhere can make your delete transaction fail
> - if I were to archive all deleted rows, at scale I'd probably move them to 
> another db server altogether

if you’re really “archiving”, I’d just dump out a log of what occurred to a 
textual log file, then you archive the files.  There’s no need for a pure 
“audit trail” to even be in the relational DB.


> Now, I for one would keep the current mark-as-deleted model.
> 
> I however perfectly get the problem of massive churn with instance 
> creation/deletion.

is there?   inserting and updating rows is a normal thing in relational DBs.


> So, let's be crazy, why not have a config option 'on_delete=mark_delete', 
> 'on_delete=purge' or 'on_delete=archive' and let the admin choose ? (is that 
> feasible ?)

I'm -1 on that.  Whether records need to be soft-deleted or not, and whether those 
soft-deletes need to be accessible in the application, should be decided up 
front.  Adding a multiplicity of options just makes the code that much more 
complicated and fragments its behaviors and test coverage.   The suggestion 
basically tries to avoid making a decision and I think more thought should be 
put into what is truly needed.


> This would especially come handy if the admin decides the global cells 
> database may not need to keep track of deleted instances, the cell-local nova 
> database being the absolute reference for that.

why would an admin decide that this is, or is not, needed?   if the deleted 
data isn’t needed by the live app, it should just be dumped to an archive.  
admins can set how often that archive should be purged, but IMHO the “pipeline” 
of these records should be straight; there shouldn’t be junctions and switches 
that cause there to be multiple data paths.   It leads to too much complexity.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]Flushing trusts

2014-11-25 Thread Kodera, Yasuo
Hi,

Thank you for your help.

> Did you file an RFE for this?

No, I did not.
First, I will check Launchpad for a similar report.
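
(A rough sketch of the kind of cleanup being asked about is below, in the spirit
of token_flush. The table and column names ('trust', 'expires_at', 'deleted_at')
are assumptions for illustration, not a confirmed Keystone schema, and no such
keystone-manage command exists today as far as I know.)

    # Hypothetical trust cleanup; table and column names are assumptions.
    import datetime

    import sqlalchemy as sa

    def flush_trusts(engine):
        now = datetime.datetime.utcnow()
        trust = sa.Table('trust', sa.MetaData(), autoload_with=engine)
        with engine.begin() as conn:
            result = conn.execute(
                trust.delete().where(
                    sa.or_(trust.c.expires_at < now,
                           trust.c.deleted_at.isnot(None))))
            return result.rowcount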


Regards,

Yasuo Kodera


> -Original Message-
> From: Udi Kalifon [mailto:ukali...@redhat.com]
> Sent: Wednesday, November 26, 2014 12:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Keystone]Flushing trusts
> 
> Hi Yasuo.
> 
> Did you file an RFE for this?
> 
> Udi.
> 
> - Original Message -
> From: "Yasuo Kodera" 
> To: "openstack-dev@lists.openstack.org" 
> Sent: Friday, November 21, 2014 2:00:42 AM
> Subject: [openstack-dev] [Keystone]Flushing trusts
> 
> Hi,
> 
> We can use "keystone-manage token_flush" to DELETE the db records of expired 
> tokens.
> 
> Similarly, expired or deleted trusts should be flushed to avoid wasting db space.
> But I don't know of a way to do so.
> Are there any tools or patches?
> 
> If there are some reasons these records must not be deleted easily, please 
> tell me.
> 
> 
> Regards,
> 
> Yasuo Kodera
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon]

2014-11-25 Thread Zhenguo Niu
+1 for both!

On Wed, Nov 26, 2014 at 7:29 AM, Lin Hua Cheng  wrote:

>
> +1 for both!
>
> Yep, thanks for all the hard work!
>
> On Tue, Nov 25, 2014 at 1:35 PM, Ana Krivokapic 
> wrote:
>
>>
>> On 25/11/14 00:09, David Lyle wrote:
>> > I am pleased to nominate Thai Tran and Cindy Lu to horizon-core.
>> >
>> > Both Thai and Cindy have been contributing significant numbers of high
>> > quality reviews during Juno and Kilo cycles. They are consistently
>> > among the top non-core reviewers. They are also responsible for a
>> > significant number of patches to Horizon. Both have a strong
>> > understanding of the Horizon code base and the direction of the project.
>> >
>> > Horizon core team members please vote +1 or -1 to the
>> > nominations either in reply or by private communication. Voting will
>> > close on Friday unless I hear from everyone before that.
>> >
>> > Thanks,
>> > David
>> >
>> >
>>
>> +1 for both.
>>
>> Cindy and Thai, thanks for your hard work!
>>
>> --
>> Regards,
>>
>> Ana Krivokapic
>> Software Engineer
>> OpenStack team
>> Red Hat Inc.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Zhenguo Niu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-25 Thread henry hly
On Wed, Nov 26, 2014 at 12:14 AM, Mathieu Rohon  wrote:
> On Tue, Nov 25, 2014 at 9:59 AM, henry hly  wrote:
>> Hi Armando,
>>
>> Indeed, an agent-less solution like an external controller is very
>> interesting, and in some cases it has advantages over an agent
>> solution, e.g. when software installation is prohibited on the compute node.
>>
>> However, the Neutron agent has its own irreplaceable benefits: support for
>> multiple backends like SR-IOV, macvtap, vhost-user snabbswitch, and hybrid
>> vswitch solutions like NIC offloading or VDP-based TOR offloading. All
>> these backends cannot be easily controlled by a remote OF controller.
>
> Moreover, this solution is tested by the gate (at least ovs), and is
> simpler for small deployments

Not only for small deployments, but also for large scale production
deployments :)

We have deployed more than 500 hosts in the customer's production
cluster. Now we are doing some tuning on L2pop / SG / DHCP agent; after
that, a 1000-node cluster is expected to be supported. Also, for vxlan
data plane performance, we upgraded the host kernel to 3.14 (with udp
tunnel gro/gso), and the performance is quite satisfactory.

The customers have given very positive feedback; they had never thought
that the OpenStack built-in ovs backend could work so well, without any help
from external controller platforms or any special hardware
offloading.


>
>> Also, considering DVR (maybe with upcoming FW for W-E) and Security
>> Groups, a W-E traffic control capability gap still exists between the linux
>> stack and the OF flowtable, whether in features like advanced netfilter or in
>> performance for webserver apps which incur huge numbers of concurrent sessions
>> (because of the basic OF upcall model, the more complex the flow rules, the
>> less megaflow aggregation might take effect)
>>
>> Thanks to L2pop and DVR, many customers now give the feedback that
>> Neutron has made great progress, and already meets nearly all their
>> L2/L3 connectivity W-E control needs (the next big expectation is
>> N-S traffic directing, like a dynamic routing agent), without forcing
>> them to learn and integrate another huge platform like an external SDN
>> controller.
>
> +100. Note that Dynamic routing is in progress.
>
>> No intention to argue about agent vs. agentless, built-in reference vs.
>> external controller; OpenStack is an open community. But I just want
>> to say that modularized agent re-factoring does make a lot of sense,
>> while forcing customers to piggyback an extra SDN controller on their
>> cloud solution is not the only future direction of Neutron.
>>
>> Best Regard
>> Henry
>>
>> On Wed, Nov 19, 2014 at 5:45 AM, Armando M.  wrote:
>>> Hi Carl,
>>>
>>> Thanks for kicking this off. I am also willing to help as a core reviewer of
>>> blueprints and code
>>> submissions only.
>>>
>>> As for the ML2 agent, we all know that for historic reasons Neutron has
>>> grown to be not only a networking orchestration project but also a reference
>>> implementation that resembles what some might call an SDN controller.
>>>
>>> I think that most of the Neutron folks realize that we need to move away
>>> from this model and rely on a strong open source SDN alternative; for these
>>> reasons, I don't think that pursuing an ML2 agent would be a path we should
>>> go down to anymore. It's time and energy that could be more effectively
>>> spent elsewhere, especially on the refactoring. Now if the refactoring
>>> effort ends up being labelled ML2 Agent, I would be okay with it, but my gut
>>> feeling tells me that any attempt at consolidating code to embrace more than
>>> one agent logic at once is gonna derail the major goal of paying down the so
>>> called agent debt.
>>>
>>> My 2c
>>> Armando
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-25 Thread Ahmed RAHAL

Hi,

On 2014-11-24 at 17:20, Michael Still wrote:

Heya,

This is a new database, so its our big chance to get this right. So,
ideas welcome...

Some initial proposals:

  - we do what we do in the current nova database -- we have a deleted
column, and we set it to true when we delete the instance.

  - we have shadow tables and we move delete rows to a shadow table.

  - something else super clever I haven't thought of.


Some random thoughts that came to mind ...

1/ as far as I remember, you rarely want to delete a row
- it's usually a heavy DB operation (well, was back then)
- it's destructive (but we may want that)
- it creates fragmentation (less of a problem depending on db engine)
- it can break foreign key relations if not done the proper way

2/ updating a row to 'deleted=1'
- gives an opportunity to set a useful deletion time-stamp
I would even argue that setting the deleted_at field would suffice to 
declare a row 'deleted' (as in 'not NULL'). I know, "explicit is better 
than implicit" ...

- the update operation is not destructive
- an admin/DBA can decide when and how to purge/archive rows

3/ moving the row at deletion
- you want to avoid additional steps to complete an operation, thus 
avoid creating a new record while deleting one
- even if you wrap things into a transaction, not being able to create a 
row somewhere can make your delete transaction fail
- if I were to archive all deleted rows, at scale I'd probably move them 
to another db server altogether



Now, I for one would keep the current mark-as-deleted model.

I however perfectly get the problem of massive churn with instance 
creation/deletion.
So, let's be crazy, why not have a config option 
'on_delete=mark_delete', 'on_delete=purge' or 'on_delete=archive' and 
let the admin choose ? (is that feasible ?)


This would especially come handy if the admin decides the global cells 
database may not need to keep track of deleted instances, the cell-local 
nova database being the absolute reference for that.


HTH,

Ahmed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova host-update gives error 'Virt driver does not implement host disabled status'

2014-11-25 Thread Chen CH Ji
Are you using libvirt? It's not implemented there;
I guess your bug is talking about other hypervisors?

the message was printed here:
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/hosts.py#n236
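
(Roughly what happens, as a simplified sketch rather than the literal Nova code:
the API extension calls the virt driver's set_host_enabled(), the base driver
raises NotImplementedError when a driver doesn't override it, and hosts.py
translates that into the HTTP 501 you see.)

    # Simplified sketch of the failure path; not the literal Nova code.
    import webob.exc

    class ComputeDriver(object):
        def set_host_enabled(self, enabled):
            # Drivers that support disabling a host override this method.
            raise NotImplementedError()

    class LibvirtDriver(ComputeDriver):
        # Does not override set_host_enabled, so the base method is used.
        pass

    def host_update(driver, status):
        try:
            return driver.set_host_enabled(status == 'enable')
        except NotImplementedError:
            # This is where the 501 in the error message comes from.
            raise webob.exc.HTTPNotImplemented(
                explanation='Virt driver does not implement host disabled '
                            'status.')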

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Vineet Menon 
To: openstack-dev 
Date:   11/26/2014 12:10 AM
Subject:[openstack-dev] [nova] nova host-update gives error 'Virt
driver does not implement host disabled status'



Hi,

I'm trying to reproduce the bug
https://bugs.launchpad.net/nova/+bug/1259535.
While trying to issue the command, nova host-update --status disable
machine1, an error is thrown saying,

  ERROR (HTTPNotImplemented): Virt driver does not implement host disabled
  status. (HTTP 501) (Request-ID: req-1f58feda-93af-42e0-b7b6-bcdd095f7d8c)


What is this error about?

Regards,
Vineet Menon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-25 Thread Mike Perez
On 14:37 Tue 25 Nov , Drew Fisher wrote:
> 
> 
> On 11/25/14, 12:56 PM, Jay S. Bryant wrote:
> > Monty,
> > 
> > I agree that upgrade is not a significant concern right now if the
> > existing driver is not working.
> > 
> > Drew,
> > 
> > I am having trouble following where you guys are currently at with this
> > work.  I would like to help get you guys up and going during Kilo.
> > 
> > I am concerned that maybe there is confusion about the
> > blueprints/patches that we are all talking about here.  I see this
> > Blueprint that was accepted for Juno and appears to have an associated
> > patch merged:  [1]  I also see this Blueprint that doesn't appear to be
> > started yet: [2]  So, can you help me understand what it is you are
> > hoping to get in for Kilo?
> 
> OK, so we have two drivers here at Oracle in Solaris.
> 
> 1:  A driver for the ZFS storage appliance (zfssa).  That driver was
> integrated into the Icehouse branch and is still there in Juno.  That team,
> separate from mine, is working alongside us on the CI requirements
> to keep the driver in Kilo.
> 
> 2:  The second driver is one for generic ZFS on Solaris.  We have three
> different sub-drivers in one:
> 
> - An iSCSI driver (COMSTAR on top of ZFS)
> - A FC driver (on top of ZFS)
> - A simple "local" ZFS driver useful for single-system / devstack /
>   demo rigs.
> 
> The local driver simply creates ZVOLs for Zones to use on the local
> system.  It's not designed with any kind of migration abilities unlike
> iSCSI or FC.
> 
> > 
> > I know that you have been concerned about CI.  For new drivers we are
> > allowing some grace period to get things working.  Once we get the
> > confusion over blueprints worked out and have some code to start
> > reviewing we can continue to discuss that issue.
> 
> My plan is to discuss this with my team next week after the
> holiday.  Once we get something in place on our side, we'll try to get a
> blueprint submitted ASAP for review.
> 
> Sound good?

Hi Drew,

We're not accepting any more drivers for the Kilo release. This was from
a discussion that started back in September and was mentioned on the mailing list
a couple of times [1][2], in multiple Cinder meetings [3][4], and at the last design
summit. The one driver we have from Oracle is a ZFS NFS [5] driver that was
registered before the deadline.

Could you verify with your team whether they plan to fix the existing Solaris
iSCSI driver [6], or can we remove it?

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044990.html
[2] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html
[3] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-09-03-16.01.log.html
[4] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-29-16.00.log.html
[5] - 
https://blueprints.launchpad.net/cinder/+spec/oracle-zfssa-nfs-cinder-driver
[6] - 
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/san/solaris.py

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-25 Thread Georgy Okrokvertskhov
Hi,

In Murano we did a couple of projects related to networking orchestration. As
NFV is a quite broad term, I can say that the Murano approach fits into it too.
In our case we had a bunch of virtual appliances with specific networking
capabilities and requirements. Some of these appliances had to work
together to provide the required functionality. These virtual appliances were
exposed as Murano applications with defined dependencies between apps, and
operators were able to create different networking configurations with these
apps, combining them according to their requirements/capabilities. Underlying
workflows were responsible for binding these virtual appliances together.
I will be glad to participate in tomorrow's meeting and answer any questions
you have.

Thanks
Georgy

On Tue, Nov 25, 2014 at 6:14 AM, Marc Koderer  wrote:

> Hi Angus,
>
> On 25.11.2014 at 12:48, Angus Salkeld wrote:
>
> On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer wrote:
>
>> Hi all,
>>
>> as discussed during our summit sessions we would like to expand the scope
>> of the Telco WG (aka OpenStack NFV group) and start working
>> on the orchestration topic (ETSI MANO).
>>
>> Therefore we started with an etherpad [1] to collect ideas, use-cases and
>> requirements.
>>
>
> Hi Marc,
>
> You have quite a high acronym-per-sentence ratio going on in that etherpad ;)
>
>
> Haha, welcome to the telco world :)
>
>
> From Heat's perspective, we have a lot going on already, but we would love
> to support
> what you are doing.
>
>
> That’s exactly what we are planning. What we have is a long list of
> use-cases and
> requirements. We need to transform them into specs for the OpenStack
> projects.
> Many of those specs won't be NFV-specific; for instance, a Telco cloud will
> be highly
> distributed. So what we need is multi-region Heat support (which is
> already a planned
> feature for Heat, as I learned today).
>
>
> You need to start getting specific about what you need and what the
> gaps are.
> I see you are already looking at higher layers (TOSCA); also check out
> Murano as well.
>
>
> Yep, I will check out Murano... I never had a closer look at it.
>
> Regards
> Marc
>
>
> Regards
> -Angus
>
>
>> Goal is to discuss this document and move it onto the Telco WG wiki [2]
>> when
>> it becomes stable.
>>
>> Feedback welcome ;)
>>
>> Regards
>> Marc
>> Deutsche Telekom
>>
>> [1] https://etherpad.openstack.org/p/telco_orchestration
>> [2] https://wiki.openstack.org/wiki/TelcoWorkingGroup
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon]

2014-11-25 Thread Lin Hua Cheng
+1 for both!

Yep, thanks for all the hard work!

On Tue, Nov 25, 2014 at 1:35 PM, Ana Krivokapic  wrote:

>
> On 25/11/14 00:09, David Lyle wrote:
> > I am pleased to nominate Thai Tran and Cindy Lu to horizon-core.
> >
> > Both Thai and Cindy have been contributing significant numbers of high
> > quality reviews during Juno and Kilo cycles. They are consistently
> > among the top non-core reviewers. They are also responsible for a
> > significant number of patches to Horizon. Both have a strong
> > understanding of the Horizon code base and the direction of the project.
> >
> > Horizon core team members please vote +1 or -1 to the
> > nominations either in reply or by private communication. Voting will
> > close on Friday unless I hear from everyone before that.
> >
> > Thanks,
> > David
> >
> >
>
> +1 for both.
>
> Cindy and Thai, thanks for your hard work!
>
> --
> Regards,
>
> Ana Krivokapic
> Software Engineer
> OpenStack team
> Red Hat Inc.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [ironic] how to remove check-tempest-dsvm-ironic-pxe_ssh on Nova check

2014-11-25 Thread Devananda van der Veen
On Tue Nov 25 2014 at 7:20:00 AM Sean Dague  wrote:

> On 11/25/2014 10:07 AM, Jim Rollenhagen wrote:
> > On Tue, Nov 25, 2014 at 08:02:56AM -0500, Sean Dague wrote:
> >> When at Summit I discovered that check-tempest-dsvm-ironic-pxe_ssh is
> >> now voting on Nova check queue. The reasons given is that the Nova team
> >> ignored the interface contract that was being provided to Ironic, broke
> >> them, so the Ironic team pushed for co-gating (which basically means the
> >> interface contract is now enforced by a 3rd party outside of Nova /
> Ironic).
> >>
> >> However, this was all in vague term, and I think is exactly the kind of
> >> thing we don't want to do. Which is use the gate as a proxy fight over
> >> teams breaking contracts with other teams.
> >>
> >> So I'd like to dive into what changes happened and what actually broke,
> >> so that we can get back to doing this smarter.
> >>
> >> Because if we are going to continue to grow as a community, we have to
> >> co-gate less. It has become a crutch to not think about interfaces and
> >> implications of changes, and is something we need to be doing a lot
> less of.
>

Definitely -- and we're already co-gating less than other projects :)

You might notice that ironic jobs aren't running in the check queues for
any other Integrated project, even though Ironic depends on interacting
directly with Glance, Neutron, Keystone, and (for some drivers) Swift.


> >
> > Completely agree here; I would love to not gate Nova on Ironic.
> >
> > The major problem is that Nova's driver API is explicitly not stable.[0]
> > If the driver API becomes stable and properly versioned, Nova should be
> > able to change the driver API without breaking Ironic.
> >
> > Now, this won't fix all cases where Nova could break Ironic. The
> > resource tracker or scheduler could change in a breaking way. However, I
> > think the driver API has been the most common cause of Ironic breakage,
> > so I think it's a great first step.
> >
> > // jim
> >
> > [0] http://docs.openstack.org/developer/nova/devref/
> policies.html#public-contractual-apis
>
> So we actually had a test in tree for that part of the contract...
>
>
We did. It was added here:
  https://review.openstack.org/#/c/98201

.. and removed here:
  https://review.openstack.org/#/c/111425/

.. because there is now unit test coverage for the internal API usage by
in-tree virt drivers via assertPublicAPISignatures() here:
  https://git.openstack.org/cgit/openstack/nova/tree/nova/test.py#n370
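
(For anyone unfamiliar with it, the idea is roughly the sketch below, which is
simplified and not the actual nova.test implementation: walk the public methods
of the base class and assert that each override keeps the same argument list.)

    # Simplified sketch of a public-API signature check; not the actual
    # nova.test assertPublicAPISignatures() implementation.
    import inspect

    def assert_public_api_signatures(base_obj, obj):
        for name, base_attr in inspect.getmembers(base_obj, callable):
            if name.startswith('_'):
                continue
            sub_attr = getattr(obj, name, None)
            if sub_attr is None or not callable(sub_attr):
                continue
            base_spec = inspect.getfullargspec(base_attr)
            sub_spec = inspect.getfullargspec(sub_attr)
            assert base_spec.args == sub_spec.args, (
                '%s changed its signature: %s != %s'
                % (name, base_spec.args, sub_spec.args))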


We don't need the surface to be public contractual, we just need to know
> what things Ironic is depending on and realize that can't be changed
> without some compatibility code put in place.


There is the virt driver API, which is the largest and most obvious one.
Then there's the HostManager, ResourceTracker, and SchedulerFilter classes
(or subclasses thereof). None of these are "public contractual APIs" as
defined by Nova. Without any guarantee of stability in those interfaces, I
believe co-gating is a reasonable thing to do between the projects.

Also, there have been discussions [2] at and leading up to the Paris summit
(some ongoing for many cycles now) regarding changing every one of those
interfaces. Until those interfaces are refactored / split out / otherwise
deemed stable, I would like to continue running Ironic's functional tests
on all Nova changes. If you think we don't need to co-gate while that work
is underway, I'd like to understand why, and what that would look like.

Again... without knowing exactly what happened (I was on leave) it's
> hard to come up with a solution. However, I think the co-gate was an
> elephant gun that we don't actually want.
>

Apologies, but I don't recall exactly what time period you were on leave
for, so you may have already seen some or all of these.

I have looked up several cases of asymmetrical breaks that happened (due
to changes in multiple projects) during Juno (before Ironic was voting in
Nova's check queue). At least one of these was the result of a change in
Nova after the Ironic driver merged. Links at the bottom for reference [1].

Here is a specific example where a patch introduced subtle behavior changes
within the resource tracker that were not caught by any of Nova's tests,
and would not have been caught by the API contract test, even if that had
been in place at the time, nor any other API contract test, since it did
not, in fact, change the API. It changed a behavior of the resource tracker
which, it turns out, libvirt does not use (at least within the
devstack-gate environment).

https://review.openstack.org/#/c/71557/33/nova/compute/resource_tracker.py

The problem is at line 385, where the supplied 'stats' are overwritten.
None of the libvirt-based tests touched this code path, though.

>     def _write_ext_resources(self, resources):
>         resources['stats'] = {}  ## right here
>         resources['stats'].update(self.stats)
>         self.ext_resources_handler.write_resources(re

Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-11-25 Thread Monty Taylor
On 11/25/2014 05:33 PM, Angelo Matarazzo wrote:
> 
> Hi all,
> my team and I are working on pxe boot feature very similar to the
> "Discless VM" one  in Active blueprint list[1]
> The blueprint [2] is no longer active and we created a new spec [3][4].
> 
> Nova core reviewers commented on our spec, and the first and most
> important objection is that there is not a compelling reason to provide
> this kind of feature: booting from the network.

Why not just work with the Ironic project, which already has OpenStack
code to deal with PXE booting, and already does it in a way that's
driven by Nova?

> Aside from the specific implementation, I think that some members of
> the TelcoWorkingGroup could be interested in this and could provide a use case.
> I would also like to add this item to the agenda of next meeting
> 
> Any thought?
> 
> Best regards,
> Angelo
> 
> [1] https://wiki.openstack.org/wiki/TelcoWorkingGroup#Active_Blueprints
> [2] https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe
> [3] https://blueprints.launchpad.net/nova/+spec/boot-order-for-instance
> [4] https://review.openstack.org/#/c/133254/
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV][Telco] pxe-boot

2014-11-25 Thread Angelo Matarazzo


Hi all,
my team and I are working on a pxe boot feature very similar to the
"Discless VM" one in the Active blueprint list [1].

The blueprint [2] is no longer active and we created a new spec [3][4].

Nova core reviewers commented on our spec, and the first and most
important objection is that there is not a compelling reason to
provide this kind of feature: booting from the network.


Aside from the specific implementation, I think that some members of the
TelcoWorkingGroup could be interested in this and could provide a use case.

I would also like to add this item to the agenda of the next meeting.

Any thought?

Best regards,
Angelo

[1] https://wiki.openstack.org/wiki/TelcoWorkingGroup#Active_Blueprints
[2] https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe
[3] https://blueprints.launchpad.net/nova/+spec/boot-order-for-instance
[4] https://review.openstack.org/#/c/133254/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-25 Thread Brant Knudson
> Could someone take over my patch? :)
> I'm quite busy doing other things, and it isn't my role to work on such
> things directly. I often send a patch here and there when I see fit, but
> here, I don't think I'm the best person to do so.
>
>
I updated the patch based on the comments so far:
https://review.openstack.org/#/c/136278/

Looks like it's failing the Jenkins check now due to unrelated issues.

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-25 Thread Ben Nemec
On 11/18/2014 05:11 PM, Sachi King wrote:
> On Wednesday, November 12, 2014 02:06:02 PM Doug Hellmann wrote:
>> During our “Graduation Schedule” summit session we worked through the list 
>> of modules remaining the in the incubator. Our notes are in the etherpad 
>> [1], but as part of the "Write it Down” theme for Oslo this cycle I am also 
>> posting a summary of the outcome here on the mailing list for wider 
>> distribution. Let me know if you remembered the outcome for any of these 
>> modules differently than what I have written below.
>>
>> Doug
>>
>>
>>
>> Deleted or deprecated modules:
>>
>> funcutils.py - This was present only for python 2.6 support, but it is no 
>> longer used in the applications. We are keeping it in the stable/juno branch 
>> of the incubator, and removing it from master 
>> (https://review.openstack.org/130092)
>>
>> hooks.py - This is not being used anywhere, so we are removing it. 
>> (https://review.openstack.org/#/c/125781/)
>>
>> quota.py - A new quota management system is being created 
>> (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
>> replace this, so we will keep it in the incubator for now but deprecate it.
>>
>> crypto/utils.py - We agreed to mark this as deprecated and encourage the use 
>> of Barbican or cryptography.py (https://review.openstack.org/134020)
>>
>> cache/ - Morgan is going to be working on a new oslo.cache library as a 
>> front-end for dogpile, so this is also deprecated 
>> (https://review.openstack.org/134021)
>>
>> apiclient/ - With the SDK project picking up steam, we felt it was safe to 
>> deprecate this code as well (https://review.openstack.org/134024).
>>
>> xmlutils.py - This module was used to provide a security fix for some XML 
>> modules that have since been updated directly. It was removed. 
>> (https://review.openstack.org/#/c/125021/)
>>
>>
>>
>> Graduating:
>>
>> oslo.context:
>> - Dims is driving this
>> - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
>> - includes:
>>  context.py
>>
>> oslo.service:
> 
> During the "Oslo graduation schedule" meet-up, someone mentioned they'd 
> be willing to help out as a contact for questions during this process.
> Can anyone put me in contact with that person, or does anyone remember who it was?

I think that was me, but I've been kind of out of touch since the
summit.  Feel free to contact me with any questions though.  bnemec on
#openstack-oslo.

> 
>> - Sachi is driving this
>> - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
>> - includes:
>>  eventlet_backdoor.py
>>  loopingcall.py
>>  periodic_task.py
>>  request_utils.py
>>  service.py
>>  sslutils.py
>>  systemd.py
>>  threadgroup.py
>>
>> oslo.utils:
>> - We need to look into how to preserve the git history as we import these 
>> modules.
>> - includes:
>>  fileutils.py
>>  versionutils.py
>>
>>
>>
>> Remaining untouched:
>>
>> scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
>> whether Gantt has enough traction yet so we will hold onto these in the 
>> incubator for at least another cycle.
>>
>> report/ - There’s interest in creating an oslo.reports library containing 
>> this code, but we haven’t had time to coordinate with Solly about doing that.
>>
>>
>>
>> Other work:
>>
>> We will continue the work on oslo.concurrency and oslo.log that we started 
>> during Juno.
>>
>> [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] proposal: alternating weekly meeting time [doodle poll created]

2014-11-25 Thread David Lyle
Thanks Richard for setting up the poll.

Today, we finalized the new meeting times. The decision is to meet at
alternating times, 2000 UTC and 1200 UTC, on Wednesdays. The meetings will remain
in openstack-meeting-3.

A schedule of upcoming meeting times has been added to
https://wiki.openstack.org/wiki/Meetings/Horizon

The next Horizon meeting will be on Wed Dec 3 at 2000 UTC in
openstack-meeting-3. I'll also send out a reminder.

David

On Tue, Nov 25, 2014 at 2:23 PM, Richard Jones 
wrote:

> Thanks!
>
>
> On Tue Nov 25 2014 at 3:15:49 AM Yves-Gwenaël Bourhis <
> yves-gwenael.bour...@cloudwatt.com> wrote:
>
>>
>>
>> Le 24/11/2014 04:20, Richard Jones a écrit :
>> > Thanks everyone, I've closed the poll. I'm sorry to say that there's no
>> > combination of two timeslots which allows everyone to attend a meeting.
>> > Of the 25 respondents, the best we can do is cater for 24 of you.
>> >
>> > Optimising for the maximum number of attendees, the potential meeting
>> > times are 2000 UTC Tuesday and 1000 UTC on one of Monday, Wednesday or
>> > Friday. In all three cases the only person who has indicated they cannot
>> > attend is Lifeless.
>> >
>> > Unfortunately, David has indicated that he can't be present at the
>> > Tuesday 2000 UTC slot. Optimising for him as a required attendee for
>> > both meetings means we lose an additional attendee, and gives us the
>> > Wednesday 2000 UTC slot and a few options:
>> >
>> > - Monday, Wednesday and Thursday at 1200 UTC (Lifeless and ygbo miss)
>>
>> 1200 UTC is perfect for me.
>> The doodle was proposing 1200 UTC to 1400 UTC, and within that two-hour
>> window I cannot be sure to be there, but if it's 1200 "on the spot"
>> I can for sure :-)
>> Since I couldn't specify this on the doodle I didn't select this slot. A
>> one-hour window would have allowed more precision, but I understand
>> your concern that the doodle would have been too long to scroll.
>>
>> --
>> Yves-Gwenaël Bourhis
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-25 Thread Steve Gordon
- Original Message -
> From: "Daniel P. Berrange" 
> To: "Dan Smith" 
> 
> On Thu, Nov 13, 2014 at 05:43:14PM +, Daniel P. Berrange wrote:
> > On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
> > > > That sounds like something worth exploring at least, I didn't know
> > > > about that kernel build option until now :-) It sounds like it ought
> > > > to be enough to let us test the NUMA topology handling, CPU pinning
> > > > and probably huge pages too.
> > > 
> > > Okay. I've been vaguely referring to this as a potential test vector,
> > > but only just now looked up the details. That's my bad :)
> > > 
> > > > The main gap I'd see is NUMA aware PCI device assignment since the
> > > > PCI <-> NUMA node mapping data comes from the BIOS and it does not
> > > > look like this is fakeable as is.
> > > 
> > > Yeah, although I'd expect that the data is parsed and returned by a
> > > library or utility that may be a hook for fakeification. However, it may
> > > very well be more trouble than it's worth.
> > > 
> > > I still feel like we should be able to test generic PCI in a similar way
> > > (passing something like a USB controller through to the guest, etc).
> > > However, I'm willing to believe that the intersection of PCI and NUMA is
> > > a higher order complication :)
> > 
> > Oh I forgot to mention with PCI device assignment (as well as having a
> > bunch of PCI devices available[1]), the key requirement is an IOMMU.
> > AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
> > out of luck for even basic PCI assignment testing inside VMs.
> 
> Ok, turns out that wasn't entirely accurate in general.
> 
> KVM *can* emulate an IOMMU, but it requires that the guest be booted
> with the q35 machine type, instead of the ancient PIIX4 machine type,
> and also QEMU must be launched with "-machine iommu=on". We can't do
> this in Nova, so although it is theoretically possible, it is not
> doable for us in reality :-(
> 
> Regards,
> Daniel

Is it still worth pursuing virtual testing of the NUMA awareness work you, 
Nikola, and others have been doing? It seems to me it would still be preferable 
to do this virtually (and ideally in the gate) wherever possible.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-25 Thread Vineet Menon
Hi,
Thanks for clearing that up... I had a hard time understanding the screws
before I went with testr.

Regards,
Vineet
On 25 Nov 2014 17:46, "Matthew Treinish"  wrote:

> On Mon, Nov 24, 2014 at 10:49:27AM +0100, Angelo Matarazzo wrote:
> > Sorry for my previous message with wrong subject
> >
> > Hi all,
> > By reading the tempest documentation page [1] a user can run tempest tests
> > by using either testr, run_tempest.sh, or tox.
> >
> > What is the best practice?
> > run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it is my
> > preferred way, currently.
> > Any thoughts?
>
> So the options are there for different reasons and fit different purposes.
> The run_tempest.sh script exists mostly for legacy reasons, as some people
> prefer to use it, and it predates the usage of tox in tempest. It also has
> some advantages, such as being able to run without a venv, and it provides
> some other options.
>
> Tox is what we use for gating, and we keep most of the job definitions for
> gating in the tox.ini file. If you're trying to reproduce a gate run locally,
> tox is what is recommended. Personally I use it to run everything, just
> because I often mix unit tests and tempest runs and I like having separate
> venvs for both being created on demand.
>
> Calling testr directly is just what all of these tools are doing under the
> covers, and it'll always be an option.
>
> One thing we're looking to do this cycle is to add a single entry point for
> running tempest, which will hopefully clear up this confusion and make the
> interface for interacting with tempest a bit nicer. When this work is done,
> the run_tempest.sh script will most likely disappear and tox will probably
> just be used for gating job definitions and call the new entry point instead
> of testr directly.
>
> >
> > BR,
> > Angelo
> >
> > [1] http://docs.openstack.org/developer/tempest/overview.html#quickstart
> >
>
> -Matt Treinish
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-25 Thread Drew Fisher


On 11/25/14, 12:56 PM, Jay S. Bryant wrote:
> Monty,
> 
> I agree that upgrade is not a significant concern right now if the
> existing driver is not working.
> 
> Drew,
> 
> I am having trouble following where you guys are currently at with this
> work.  I would like to help get you guys up and going during Kilo.
> 
> I am concerned that maybe there is confusion about the
> blueprints/patches that we are all talking about here.  I see this
> Blueprint that was accepted for Juno and appears to have an associated
> patch merged:  [1]  I also see this Blueprint that doesn't appear to be
> started yet: [2]  So, can you help me understand what it is you are
> hoping to get in for Kilo?

OK, so we have two drivers here at Oracle in Solaris.

1:  A driver for the ZFS storage appliance (zfssa).  That driver was
integrated into the Icehouse branch and is still there in Juno.  That team,
separate from mine, is working alongside us on the CI requirements
to keep the driver in Kilo.

2:  The second driver is one for generic ZFS on Solaris.  We have three
different sub-drivers in one:

- An iSCSI driver (COMSTAR on top of ZFS)
- A FC driver (on top of ZFS)
- A simple "local" ZFS driver useful for single-system / devstack /
  demo rigs.

The local driver simply creates ZVOLs for Zones to use on the local
system.  It's not designed with any kind of migration abilities unlike
iSCSI or FC.

> 
> I know that you have been concerned about CI.  For new drivers we are
> allowing some grace period to get things working.  Once we get the
> confusion over blueprints worked out and have some code to start
> reviewing we can continue to discuss that issue.

My plan is to discuss this with my team next week after the
holiday.  Once we get something in place on our side, we'll try to get a
blueprint submitted ASAP for review.

Sound good?

-Drew


> 
> Look forward to hearing back from you!
> Jay
> 
> 
> [1]
> https://blueprints.launchpad.net/cinder/+spec/oracle-zfssa-cinder-driver
> [2]
> https://blueprints.launchpad.net/cinder/+spec/oracle-zfssa-nfs-cinder-driver
> 
> 
> On 11/24/2014 11:53 AM, Monty Taylor wrote:
>> On 11/24/2014 10:14 AM, Drew Fisher wrote:
>>>
>>> On 11/17/14 10:27 PM, Duncan Thomas wrote:
 Is the new driver drop-in compatible with the old one? IF not, can
 existing systems be upgraded to the new driver via some manual
 steps, or
 is it basically a completely new driver with similar functionality?
>> Possibly none of my business - but if the current driver is actually just
>> flat broken, then upgrading from it to the new Solaris ZFS driver seems
>> unlikely to be possible, simply because the "from" case is broken.
>>
>>> The driver in san/solaris.py focuses entirely on iSCSI.  I don't think
>>> existing systems can be upgraded manually but I've never really tried.
>>> We started with a clean slate for Solaris 11 and Cinder and added local
>>> ZFS support for single-system and demo rigs, along with fibre channel
>>> and iSCSI drivers.
>>>
>>> The driver is publically viewable here:
>>>
>>> https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/solaris/zfs.py
>>>
>>>
>>> Please note that this driver is based on Havana.  We know it's old and
>>> we're working to get it updated to Juno right now.  I can try to work
>>> with my team to get a blueprint filed and start working on getting it
>>> integrated into trunk.
>>>
>>> -Drew
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] which core team members are diving into - http://status.openstack.org/elastic-recheck/#1373513

2014-11-25 Thread Matthew Treinish
On Tue, Nov 25, 2014 at 01:22:01PM -0800, Vishvananda Ishaya wrote:
> 
> On Nov 25, 2014, at 7:29 AM, Matt Riedemann  
> wrote:
> 
> > 
> > 
> > On 11/25/2014 9:03 AM, Matt Riedemann wrote:
> >> 
> >> 
> >> On 11/25/2014 8:11 AM, Sean Dague wrote:
> >>> There is currently a review stream coming into Tempest to add Cinder v2
> >>> tests in addition to the Cinder v1 tests. At the same time the currently
> >>> biggest race fail in the gate related to the projects is
> >>> http://status.openstack.org/elastic-recheck/#1373513 - which is cinder
> >>> related.
> >>> 
> >>> I believe these 2 facts are coupled. The number of volume tests we have
> >>> in tempest is somewhat small, and as such the likelihood of them running
> >>> simultaneously is also small. However the fact that as the # of tests
> >>> with volumes goes up we are getting more of these race fails typically
> >>> means that what's actually happening is 2 vol ops that aren't safe to
> >>> run at the same time, are.
> >>> 
> >>> This remains critical - https://bugs.launchpad.net/cinder/+bug/1373513 -
> >>> with no assignee.
> >>> 
> >>> So we really need dedicated diving on this (last bug update with any
> >>> code was a month ago), otherwise we need to stop adding these tests to
> >>> Tempest, and honestly start skipping the volume tests if we can't have a
> >>> repeatable success.
> >>> 
> >>>-Sean
> >>> 
> >> 
> >> I just put up an e-r query for a newly opened bug
> >> https://bugs.launchpad.net/cinder/+bug/1396186 this morning, it looks
> >> similar to bug 1373513 but without the blocked task error in syslog.
> >> 
> >> There is a three minute gap between when the volume is being deleted in
> >> c-vol logs and when we see the volume uuid logged again, at which point
> >> tempest has already timed out waiting for the delete to complete.
> >> 
> >> We should at least get some patches to add diagnostic logging in these
> >> delete flows (or periodic tasks that use the same locks/low-level i/o
> >> bound commands?) to try and pinpoint these failures.
> >> 
> >> I think I'm going to propose a skip patch for test_volume_boot_pattern
> >> since that just seems to be a never ending cause of pain until these
> >> root issues get fixed.
> >> 
> > 
> > I marked 1396186 as a duplicate of 1373513 since the e-r query for 1373513 
> > had an OR message which was the same as 1396186.
> > 
> > I went ahead and proposed a skip for test_volume_boot_pattern due to bug 
> > 1373513 [1] until people get on top of debugging it.
> > 
> > I added some notes to bug 1396186, the 3 minute hang seems to be due to a 
> > vgs call taking ~1 minute and an lvs call taking ~2 minutes.
> > 
> > I'm not sure if those are hit in the volume delete flow or in some periodic 
> > task, but if there are multiple concurrent worker processes that could be 
> > hitting those commands at the same time can we look at off-loading one of 
> > them to a separate thread or something?
> 
> Do we set up devstack to not zero volumes on delete 
> (CINDER_SECURE_DELETE=False) ? If not, the dd process could be hanging the 
> system due to io load. This would get significantly worse with multiple 
> deletes occurring simultaneously.

Yes, we do that:

http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n139

and

http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n170

it can be overridden, but I don't think that any of the job definitions do that.

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] proposal: alternating weekly meeting time [doodle poll created]

2014-11-25 Thread Richard Jones
Thanks!

On Tue Nov 25 2014 at 3:15:49 AM Yves-Gwenaël Bourhis <
yves-gwenael.bour...@cloudwatt.com> wrote:

>
>
> Le 24/11/2014 04:20, Richard Jones a écrit :
> > Thanks everyone, I've closed the poll. I'm sorry to say that there's no
> > combination of two timeslots which allows everyone to attend a meeting.
> > Of the 25 respondents, the best we can do is cater for 24 of you.
> >
> > Optimising for the maximum number of attendees, the potential meeting
> > times are 2000 UTC Tuesday and 1000 UTC on one of Monday, Wednesday or
> > Friday. In all three cases the only person who has indicated they cannot
> > attend is Lifeless.
> >
> > Unfortunately, David has indicated that he can't be present at the
> > Tuesday 2000 UTC slot. Optimising for him as a required attendee for
> > both meetings means we lose an additional attendee, and gives us the
> > Wednesday 2000 UTC slot and a few options:
> >
> > - Monday, Wednesday and Thursday at 1200 UTC (Lifeless and ygbo miss)
>
> 1200 UTC is perfect for me.
> The doodle was proposing 1200 UTC to 1400 UTC, and within that two-hour
> window I cannot be sure to be there, but if it's 1200 "on the spot"
> I can for sure :-)
> Since I couldn't specify this on the doodle I didn't select this slot. A
> one-hour window would have allowed more precision, but I understand
> your concern that the doodle would have been too long to scroll.
>
> --
> Yves-Gwenaël Bourhis
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] which core team members are diving into - http://status.openstack.org/elastic-recheck/#1373513

2014-11-25 Thread Vishvananda Ishaya

On Nov 25, 2014, at 7:29 AM, Matt Riedemann  wrote:

> 
> 
> On 11/25/2014 9:03 AM, Matt Riedemann wrote:
>> 
>> 
>> On 11/25/2014 8:11 AM, Sean Dague wrote:
>>> There is currently a review stream coming into Tempest to add Cinder v2
>>> tests in addition to the Cinder v1 tests. At the same time the currently
>>> biggest race fail in the gate related to the projects is
>>> http://status.openstack.org/elastic-recheck/#1373513 - which is cinder
>>> related.
>>> 
>>> I believe these 2 facts are coupled. The number of volume tests we have
>>> in tempest is somewhat small, and as such the likelihood of them running
>>> simultaneously is also small. However the fact that as the # of tests
>>> with volumes goes up we are getting more of these race fails typically
>>> means that what's actually happening is 2 vol ops that aren't safe to
>>> run at the same time, are.
>>> 
>>> This remains critical - https://bugs.launchpad.net/cinder/+bug/1373513 -
>>> with no assignee.
>>> 
> >>> So we really need dedicated diving on this (last bug update with any
>>> code was a month ago), otherwise we need to stop adding these tests to
>>> Tempest, and honestly start skipping the volume tests if we can't have a
>>> repeatable success.
>>> 
>>>-Sean
>>> 
>> 
>> I just put up an e-r query for a newly opened bug
>> https://bugs.launchpad.net/cinder/+bug/1396186 this morning, it looks
>> similar to bug 1373513 but without the blocked task error in syslog.
>> 
>> There is a three minute gap between when the volume is being deleted in
>> c-vol logs and when we see the volume uuid logged again, at which point
>> tempest has already timed out waiting for the delete to complete.
>> 
>> We should at least get some patches to add diagnostic logging in these
>> delete flows (or periodic tasks that use the same locks/low-level i/o
>> bound commands?) to try and pinpoint these failures.
>> 
>> I think I'm going to propose a skip patch for test_volume_boot_pattern
>> since that just seems to be a never ending cause of pain until these
>> root issues get fixed.
>> 
> 
> I marked 1396186 as a duplicate of 1373513 since the e-r query for 1373513 
> had an OR message which was the same as 1396186.
> 
> I went ahead and proposed a skip for test_volume_boot_pattern due to bug 
> 1373513 [1] until people get on top of debugging it.
> 
> I added some notes to bug 1396186, the 3 minute hang seems to be due to a vgs 
> call taking ~1 minute and an lvs call taking ~2 minutes.
> 
> I'm not sure if those are hit in the volume delete flow or in some periodic 
> task, but if there are multiple concurrent worker processes that could be 
> hitting those commands at the same time can we look at off-loading one of 
> them to a separate thread or something?

Do we set up devstack to not zero volumes on delete 
(CINDER_SECURE_DELETE=False) ? If not, the dd process could be hanging the 
system due to io load. This would get significantly worse with multiple deletes 
occurring simultaneously.

Vish



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of Python 2.6 CI Testing

2014-11-25 Thread Andreas Jaeger
Just a reminder, the deprecation of Python 2.6 testing is planned
for the 30th of November,

Andreas

On 10/24/2014 02:31 AM, Clark Boylan wrote:
> Hello,
> 
> At the Atlanta summit there was a session on removing python2.6
> testing/support from the OpenStack Kilo release [0]. The Infra team is
> working on enacting this change in the near future.
> 
> The way that this will work is python26 jobs will be removed from
> running on master and feature branches of projects that have
> stable/icehouse and/or stable/juno branches. The python26 jobs will
> still continue to run against the stable branches. Any project that is a
> library consumed by stable releases but does not have stable branches
> will have python26 run against that project's master branch. This is
> necessary to ensure we don't break backward compatibility with stable
> releases.
> 
> This essentially boils down to: no python26 jobs against server project
> master branches, but python26 jobs continue to run against stable
> branches. Python-*client and oslo projects[1] will continue to have
> python26 jobs run against their master branches. All other projects will
> have python26 jobs completely removed (including stackforge).
> 
> If you are a project slated to have python26 removed and would prefer to
> continue testing python26 that is doable, but we ask that you propose a
> change atop the removal change [2] that adds python26 back to your
> project. This way it is clear through git history and review that this
> is a desired state. Also, this serves as a warning to the future where
> we will drop all python26 jobs when stable/juno is no longer supported.
> At that point we will stop building slaves capable of running python26
> jobs.
> 
> Rough timeline for making these changes is early next week for OpenStack
> projects. Then at the end of November (November 30th) we will make the 
> changes to stackforge. This should give us plenty of time to work out 
> which stackforge projects wish to continue testing python26.
> 
> [0] https://etherpad.openstack.org/p/juno-cross-project-future-of-python
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-October/048999.html
> [2] https://review.openstack.org/129434
> 
> Let me or the Infra team know if you have any questions,
> Clark
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-25 Thread Monty Taylor
On 11/25/2014 03:40 PM, Richard Jones wrote:
> On Wed Nov 26 2014 at 3:36:27 AM Thomas Goirand  wrote:
> 
>> On 11/21/2014 08:31 PM, Donald Stufft wrote:
>>>
 On Nov 21, 2014, at 3:59 AM, Thomas Goirand  wrote:

>
> I'm not sure I understand the meaning behind this question. "bower
> install angular" downloads a bower package called "angular".

 Isn't there a simple URL that I may use with wget? I don't really
 want to use bower directly, I just would like to have a look at the
 content of the bower package.
>>>
>>> You can’t. Bower doesn’t have “traditional” packages where you take a
>>> directory and archive it using tar/zip/whatever and then upload it to
>>> some repo. Bower has a registry which maps names to git URLs and then
>>> the bower CLI looks up that mapping, fetches the git repository and then
>>> uses that as the input to the “look at metadata and do stuff with files”
>>> part of the package manager instead of the output of an un-unarchival
>>> command.
>>
>> Then this makes Bower a very bad candidate to "debianize" stuff. We'll
>> have a moving target with a constantly changing git from upstream,
>> meaning that we'll have all sorts of surprises in the gate.
>>
> 
> It's no more constantly moving than any other project. Bower versions are
> tied to git tags. In fact, since debian etc. usually go to the repository
> rather than use release tarballs, bower *improves* things by requiring the
> tags, making it easier for you to isolate the version in the repository
> that you need, whereas otherwise people just have to remember to tag and
> often don't :)
> 
> 
> 
>> Frankly, this Bower thing scares me, and I don't really understand why
>> we're not continuing to use XStatic stuff, which was really convenient
>> and has been proven to work during the Juno cycle.
>>
> 
> We're doing this so we don't have the additional burden of creating and
> maintaining the xstatic packages.

Also, it's _very_ standard javascript tooling. It's what javascript devs
use. So sooner or later there's going to need to be a proper story for
it in Debian if Debian wants to continue to be able to provide value
around applications written in javascript. Might as well be the
trailblazers, no?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-25 Thread Richard Jones
On Wed Nov 26 2014 at 3:36:27 AM Thomas Goirand  wrote:

> On 11/21/2014 08:31 PM, Donald Stufft wrote:
> >
> >> On Nov 21, 2014, at 3:59 AM, Thomas Goirand  wrote:
> >>
> >>>
> >>> I'm not sure I understand the meaning behind this question. "bower
> >>> install angular" downloads a bower package called "angular".
> >>
> >> Isn't there a simple URL that I may use with wget? I don't really
> >> want to use bower directly, I just would like to have a look at the
> >> content of the bower package.
> >
> > You can’t. Bower doesn’t have “traditional” packages where you take a
> > directory and archive it using tar/zip/whatever and then upload it to
> > some repo. Bower has a registry which maps names to git URLs and then
> > the bower CLI looks up that mapping, fetches the git repository and then
> > uses that as the input to the “look at metadata and do stuff with files”
> > part of the package manager instead of the output of an un-unarchival
> > command.
>
> Then this makes Bower a very bad candidate to "debianize" stuff. We'll
> have a moving target with a constantly changing git from upstream,
> meaning that we'll have all sorts of surprises in the gate.
>

It's no more constantly moving than any other project. Bower versions are
tied to git tags. In fact, since debian etc. usually go to the repository
rather than use release tarballs, bower *improves* things by requiring the
tags, making it easier for you to isolate the version in the repository
that you need, whereas otherwise people just have to remember to tag and
often don't :)



> Frankly, this Bower thing scares me, and I don't really understand why
> we're not continuing to use XStatic stuff, which was really convenient
> and has been proven to work during the Juno cycle.
>

We're doing this so we don't have the additional burden of creating and
maintaining the xstatic packages.


  Richard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Sean Dague
On 11/25/2014 03:33 PM, Steve Baker wrote:
> On 26/11/14 05:05, Matthew Treinish wrote:
>> On Tue, Nov 25, 2014 at 10:28:52AM -0500, Sean Dague wrote:
>>> So as I was waiting for other tests to return, I started looking through
>>> our existing test lists.
>>>
>>> gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
>>> no longer convinced that it does anything useful (and just burns test
>>> nodes).
>>>
>>> The entire output of the job is currently as follows:
>>>
>>> 2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
>>> tools/pretty_tox.sh
>>> (?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
>>> --concurrency=4
>>> 2014-11-25 14:43:21.313 | {1}
>>> tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
>>>
>>> ... SKIPPED: Skipped until Bug: 1374175 is resolved.
>>> 2014-11-25 14:47:36.271 | {0}
>>> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
>>>
>>> [0.169736s] ... ok
>>> 2014-11-25 14:47:36.271 | {0}
>>> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
>>>
>>> [0.000508s] ... ok
>>> 2014-11-25 14:47:36.291 | {0}
>>> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
>>>
>>> [0.019679s] ... ok
>>> 2014-11-25 14:47:36.313 | {0}
>>> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
>>>
>>> [0.020597s] ... ok
>>> 2014-11-25 14:47:36.564 | {0}
>>> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
>>>
>>> [0.250788s] ... ok
>>> 2014-11-25 14:47:36.580 | {0}
>>> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
>>>
>>> [0.015410s] ... ok
>>> 2014-11-25 14:47:44.113 |
>>> 2014-11-25 14:47:44.113 | ==
>>> 2014-11-25 14:47:44.113 | Totals
>>> 2014-11-25 14:47:44.113 | ==
>>> 2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
>>> 2014-11-25 14:47:44.114 |  - Passed: 6
>>> 2014-11-25 14:47:44.114 |  - Skipped: 1
>>> 2014-11-25 14:47:44.115 |  - Failed: 0
>>> 2014-11-25 14:47:44.115 |
>>> 2014-11-25 14:47:44.116 | ==
>>> 2014-11-25 14:47:44.116 | Worker Balance
>>> 2014-11-25 14:47:44.116 | ==
>>> 2014-11-25 14:47:44.117 |  - Worker 0 (6 tests) => 0:00:00.480677s
>>> 2014-11-25 14:47:44.117 |  - Worker 1 (1 tests) => 0:00:00.001455s
>>>
>>> So we are running about 1s worth of work, no longer in parallel (as
>>> there aren't enough classes to even do parallel runs).
>>>
>>> Given the emergence of the heat functional job, and the fact that this
>>> is really not testing anything any more, I'd like to propose we just
>>> remove it entirely at this stage and get the test nodes back.
>>>
>>
> +1, I think it's time to go. NeutronResourcesTestJSON needs to be ported
> to a functional test, but I don't think killing heat-slow needs to wait
> for that to happen.

Awesome. The change is proposed here -
https://review.openstack.org/#/c/137123/

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Steve Baker

On 26/11/14 05:05, Matthew Treinish wrote:

On Tue, Nov 25, 2014 at 10:28:52AM -0500, Sean Dague wrote:

So as I was waiting for other tests to return, I started looking through
our existing test lists.

gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
no longer convinced that it does anything useful (and just burns test
nodes).

The entire output of the job is currently as follows:

2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
tools/pretty_tox.sh
(?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
--concurrency=4
2014-11-25 14:43:21.313 | {1}
tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
... SKIPPED: Skipped until Bug: 1374175 is resolved.
2014-11-25 14:47:36.271 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
[0.169736s] ... ok
2014-11-25 14:47:36.271 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
[0.000508s] ... ok
2014-11-25 14:47:36.291 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
[0.019679s] ... ok
2014-11-25 14:47:36.313 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
[0.020597s] ... ok
2014-11-25 14:47:36.564 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
[0.250788s] ... ok
2014-11-25 14:47:36.580 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
[0.015410s] ... ok
2014-11-25 14:47:44.113 |
2014-11-25 14:47:44.113 | ==
2014-11-25 14:47:44.113 | Totals
2014-11-25 14:47:44.113 | ==
2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
2014-11-25 14:47:44.114 |  - Passed: 6
2014-11-25 14:47:44.114 |  - Skipped: 1
2014-11-25 14:47:44.115 |  - Failed: 0
2014-11-25 14:47:44.115 |
2014-11-25 14:47:44.116 | ==
2014-11-25 14:47:44.116 | Worker Balance
2014-11-25 14:47:44.116 | ==
2014-11-25 14:47:44.117 |  - Worker 0 (6 tests) => 0:00:00.480677s
2014-11-25 14:47:44.117 |  - Worker 1 (1 tests) => 0:00:00.001455s

So we are running about 1s worth of work, no longer in parallel (as
there aren't enough classes to even do parallel runs).

Given the emergence of the heat functional job, and the fact that this
is really not testing anything any more, I'd like to propose we just
remove it entirely at this stage and get the test nodes back.



+1, I think it's time to go. NeutronResourcesTestJSON needs to be ported 
to a functional test, but I don't think killing heat-slow needs to wait 
for that to happen.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] Proposal for new Glance core reviewers

2014-11-25 Thread Arnaud Legendre
+1 for both! Congrats :)


On Nov 25, 2014, at 12:16 PM, Nikhil Komawar  wrote:

Hi all,

Please consider this email as a nomination for Erno and Alex (CC) for adding 
them to the list of Glance core reviewers. Over the last cycle, both of them 
have been doing good work with reviews, participating in the project 
discussions as well as taking initiatives to creatively improve the project. 
Their insights into project internals and its future directions have been 
valuable too.

Please let me know if anyone has concerns with this change. If none are 
brought up, I will make this membership change official in about a week.

Thanks for your consideration and the hard work, Erno and Alex!

Cheers!
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread Sean Dague
On 11/25/2014 02:46 PM, John Dickinson wrote:
> This is great!
> 
> Sean, I agree with your analysis.
> 
> gate-swift-pep8 (yes)
> gate-swift-docs (yes)
> gate-swift-python27 (yes)
> gate-swift-tox-func (yes)
> check-swift-dsvm-functional (yes)
> check-tempest-dsvm-full(to further ensure glance/heat/cinder checking)
> check-grenade-dsvm  (I can go either way on this one, I won't fight for 
> or against it)

Ok, great - https://review.openstack.org/#/c/137184/ should implement
that analysis if I got the patch right.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [all] Proposal for new Glance core reviewers

2014-11-25 Thread Nikhil Komawar
Hi all,

Please consider this email as a nomination for Erno and Alex (CC) for adding 
them to the list of Glance core reviewers. Over the last cycle, both of them 
have been doing good work with reviews, participating in the project 
discussions as well as taking initiatives to creatively improve the project. 
Their insights into project internals and its future directions have been 
valuable too.

Please let me know if anyone has concerns with this change. If none are 
brought up, I will make this membership change official in about a week.

Thanks for your consideration and the hard work, Erno and Alex!

Cheers!
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-25 Thread Jay S. Bryant

Monty,

I agree that upgrade is not a significant concern right now if the 
existing driver is not working.


Drew,

I am having trouble following where you guys are currently at with this 
work.  I would like to help get you guys up and going during Kilo.


I am concerned that maybe there is confusion about the 
blueprints/patches that we are all talking about here.  I see this 
Blueprint that was accepted for Juno and appears to have an associated 
patch merged:  [1]  I also see this Blueprint that doesn't appear to be 
started yet: [2]  So, can you help me understand what it is you are 
hoping to get in for Kilo?


I know that you have been concerned about CI.  For new drivers we are 
allowing some grace period to get things working.  Once we get the 
confusion over blueprints worked out and have some code to start 
reviewing we can continue to discuss that issue.


Look forward to hearing back from you!
Jay


[1] 
https://blueprints.launchpad.net/cinder/+spec/oracle-zfssa-cinder-driver
[2] 
https://blueprints.launchpad.net/cinder/+spec/oracle-zfssa-nfs-cinder-driver


On 11/24/2014 11:53 AM, Monty Taylor wrote:

On 11/24/2014 10:14 AM, Drew Fisher wrote:


On 11/17/14 10:27 PM, Duncan Thomas wrote:

Is the new driver drop-in compatible with the old one? IF not, can
existing systems be upgraded to the new driver via some manual steps, or
is it basically a completely new driver with similar functionality?

Possibly none of my business - but if the current driver is actually just
flat broken, then upgrading from it to the new Solaris ZFS driver seems
unlikely to be possible, simply because the "from" case is broken.


The driver in san/solaris.py focuses entirely on iSCSI.  I don't think
existing systems can be upgraded manually but I've never really tried.
We started with a clean slate for Solaris 11 and Cinder and added local
ZFS support for single-system and demo rigs, along with fibre channel
and iSCSI drivers.

The driver is publically viewable here:

https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/solaris/zfs.py

Please note that this driver is based on Havana.  We know it's old and
we're working to get it updated to Juno right now.  I can try to work
with my team to get a blueprint filed and start working on getting it
integrated into trunk.

-Drew

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread John Dickinson
This is great!

Sean, I agree with your analysis.

gate-swift-pep8 (yes)
gate-swift-docs (yes)
gate-swift-python27 (yes)
gate-swift-tox-func (yes)
check-swift-dsvm-functional (yes)
check-tempest-dsvm-full(to further ensure glance/heat/cinder checking)
check-grenade-dsvm  (I can go either way on this one, I won't fight for or 
against it)



--John





> On Nov 25, 2014, at 7:03 AM, Sean Dague  wrote:
> 
> As we are trying to do smart disaggregation of tests in the gate, I
> think it's important to figure out which test configurations seem to be
> actually helping, and which aren't. As the swift team has long had a
> functional test job, this seems like a good place to start. (Also the
> field deploy / upgrade story on Swift is probably one of the best of any
> OpenStack project, so removing friction is probably in order.)
> 
> gate-swift-pep8   SUCCESS in 1m 16s
> gate-swift-docs   SUCCESS in 1m 48s
> gate-swift-python27   SUCCESS in 3m 24s
> check-tempest-dsvm-full   SUCCESS in 56m 51s
> check-tempest-dsvm-postgres-full  SUCCESS in 54m 53s
> check-tempest-dsvm-neutron-full   SUCCESS in 1h 06m 09s
> check-tempest-dsvm-neutron-heat-slow  SUCCESS in 31m 18s
> check-grenade-dsvmSUCCESS in 39m 33s
> gate-tempest-dsvm-large-ops   SUCCESS in 29m 34s
> gate-tempest-dsvm-neutron-large-ops   SUCCESS in 22m 11s
> gate-swift-tox-func   SUCCESS in 2m 50s (non-voting)
> check-swift-dsvm-functional   SUCCESS in 17m 12s
> check-devstack-dsvm-cells SUCCESS in 15m 18s
> 
> 
> I think in looking at that it's obvious that:
> * check-devstack-dsvm-cells
> * check-tempest-dsvm-postgres-full
> * gate-tempest-dsvm-large-ops
> * gate-tempest-dsvm-neutron-large-ops
> * check-tempest-dsvm-neutron-full
> 
> Provide nothing new to swift; the access patterns on the glance => swift
> interaction aren't impacted in any of those, nor are the heat / swift
> resource tests or volumes / swift backup tests.
> 
> check-tempest-dsvm-neutron-heat-slow  doesn't touch swift either (it's
> actually remarkably sparse of any content).
> 
> Which kind of leaves us with 1 full stack run, and the grenade job. Have
> those caught real bugs? Does there remain value in them? Have other
> teams that rely on swift found those to block regressions?
> 
> Let's figure out what's helpful, and what's not, and purge out all the
> non helpful stuff.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV][Telco] Meeting Reminder - Wednesday @ 2200 UTC

2014-11-25 Thread Steve Gordon
Hi all,

Just a reminder - there is a Telco working group meeting at 2200 UTC on 
Wednesday in #openstack-meeting. If you have items you wish to add to the 
agenda please add them to the etherpad:

https://etherpad.openstack.org/p/nfv-meeting-agenda

Note that I fully expect attendance will take a hit as it's the day before the 
US Thanksgiving holiday, but I think it's worth trying to continue moving forward 
with those who are still available.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] New meeting room and time!

2014-11-25 Thread Devananda van der Veen
As discussed on this list previously [0] and agreed to in the last meeting
[1], our weekly IRC meeting time is changing to better accommodate the many
active contributors we have who are not well served by the current meeting
time.

The new meeting time will alternate between 1700 UTC on Mondays and 0500
UTC on Tuesdays. The next meeting, on Monday Dec 1st, will be at 1700 UTC

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20141201T1700


*** NOTE ***
This change of time also requires us to change rooms; we will now be
meeting in the *#openstack-meeting-3* room. I'll remind folks by announcing
this in our main channel beforehand as well.
*** NOTE ***

All relevant wiki pages have been updated, but the change has not
propagated to the iCal feed just yet.

Regards,
Devananda

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050838.html

[1]
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-24-19.01.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues regarding Jenkins

2014-11-25 Thread Jeremy Stanley
On 2014-11-25 07:33:14 -0500 (-0500), Sean Dague wrote:
> Right now all icehouse jobs are showing up as NOT_REGISTERED. It's
> cross project and current.

This should be solved now for any dsvm icehouse jobs started since
15:00 UTC today.

Contributing factors:

1. We restarted Zuul and it doesn't keep a history of Gearman worker
registrations over restart, relying on the Jenkins masters to
reregister what jobs they know about.

2. The Gearman plug-in for Jenkins only registers jobs for a given
worker type when a worker of that type is added to that master.

3. Due to extremely low relative demand for devstack-precise workers
(as they only run for stable/icehouse branch and backward-compat
jobs), Nodepool was only trying to build one at a time.

4. For some reason the latest devstack-precise snapshot in one of
our providers' regions was causing nova boot to consistently fail
and discard the instance as being in an error state.

5. Due to the way the Nodepool scheduler is designed, when it is
retrying to boot one single node of a given label it retries it in
the same provider and region: in this case repeatedly trying to
build one single devstack-precise node from a snapshot which was not
working in that location.

Resolution was ultimately achieved by briefly shuffling the provider
order for the devstack-precise label in Nodepool's configuration,
causing it to retry in a different region where the boot ultimately
succeeded and got attached to a Jenkins master which then registered
all jobs requiring devstack-precise nodes into Zuul.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Priorities for Kilo?

2014-11-25 Thread Collins, Sean
Hi,

Based on today's meeting and previous discussions with Kyle, I've tried
to put down on paper as much as I can about what I'd like to get done this
development cycle. I used some of the links that we swapped today in the
IRC meeting, but I know right now that I didn't get everything we talked
about.

Please review, and give me links to bugs in launchpad and specs in
Gerrit.

I am trying to avoid linking directly to patches that fix bugs or
implement a spec, to keep things simple.

https://review.openstack.org/137169

Finally, please let me know if you agree with what I've marked highest
priority. The tl;dr is that we need to fix gaps in Neutron Routers and
DVR support since everyone is going to want tenant networks with IPv6
functionality and also DVR support.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-25 Thread Matthias Runge
On 17/11/14 14:43, Yves-Gwenaël Bourhis wrote:
> Le 17/11/2014 14:19, Matthias Runge a écrit :
> 
>> There is already horizon on pypi[1]
>>
>> IMHO this will lead only to more confusion.
>>
>> Matthias
>>
>>
>> [1] https://pypi.python.org/pypi/horizon/2012.2
> 
> Well the current "horizon" on Pypi is "The OpenStack Dashboard" +
> horizon(_lib) included
> 
> If the future "horizon" on pypi is "openstack_dashboard" alone, it would
> still pull "horizon_lib" as a dependency, so it would not break the
> existing setup.

That would break expectations.

When installing a software package named python-foo, I would expect a
python package named foo in there, not a package named bar.

Dependencies between packages are best defined in distribution packages.
They have been solving dependencies for ages.

For Fedora, there is a policy for package naming. I'd assume the same to
be true for Debian.


Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][neutron] regression in Juno tree

2014-11-25 Thread Ihar Hrachyshka

Hi all,

we've introduced a regression when backporting the following patch [1],
which shows up as the metadata proxy process not being spawned for
networks with an IPv6 subnet. There is no official Juno release that
includes the patch yet, though. The issue is to be fixed by [2] and its
backport to Juno.

With the Juno release scheduled for some time at the start of December, I
would like to kindly ask Neutron developers to pay attention to and
merge the regression fix [2] into master so that we're able to
consider it for backporting.

If it's not merged in the near future, we're left with the only option
to avoid the regression: reverting [3] the failing patch before the
next Juno release. This is not a good thing since in that case we'll
release another point version of Juno with non-provider DHCPv6
completely broken.

Thanks,
/Ihar

[1]: https://review.openstack.org/#/c/123671/
[2]: https://review.openstack.org/#/c/134432/
[3]: https://review.openstack.org/#/c/137008/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-25 Thread Eoghan Glynn


> I think Doug's suggestion of keeping the schema files in-tree and pushing
> them to a well-known tarball maker in a build step is best so far.
> 
> It's still a little clunky, but not as clunky as having to sync two repos.

Yes, I tend to agree.

So just to confirm that my understanding lines up:

* the tarball would be used by the consumer-side for unit tests and
  limited functional tests (where the emitter service is not running)

* the tarball would also be used by the consumer-side in DSVM-based
  CI and in full production deployments (where the emitter service is
  running)

* the tarballs will be versioned, with old versions remaining accessible
  (as per the current practice with released source on tarballs.openstack.org)

* the consumer side will know which version of each schema it expects to
  support, and will download the appropriate tarball at runtime

* the emitter side will signal the schema version that it's actually using,
  via, say, a well-known field in the notification body

* the consumer will reject notification payloads whose major version doesn't
  match what it's expecting to support (a minimal sketch of this check
  follows below)
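
To make that concrete, here is the kind of check I have in mind on the
consumer side. This is only a sketch: the field name 'schema_version' and
the "major.minor" string format are assumptions for illustration, not
something we've agreed on yet.

import logging

LOG = logging.getLogger(__name__)

SUPPORTED_MAJOR = 2  # hypothetical major version this consumer supports


def check_schema_version(payload):
    """Return True if the payload's schema major version is supported."""
    version = str(payload.get('schema_version', '0.0'))  # e.g. "2.1"
    major = int(version.split('.')[0])
    if major != SUPPORTED_MAJOR:
        LOG.warning('Dropping notification with schema version %s '
                    '(expected major version %d)', version, SUPPORTED_MAJOR)
        return False
    return True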
 
> >[snip]
> >> >> d. Should we make separate distro packages? Install to a well known
> >> >> location all the time? This would work for local dev and integration
> >> >> testing and we could fall back on B and C for production distribution.
> >> >> Of
> >> >> course, this will likely require people to add a new distro repo. Is
> >> >> that
> >> >> a concern?
> >>
> >> >Quick clarification ... when you say "distro packages", do you mean
> >> >Linux-distro-specific package formats such as .rpm or .deb?
> >>
> >> Yep.
> 
> >So that would indeed work, but just to sound a small note of caution
> >that keeping an oft-changing package (assumption #5) up-to-date for
> >fedora20/21 & epel6/7, or precise/trusty, would involve some work.
> 
> >I don't know much about the Debian/Ubuntu packaging pipeline, in
> >particular how it could be automated.
> 
> >But in my small experience of Fedora/EL packaging, the process is
> >somewhat resistant to many fine-grained updates.
> 
> Ah, good to know. So, if we go with the tarball approach, we should be able
> to avoid this. And it allows the service to easily serve up the schema
> using its existing REST API.

I'm not clear on how serving up the schema via an existing API would
avoid the co-ordination issue identified in the original option (b)?

Would that API just be a very simple proxy in front of the well-known
source of these tarballs?

For production deployments, is it likely that some shops will not want
to require access to an external site such as tarballs.openstack.org?

So in that case, would we require that they mirror, or just assume that
downstream packagers will bundle the appropriate schema versions with
the packages for the emitter and consumer services?

Cheers,
Eoghan

> Should we proceed under the assumption we'll push to a tarball in a
> post-build step? It could change if we find it's too messy.
> 
> -S
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-25 Thread Mike Scherbakov
Regarding the licensing, it should not be an issue because we provide all
source code (if not as git repos, then as source RPMs/DEBs).

On Tue, Nov 25, 2014 at 7:34 PM, Bartosz Kupidura 
wrote:

> Hello Vladimir,
> I agree. But in most cases, zabbix-server would be moved off a failed node
> by pacemaker.
> Moreover, some clients don't want to "waste" 3 additional servers only for
> monitoring.
>
> As I said, this is only the first drop of Zabbix HA. Later we can allow users
> to deploy zabbix-server
> not only on controllers, but also on dedicated nodes.
>
> Best Regards,
> Bartosz Kupidura
>
>
> > Message written by Vladimir Kuklin  on
> 25 Nov 2014, at 15:47:
> >
> > Bartosz,
> >
> > It is obviously possible to install zabbix on the master nodes and put
> > it under pacemaker control. But it seems very strange to me to monitor
> > something with software located on the nodes that you are monitoring.
> >
> > On Tue, Nov 25, 2014 at 4:21 PM, Bartosz Kupidura <
> bkupid...@mirantis.com> wrote:
> > Hello All,
> >
> > I'm working on a Zabbix implementation which includes HA support.
> >
> > Zabbix server should be deployed on all controllers in HA mode.
> >
> > Currently we have a dedicated role 'zabbix-server', which does not support
> > more than one zabbix-server. Instead of this we will move the monitoring
> > solution (Zabbix) to be an additional component.
> >
> > We will introduce an additional role 'zabbix-monitoring', assigned to all
> > servers with the lowest priority in the serializer (run puppet after all
> > other roles) when Zabbix is enabled.
> > The 'zabbix-monitoring' role will be assigned automatically.
> >
> > When the Zabbix component is enabled, we will install zabbix-server on all
> > controllers in active-backup mode (pacemaker+haproxy).
> >
> > In the next stage, we can allow users to deploy zabbix-server on a dedicated
> > node OR on controllers for performance reasons.
> > But for now we should force zabbix-server to be deployed on controllers.
> >
> > The BP is in an initial phase, but the code is ready and working with Fuel 5.1.
> > Now I'm checking if it works with master.
> >
> > Any comments are welcome!
> >
> > BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha
> >
> > Best Regards,
> > Bartosz Kupidura
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 45bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuk...@mirantis.com
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-11-25 Thread Nikolay Makhotkin
Hi, folks!

I'm trying to create a controller that should receive one HTTP content-type in
the request but return a different content-type in the response. I tried to use
the pecan and WSME decorators on the controller's methods.

I just want to receive plain text on the server and send a JSON-encoded string
back from the server (the request is text/plain and the response is
application/json).

I tried:

class MyResource(resource.Resource):
    id = wtypes.text
    name = wtypes.text


class MyResourcesController(rest.RestController):
    @wsexpose(MyResource, body=wtypes.text)
    def put(self, text):
        return MyResource(id='1', name=text)


According to the WSME documentation
(http://wsme.readthedocs.org/en/latest/integrate.html#module-wsmeext.pecan),
the signature of the wsexpose method is as follows:

  wsexpose(return_type, *arg_types, **options)

Ok, I just set MyResource as return_type and body to text type. But it
didn't work as expected:
http://paste.openstack.org/show/138268/

I looked at the pecan documentation at
https://media.readthedocs.org/pdf/pecan/latest/pecan.pdf but I didn't find
anything that fits my case.

Also, I tried:

class MyResource(resource.Resource):
    id = wtypes.text
    name = wtypes.text


class MyResourcesController(rest.RestController):
    @expose('json')
    @expose(content_type="text/plain")
    def put(self):
        text = pecan.request.text
        return MyResource(id='1', name=text).to_dict()

It worked only when the request and the response have the same content-type
(application/json<->application/json, text/plain<->text/plain).

I also tried a lot of combinations of parameters, but it still doesn't work.

Does anyone know what the problem is?
How can it be done using WSME and/or Pecan?
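
A related direction, sketched below purely as an untested idea (the decorator
arguments and request attributes used here are assumptions, not verified
behaviour): skip WSME for this method, read the raw body directly, and let
pecan render the JSON response.

import pecan
from pecan import expose, rest


class MyResourcesController(rest.RestController):

    # Accept the raw body whatever its content-type and always answer with
    # JSON; reading pecan.request.body directly sidesteps WSME's
    # content-type negotiation entirely.
    @expose('json', content_type='application/json')
    def put(self):
        text = pecan.request.body
        if isinstance(text, bytes):
            text = text.decode('utf-8')
        return {'id': '1', 'name': text}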

Sorry if I misunderstand something.
-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-25 Thread Solly Ross
> I can't comment on other projects, but Nova definitely needs the soft
> delete in the main nova database. Perhaps not for every table, but
> there is definitely code in the code base which uses it right now.
> Search for read_deleted=True if you're curious.

Just to save people a bit of time, it's actually `read_deleted='yes'`
or `read_deleted="yes"` for many cases.

Just to give people a quick overview:

A cursory glance (no pun intended) seems to indicate that quite a few of
these are reading potentially deleted flavors.  For this case, it makes
sense to keep things in one table (as we do).

There are also quite a few that seem to be making sure deleted "things"
are properly cleaned up.  In this case, 'deleted' acts as a "CLEANUP"
state, so it makes just as much sense to keep the deleted rows in a
separate table.

> 
> For this case in particular, the concern is that operators might need
> to find where an instance was running once it is deleted to be able to
> diagnose issues reported by users. I think that's a valid use case of
> this particular data.
> 
> >> This is a new database, so its our big chance to get this right. So,
> >> ideas welcome...
> >>
> >> Some initial proposals:
> >>
> >> - we do what we do in the current nova database -- we have a deleted
> >> column, and we set it to true when we delete the instance.
> >>
> >> - we have shadow tables and we move delete rows to a shadow table.
> >
> >
> > Both approaches are viable, but as the soft-delete column is widespread, it
> > would be thorny for this new app to use some totally different scheme,
> > unless the notion is that all schemes should move to the audit table
> > approach (which I wouldn’t mind, but it would be a big job).FTR, the
> > audit table approach is usually what I prefer for greenfield development,
> > if all that’s needed is forensic capabilities at the database inspection
> > level, and not as much active GUI-based “deleted” flags.   That is, if you
> > really don’t need to query the history tables very often except when
> > debugging an issue offline.  The reason it's preferable is because those
> > rows are still “deleted” from your main table, and they don’t get in the
> > way of querying.   But if you need to refer to these history rows in
> > context of the application, that means you need to get them mapped in such
> > a way that they behave like the primary rows, which overall is a more
> > difficult approach than just using the soft delete column.

I think it really does come down to how you intend to use the soft-delete
functionality in Cells.  If you are just using it to debug or audit, then I
think the right way to go would be either the audit table (which can
potentially store more lifecycle data, but could end up taking up more space)
or a separate shadow table (which takes up less space).

If you are going to use the soft delete for application functionality, I would
consider differentiating between "deleted" and "we still have things left to
clean up", since this seems to be mixing two different requirements into one.

> >
> > That said, I have a lot of plans to send improvements down the way of the
> > existing approach of “soft delete column” into projects, from the querying
> > POV, so that criteria to filter out soft delete can be done in a much more
> > robust fashion (see
> > https://bitbucket.org/zzzeek/sqlalchemy/issue/3225/query-heuristic-inspector-event).
> > But this is still more complex and less performant than if the rows are
> > just gone totally, off in a history table somewhere (again, provided you
> > really don’t need to look at those history rows in an application context,
> > otherwise it gets all complicated again).
> 
> Interesting. I hadn't seen consistency between the two databases as
> trumping doing this less horribly, but it sounds like its more of a
> thing that I thought.
> 
> Thanks,
> Michael
> 
> --
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Splitting up the assignment component

2014-11-25 Thread Morgan Fainberg

> On Nov 25, 2014, at 4:25 AM, Henry Nash  wrote:
> 
> Hi
> 
> As most of you know, we have approved a spec 
> (https://review.openstack.org/#/c/129397/) to split the assignments component 
> up into two pieces, and the code (divided up into a series of patches) is 
> currently in review (https://review.openstack.org/#/c/130954/). While most 
> aspects of the split appear to have agreement, there is one aspect that has 
> been questioned - and that is whether "roles" should be in the "resource" 
> component, as proposed?
> 
> First, let's recap the goals here:
> 
> 1) The current "assignment" component is really what's left after we split 
> off users/groups into "identity" some releases ago.  "Assignments" is pretty 
> complicated and messy - and we need a better structure (as an example, just 
> doing the split allowed me to find 5 bugs in our current implementation - and 
> I wouldn't be surprised if there are more).  This is made more urgent by the 
> fact that we are about to land some big new changes in this area, e.g. 
> hierarchical projects and a re-implementation (for performance) of 
> list_role_assignments.
> 
> 2) While Keystone may have started off as a service where we "store all the 
> users, credentials & permissions" needed to access other OpenStack services, 
> we more and more see Keystone as a wrapper for existing corporate 
> authentication and authorisation mechanisms - and it's job is really to 
> provided a common mechanism and language for these to be consumed across 
> OpenStack services.  To do this well, we must make sure that the keystone 
> components are split along sensible lines...so that they can individually 
> wrap these corporate directories/services.  The classic case of this was are 
> previous split off of Identity...and this new proposal takes this a step 
> further.
> 
> 3) As more and more broad OpenStack powered clouds are created, we must make 
> sure that our Keystone implementation is as flexible as possible. We already 
> plan to support new abstractions for things like cloud providers enabling 
> resellers to do business within one OpenStack cloud (by providing 
> hierarchical multi-tenancy, domain-roles etc.). Our current assignments model 
> is a) slightly unusual in that all roles are global and every assignment has 
> actor-target-role, and b) cannot easily be substituted for alternate 
> assignment models (even for the whole of an OpenStack installation, let alone 
> on a domain by domain basis)
> 
> The proposal for splitting the assignment component is trying to provide a 
> better basis for the above.  It separates the storing and CRUD operations of 
> domain/projects/roles into a "resource" component, while leaving the pure 
> assignment model in "assignment".  The rationale for this is that the 
> resource component defines the entities that the rest of the OpenStack 
> services (and their policy engines) understand...while assignment is a pure 
> mapper between these entities. The details of these mappings are never 
> exposed outside of Keystone, except for the generation of contents of a 
> token.  This would allow new assignment models to be introduced that, as long 
> as they support the api to "list what role_ids are mapped to project_id X for 
> user_id Y", then the rest of OpenStack would never know anything had changed.
> 
> So to (finally) get to the point of this post...where should the role 
> definitions live? The proposal is that these live in "resource", because:
> 
> a) They represent the definition of how Keystone and the other services 
> define permission - and this should be independent of whatever assignment 
> model we choose
> b) We may well choose (in the future) to morph what we currently mean as a 
> role...into what they really are, which is a capability.  Once we have 
> domain-specifc roles (groups), which map to "global roles", then we may well 
> end up, more often than not, with a role representing a single API 
> capability.  "Roles" might even be "created" simply by a service registering 
> its capabilities with Keystone.  Again, this should be independent of any 
> assignment model.

I think there is some dissonance in storing the roles in resource. The
assignment backend is what controls how the actor is connected with the scope.
If you’re keeping the role within the resource backend, you now need yet
another way of correlating the data from the resource side (2 elements) to the
identity (user/group) side. The assignment backend is connecting the role to
the actor and scope (project/domain), so it should in some way be responsible
for knowing what the role definition is. In the case of a crazy ABAC system, a
role would need to be a mapping of attributes on the actor object, which now
needs *another* layer of connection (similar to what we have now) to the roles
in resource. Logically the roles seem tightly coupled to the assignment
backend, not a loosely coupled-and-theoretically-can-be
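
To make the interface split under discussion concrete, a rough sketch (the
method names here are hypothetical, not Keystone's actual driver API) might
look like this:

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class AssignmentDriver(object):
    """Pure actor <-> scope <-> role mapper, per the proposal."""

    @abc.abstractmethod
    def list_role_ids_for_user_on_project(self, user_id, project_id):
        """Return the role_ids mapped to project_id for user_id."""


@six.add_metaclass(abc.ABCMeta)
class ResourceDriver(object):
    """CRUD for domains/projects and, per the proposal, role definitions."""

    @abc.abstractmethod
    def get_role(self, role_id):
        """Return the role definition the rest of OpenStack understands."""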

Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-25 Thread Matthew Treinish
On Mon, Nov 24, 2014 at 10:49:27AM +0100, Angelo Matarazzo wrote:
> Sorry for my previous message with wrong subject
> 
> Hi all,
> By reading the tempest documentation page [1] a user can run tempest tests
> by using either testr, run_tempest.sh, or tox.
> 
> What is the best practice?
> run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it is my
> preferred way, currently.
> Any thought?

So the options are there for different reasons and fit different purposes. The
run_tempest.sh script exists mostly for legacy reasons as some people prefer to
use it, and it predates the usage of tox in tempest. It also has some
advantages, like being able to run without a venv, and it provides some other
options.

Tox is what we use for gating, and we keep most of the job definitions for
gating in the tox.ini file. If you're trying to reproduce a gate run locally,
tox is what is recommended. Personally I use it to run everything, just because
I often mix unit tests and tempest runs and I like having separate venvs for
both created on demand.

Calling testr directly is just what all of these tools are doing under the
covers, and it'll always be an option. 

One thing we're looking to do this cycle is to add a single entry point for
running tempest which will hopefully clear up this confusion, and make the
interface for interacting with tempest a bit nicer. When this work is done, the
run_tempest.sh script will most likely disappear and tox will probably just be
used for gating job definitions and just call the new entry-point instead of
testr directly.

> 
> BR,
> Angelo
> 
> [1] http://docs.openstack.org/developer/tempest/overview.html#quickstart
> 

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-25 Thread Thomas Goirand
On 11/21/2014 08:31 PM, Donald Stufft wrote:
> 
>> On Nov 21, 2014, at 3:59 AM, Thomas Goirand  wrote:
>>
>>>
>>> I'm not sure I understand the meaning behind this question. "bower
>>> install angular" downloads a bower package called "angular".
>>
>> Isn't there is a simple URL that I may use with wget? I don't really
>> want to use bower directly, I just would like to have a look to the
>> content of the bower package.
> 
> You can’t. Bower doesn’t have “traditional” packages where you take a
> directory and archive it using tar/zip/whatever and then upload it to
> some repo. Bower has a registry which maps names to git URLs and then
> the bower CLI looks up that mapping, fetches the git repository and then
> uses that as the input to the “look at metadata and do stuff with files”
> part of the package manager instead of the output of an un-unarchival
> command.

Then this makes Bower a very bad candidate for "debianizing" stuff. We'll
have a moving target with a constantly changing git from upstream,
meaning that we'll have all sorts of surprises in the gate.

Frankly, this Bower thing scares me, and I don't really understand why
we're not continuing to use XStatic stuff, which was really convenient
and has been proven to work during the Juno cycle.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-25 Thread Bartosz Kupidura
Hello Vladimir,
I agree. But in most cases, zabbix-server would be moved from a failed node by
pacemaker.
Moreover, some clients don't want to „waste” 3 additional servers only for
monitoring.

As I said, this is only the first drop of zabbix HA. Later we can allow users
to deploy zabbix-server not only on controllers, but also on dedicated nodes.

Best Regards,
Bartosz Kupidura 


> Wiadomość napisana przez Vladimir Kuklin  w dniu 25 lis 
> 2014, o godz. 15:47:
> 
> Bartosz, 
> 
> It is obviously possible to install zabbix on the master nodes and put it 
> under pacemaker control. But it seems very strange to me to monitor 
> something with software located on the nodes that you are monitoring. 
> 
> On Tue, Nov 25, 2014 at 4:21 PM, Bartosz Kupidura  
> wrote:
> Hello All,
> 
> I'm working on a Zabbix implementation which includes HA support.
>
> Zabbix server should be deployed on all controllers in HA mode.
>
> Currently we have a dedicated role 'zabbix-server', which does not support
> more than one zabbix-server. Instead of this we will move the monitoring
> solution (zabbix) to an additional component.
>
> We will introduce an additional role 'zabbix-monitoring', assigned to all
> servers with the lowest priority in the serializer (puppet runs after every
> other role) when zabbix is enabled.
> The 'zabbix-monitoring' role will be assigned automatically.
>
> When the zabbix component is enabled, we will install zabbix-server on all
> controllers in active-backup mode (pacemaker+haproxy).
>
> In the next stage, we can allow users to deploy zabbix-server on a dedicated
> node OR on controllers for performance reasons.
> But for now we should force zabbix-server to be deployed on controllers.
>
> The BP is in its initial phase, but the code is ready and working with
> Fuel 5.1. Now I'm checking if it works with master.
> 
> Any comments are welcome!
> 
> BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha
> 
> Best Regards,
> Bartosz Kupidura
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-25 Thread Thomas Goirand
On 11/22/2014 04:39 AM, Fox, Kevin M wrote:
> Simply having a git repository does not imply that its source.
> 
> In fact, if its considered compiled (minified), I'm thinking the debian rules 
> would prevent sourcing from it?
> 
> Thanks,
> Kevin

Indeed, we don't package minified sources.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algoritm

2014-11-25 Thread Dmitry Pyzhov
Thank you all for your feedback. Request postponed to the next release. We
will compare available solutions.

On Mon, Nov 24, 2014 at 2:36 PM, Vladimir Kuklin 
wrote:

> guys, there is already pxz utility in ubuntu repos. let's test it
>
> On Mon, Nov 24, 2014 at 2:32 PM, Bartłomiej Piotrowski <
> bpiotrow...@mirantis.com> wrote:
>
>> On 24 Nov 2014, at 12:25, Matthew Mosesohn 
>> wrote:
>> > I did this exercise over many iterations during Docker container
>> > packing and found that as long as the data is under 1gb, it's going to
>> > compress really well with xz. Over 1gb and lrzip looks more attractive
>> > (but only on high memory systems). In reality, we're looking at log
>> > footprints from OpenStack environments on the order of 500mb to 2gb.
>> >
>> > xz is very slow on single-core systems with 1.5gb of memory, but it's
>> > quite a bit faster if you run it on a more powerful system. I've found
>> > level 4 compression to be the best compromise that works well enough
>> > that it's still far better than gzip. If increasing compression time
>> > by 3-5x is too much for you guys, why not just go to bzip? You'll
>> > still improve compression but be able to cut back on time.
>> >
>> > Best Regards,
>> > Matthew Mosesohn
>>
>> Alpha release of xz supports multithreading via -T (or --threads)
>> parameter.
>> We could also use pbzip2 instead of regular bzip to cut some time on
>> multi-core
>> systems.
>>
>> Regards,
>> Bartłomiej Piotrowski
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] No PROTOCOL_SSLv3 in Python 2.7 in Sid since 3 days

2014-11-25 Thread Thomas Goirand
On 11/23/2014 06:01 AM, Jeremy Stanley wrote:
> but we shouldn't
> backport a patch which suddenly breaks someone's cloud because they
> made a conscious decision to configure it to use SSLv3 for RPC
> communication.

I'm having a hard time figuring out in which case it would make sense to
do so. However...

On 11/23/2014 06:01 AM, Jeremy Stanley wrote:
> My point is that suggesting there's a vulnerability here without
> looking at how the code is used is sort of like shouting "fire" in a
> crowded theater.

I agree with that point, but also with your point about anticipation of
future issues. I think it would be a good idea to strengthen things, in
advance of possible downgrade attacks that may occur if we keep support
for SSLv3.

On 11/24/2014 01:09 AM, Doug Hellmann wrote:
> The only place things will be breaking is on the version of Python
> shipped by Debian where the constant used to set up the validation
> logic is no longer present in the SSL library. Let’s start by making
> the smallest change we can to fix that problem, and then move on.

Yes please! And I need this backported to Icehouse ASAP (as that's what we're
shipping in Jessie). At this point, I prefer to let others who are
better than me at these sorts of (sensitive) patches do the work.
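
For context, the kind of minimal guard being discussed is presumably something
along these lines (a sketch only, not the actual submitted patch):

import ssl

_SSL_PROTOCOLS = {
    'tlsv1': ssl.PROTOCOL_TLSv1,
    'sslv23': ssl.PROTOCOL_SSLv23,
}

# Only advertise SSLv3 when the interpreter's ssl module still provides it;
# newer Debian builds have dropped the PROTOCOL_SSLv3 constant entirely.
try:
    _SSL_PROTOCOLS['sslv3'] = ssl.PROTOCOL_SSLv3
except AttributeError:
    pass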

On 11/24/2014 01:09 AM, Doug Hellmann wrote:
> That’s an easy patch for us to land, and I hope Thomas will update the
> patch he has already submitted based on feedback on that review.

Could someone take over my patch? :)
I'm quite busy doing other things, and it isn't my role to work on such
things directly. I often send a patch here and there when I see fit, but
here, I don't think I'm the best person to do so.

>> I don't really mind if we continue to allow it, but at least we
>> should move fast to have oslo-incubator fixed. I will need to do
>> something fast for Icehouse in Sid/Jessie, as we're in freeze mode.
>> Best would be to have the issue resolved before the next point
>> release (currently set for May 14 2015).
>
> Sure. See my comments on your current review for what I think we need
> to do to handle the backwards-compatibility issues more clearly.
>
> Doug

Hum... git review -d  ? :)

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] why do image properties control per-instance settings?

2014-11-25 Thread Solly Ross
Hi Blairo,

Like you mention, I think one of the major reasons for having properties
set at the image level is that certain properties depend on an OS
which supports the features involved.  In this regard, being able to say
that an image "supports" a particular feature, and then being able to set
it on a per-instance basis, would be useful.

On the other hand, having properties set at the image level makes sense,
since you could configure the OS to support or require the feature in
question, and then attach that feature to the image so that it was always
there for instances booted with that image.  Having properties set at the
image level also aligns with the general idea of not specifying too many
special things about a VM at boot time -- you specify a flavor, an image,
and an SSH key pair (or use the default).  Instead of having to say
"what's the appropriate boot setup for the XYZ app", you just say "use the
XYZ image" and you're all set (although this could be an argument for using
Heat templates as well).

Best Regards,
Solly Ross

- Original Message -
> From: "Blair Bethwaite" 
> To: openstack-dev@lists.openstack.org
> Sent: Tuesday, November 25, 2014 5:15:37 AM
> Subject: [openstack-dev] [nova][glance] why do image properties control   
> per-instance settings?
> 
> Hi all,
> 
> I've just been doing some user consultation and pondering a case for
> use of the Qemu Guest Agent in order to get quiesced backups.
> 
> In doing so I found myself wondering why on earth I need to set an
> image property in Glance (hw_qemu_guest_agent) to toggle such a thing
> for any particular instance, it doesn't make any sense that what
> should be an instance boot parameter (or possibly even runtime
> dynamic) is controlled through the cloud's image registry. There is no
> shortage of similar metadata properties, probably everything prefixed
> "hw_" for a start. It looks like this has even come up on reviews
> before, e.g.
> https://review.openstack.org/#/c/43513/
> The last comment from DanielB is:
> "For setting per-instance, rather than doing something that only works
> for passing kernel command line, it would be desirable to have a way
> to pass in arbitrary key,value attribute pairs to the 'boot' API call,
> because I can see this being useful for things beyond just the kernel
> command line."
> 
> In some cases I imagine image properties could be useful to indicate
> that the image has a certain *capability*, which could be used as a
> step to verify it can support some requested feature (e.g., qemu-ga)
> for any particular instance launch.
> 
> Is there similar work underway? Would it make sense to build such
> functionality via the existing instance metadata API?
> 
> --
> Cheers,
> ~Blairo
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sec] [ossg] Introducing Bandit code security analyzer

2014-11-25 Thread McPeak, Travis
Hi all - 
 

Bandit is a Python AST-based static analyzer from the OpenStack
Security Group.  Unlike other static code analysis tools in the
OpenStack ecosystem such as hacking and flake8, Bandit was
purpose-built to help find security vulnerabilities.

Bandit has a wiki page at:
https://wiki.openstack.org/wiki/Security/Projects/Bandit

and is available on Stackforge, at:
https://git.openstack.org/stackforge/bandit.git.
 

Instructions for installation and usage are in the README
(http://git.openstack.org/cgit/stackforge/bandit/tree/README.md).
 


How does it work? 

Bandit parses Python source into AST nodes and then executes a node
visitor function for each node.  Bandit tests are declared based on the
type of AST node they inspect.  For each such node that is encountered,
Bandit calls all of the tests that inspect that node type.  For example,
any time a function call appears in the source, Bandit runs all of the
tests that inspect function calls.
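
As a toy illustration of the AST-visitor idea (this is not Bandit's actual
plugin API, just the stdlib ast module flagging a call to eval()):

import ast


class UnsafeCallVisitor(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    # called for every function-call node in the parsed source
    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id == 'eval':
            self.findings.append(('use of eval()', node.lineno))
        self.generic_visit(node)


visitor = UnsafeCallVisitor()
visitor.visit(ast.parse("x = eval(user_input)"))
print(visitor.findings)  # [('use of eval()', 1)]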
 

What type of issues can it find?

Bandit currently has tests to find hardcoded SQL query strings, files
created with bad permissions, crypto requests without certificate
validation, insecure temp file usage, the use of unsafe functions, and
much more.
 

What's next?

We're working on getting Bandit integrated in gate tests in a few
projects.  If you are a contributor on a project and want to get
started using Bandit please get in touch with us.  We're also expanding
Bandit's functionality with new tests and capabilities.  Stay tuned.

How can I get involved?

We always love to hear feedback, run it against your project and let us
know what you find!  Also we're looking for new ideas for features and
tests.  If you'd like to get involved writing tests for Bandit or
improving Bandit itself, please drop us a line in #openstack-security
on Freenode IRC or send something on the mailing list.
 

Thank you,
 - The Bandit Team


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Mehdi Abaakouk


> Mmmm... I don't think it's that clear (re: an application issue).  I
> mean, yes - the application is doing the os.fork() at the 'wrong'
> time, but where is this made clear in the oslo.messaging API
> documentation?
> I think this is the real issue here:  what is the "official" guidance
> for using os.fork() and its interaction with oslo libraries?
>
> In the case of oslo.messaging, I can't find any mention of os.fork()
> in the API docs (I may have missed it - please correct me if so).
> That would imply - at least to me - that there are _no_ restrictions on
> using os.fork() together with oslo.messaging.


Yes, I agree we should add a note on that in oslo.messaging (and perhaps 
in oslo.db too).


Also, the os.fork() is done by the service.ProcessLauncher of oslo-incubator,
and it's not (yet) documented. But once the oslo.service library is released,
it will be.



> But in the case of qpid, that is definitely _not_ the case.
>
> The legacy qpid driver - impl_qpid - imports a 3rd party library, the
> qpid.messaging API.   This library uses threading.Thread internally;
> we (consumers of this library) have no control over how that thread is
> managed.  So for impl_qpid, os.fork()'ing after the driver is loaded
> can't be guaranteed to work.   In fact, I'd say os.fork() and
> impl_qpid will not work - full stop.


Yes, I have tried it, and I have caught what happens, and I can confirm
that too now, unfortunately :( And this can occur with any driver whose
3rd party library doesn't work when we use os.fork().

For the amqp1 driver case, I think this is the same thing; it seems
to do lazy creation of the connection too.




> We have more flexibility here, since the driver directly controls when
> the thread is spawned.  But the very fact that the thread is used
> places a restriction on how oslo.messaging and os.fork() can be used
> together, which isn't made clear in the documentation for the library.
>
> I'm not familiar with the rabbit driver - I've seen some patch for
> heartbeating in rabbit introduce threading, so there may also be an
> implication there as well.


Yes, we need to check that.

> > Personally, I don't like this API, because the behavior difference
> > between '__init__' and 'start' is too implicit.



> That's true, but I'd say that the problem of implicitness re:
> os.fork() needs to be clarified at the library level as well.



I agree.

I will write the documentation patch for oslo.messaging.

Cheers,

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-25 Thread Mathieu Rohon
On Tue, Nov 25, 2014 at 9:59 AM, henry hly  wrote:
> Hi Armando,
>
> Indeed, an agent-less solution like an external controller is very
> interesting, and in certain cases it has advantages over an agent
> solution, e.g. when software installation is prohibited on the Compute Node.
>
> However, the Neutron Agent has its irreplaceable benefits: support for
> multiple backends like SRIOV, macvtap, vhost-user snabbswitch, and hybrid
> vswitch solutions like NIC offloading or VDP based TOR offloading... All
> these backends cannot be easily controlled by a remote OF controller.

Moreover, this solution is tested by the gate (at least ovs), and is
simpler for small deployments

> Also, considering DVR (maybe with upcoming FW for W-E) and Security
> Groups, a W-E traffic control capability gap still exists between the linux
> stack and the OF flowtable, whether in features like advanced netfilter, or
> in performance for webserver apps which incur huge concurrent sessions
> (because of the basic OF upcall model, the more complex the flow rules, the
> less megaflow aggregation might take effect).
>
> Thanks to L2pop and DVR, many customers now give the feedback that
> Neutron has made great progress, and already meets nearly all their
> L2/L3 connectivity W-E control needs (the next big expectation is
> N-S traffic directing like a dynamic routing agent), without forcing
> them to learn and integrate another huge platform like an external SDN
> controller.

+100. Note that Dynamic routing is in progress.

> No intention to argue about agent vs. agentless, or built-in reference vs.
> external controller; Openstack is an open community. But I just want
> to say that modularized agent re-factoring does make a lot of sense,
> while forcing customers to piggyback an extra SDN controller on their
> Cloud solution is not the only future direction of Neutron.
>
> Best Regard
> Henry
>
> On Wed, Nov 19, 2014 at 5:45 AM, Armando M.  wrote:
>> Hi Carl,
>>
>> Thanks for kicking this off. I am also willing to help as a core reviewer of
>> blueprints and code
>> submissions only.
>>
>> As for the ML2 agent, we all know that for historic reasons Neutron has
>> grown to be not only a networking orchestration project but also a reference
>> implementation that is resembling what some might call an SDN controller.
>>
>> I think that most of the Neutron folks realize that we need to move away
>> from this model and rely on a strong open source SDN alternative; for these
>> reasons, I don't think that pursuing an ML2 agent would be a path we should
>> go down to anymore. It's time and energy that could be more effectively
>> spent elsewhere, especially on the refactoring. Now if the refactoring
>> effort ends up being labelled ML2 Agent, I would be okay with it, but my gut
>> feeling tells me that any attempt at consolidating code to embrace more than
>> one agent logic at once is gonna derail the major goal of paying down the so
>> called agent debt.
>>
>> My 2c
>> Armando
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread Matthew Treinish
On Tue, Nov 25, 2014 at 11:01:04AM -0500, Sean Dague wrote:
> On 11/25/2014 10:54 AM, Matthew Treinish wrote:
> > On Tue, Nov 25, 2014 at 10:03:41AM -0500, Sean Dague wrote:
> >> As we are trying to do smart disaggregation of tests in the gate, I
> >> think it's important to figure out which test configurations seem to be
> >> actually helping, and which aren't. As the swift team has long had a
> >> functional test job, this seems like a good place to start. (Also the
> >> field deploy / upgrade story on Swift is probably one of the best of any
> >> OpenStack project, so removing friction is probably in order.)
> >>
> >> gate-swift-pep8SUCCESS in 1m 16s
> >> gate-swift-docsSUCCESS in 1m 48s
> >> gate-swift-python27SUCCESS in 3m 24s
> >> check-tempest-dsvm-fullSUCCESS in 56m 51s
> >> check-tempest-dsvm-postgres-full   SUCCESS in 54m 53s
> >> check-tempest-dsvm-neutron-fullSUCCESS in 1h 06m 09s
> >> check-tempest-dsvm-neutron-heat-slow   SUCCESS in 31m 18s
> >> check-grenade-dsvm SUCCESS in 39m 33s
> >> gate-tempest-dsvm-large-opsSUCCESS in 29m 34s
> >> gate-tempest-dsvm-neutron-large-opsSUCCESS in 22m 11s
> >> gate-swift-tox-funcSUCCESS in 2m 50s (non-voting)
> >> check-swift-dsvm-functionalSUCCESS in 17m 12s
> >> check-devstack-dsvm-cells  SUCCESS in 15m 18s
> >>
> >>
> >> I think in looking at that it's obvious that:
> >> * check-devstack-dsvm-cells
> >> * check-tempest-dsvm-postgres-full
> >> * gate-tempest-dsvm-large-ops
> >> * gate-tempest-dsvm-neutron-large-ops
> >> * check-tempest-dsvm-neutron-full
> >>
> >> Provide nothing new to swift, the access patterns on the glance => swift
> >> interaction aren't impacted on any of those, neither is the heat / swift
> >> resource tests or volumes / swift backup tests.
> > 
> > So I agree with all of this and think removing the jobs is fine, except the
> > postgres job and the neutron jobs do test the glance->swift access pattern, 
> > and
> > do run the heat swift tests. But, it does raise the bigger question which 
> > was
> > brought up in Darmstadt and again at summit on having a single gating
> > configuration. Maybe we should just switch to doing that now and finally 
> > drop
> > the postgres job completely.
> 
> I'm not saying it doesn't test it. I'm saying it doesn't test it in any
> way which is different from the mysql job.
> 
> "Provide nothing new to swift"...

Sure, but I think it raises the bigger question around the postgres jobs in
general, because I think if we're having this conversation around swift it
really applies to all of the postgres jobs. I think it's probably time that we
just move postgres jobs to periodic/experimental everywhere and be done with it.

> 
> >> check-tempest-dsvm-neutron-heat-slow   doesn't touch swift either (it's
> >> actually remarkably sparse of any content).
> > 
> > I think we probably should be removing this job from everywhere, we've 
> > slowly
> > been whittling away the job because it doesn't seem to be capable of being 
> > run
> > reliably. This also doesn't run any swift resource tests; in its current
> > form it runs 6 neutron resource tests and that's it.
> 
> There is a separate thread on that as well.
> 
> >> Which kind of leaves us with 1 full stack run, and the grenade job. Have
> >> those caught real bugs? Does there remain value in them? Have other
> >> teams that rely on swift found those to block regressions?
> > 
> > So I think we'll need these at a minimum for the time being. Given our
> > current project structure (and governance requirements), having jobs that
> > test that things work together is, I feel, important. I know we've caught
> > issues with the glance->swift layer with these jobs in the past, as well
> > as other bugs and bugs in swift itself (although they're very infrequent
> > compared to other projects).
> > 
> >>
> >> Let's figure out what's helpful, and what's not, and purge out all the
> >> non helpful stuff.
> >>
 
-Matt Treinish
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova host-update gives error 'Virt driver does not implement host disabled status'

2014-11-25 Thread Vineet Menon
Hi,

I'm trying to reproduce the bug https://bugs.launchpad.net/nova/+bug/1259535.

While trying to issue the command, nova host-update --status disable
machine1, an error is thrown saying,

ERROR (HTTPNotImplemented): Virt driver does not implement host disabled
> status. (HTTP 501) (Request-ID: req-1f58feda-93af-42e0-b7b6-bcdd095f7d8c)
>


What is this error about?

Regards,
Vineet Menon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Matthew Treinish
On Tue, Nov 25, 2014 at 10:28:52AM -0500, Sean Dague wrote:
> So as I was waiting for other tests to return, I started looking through
> our existing test lists.
> 
> gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
> no longer convinced that it does anything useful (and just burns test
> nodes).
> 
> The entire output of the job is currently as follows:
> 
> 2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
> tools/pretty_tox.sh
> (?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
> --concurrency=4
> 2014-11-25 14:43:21.313 | {1}
> tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
> ... SKIPPED: Skipped until Bug: 1374175 is resolved.
> 2014-11-25 14:47:36.271 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
> [0.169736s] ... ok
> 2014-11-25 14:47:36.271 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
> [0.000508s] ... ok
> 2014-11-25 14:47:36.291 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
> [0.019679s] ... ok
> 2014-11-25 14:47:36.313 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
> [0.020597s] ... ok
> 2014-11-25 14:47:36.564 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
> [0.250788s] ... ok
> 2014-11-25 14:47:36.580 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
> [0.015410s] ... ok
> 2014-11-25 14:47:44.113 |
> 2014-11-25 14:47:44.113 | ==
> 2014-11-25 14:47:44.113 | Totals
> 2014-11-25 14:47:44.113 | ==
> 2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
> 2014-11-25 14:47:44.114 |  - Passed: 6
> 2014-11-25 14:47:44.114 |  - Skipped: 1
> 2014-11-25 14:47:44.115 |  - Failed: 0
> 2014-11-25 14:47:44.115 |
> 2014-11-25 14:47:44.116 | ==
> 2014-11-25 14:47:44.116 | Worker Balance
> 2014-11-25 14:47:44.116 | ==
> 2014-11-25 14:47:44.117 |  - Worker 0 (6 tests) => 0:00:00.480677s
> 2014-11-25 14:47:44.117 |  - Worker 1 (1 tests) => 0:00:00.001455s
> 
> So we are running about 1s worth of work, no longer in parallel (as
> there aren't enough classes to even do parallel runs).
> 
> Given the emergence of the heat functional job, and the fact that this
> is really not testing anything any more, I'd like to propose we just
> remove it entirely at this stage and get the test nodes back.
> 

I agree and think we should delete this job. I was actually planning to bring
this topic up myself. These jobs have always had arguably questionable
value, and have slowly been decreasing what they run as bugs pop up. I
think that since heat has its functional testing job now and is willing to own
more of the testing, there is no reason to keep this job around anymore.

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread Sean Dague
On 11/25/2014 10:54 AM, Matthew Treinish wrote:
> On Tue, Nov 25, 2014 at 10:03:41AM -0500, Sean Dague wrote:
>> As we are trying to do smart disaggregation of tests in the gate, I
>> think it's important to figure out which test configurations seem to be
>> actually helping, and which aren't. As the swift team has long had a
>> functional test job, this seems like a good place to start. (Also the
>> field deploy / upgrade story on Swift is probably one of the best of any
>> OpenStack project, so removing friction is probably in order.)
>>
>> gate-swift-pep8  SUCCESS in 1m 16s
>> gate-swift-docs  SUCCESS in 1m 48s
>> gate-swift-python27  SUCCESS in 3m 24s
>> check-tempest-dsvm-full  SUCCESS in 56m 51s
>> check-tempest-dsvm-postgres-full SUCCESS in 54m 53s
>> check-tempest-dsvm-neutron-full  SUCCESS in 1h 06m 09s
>> check-tempest-dsvm-neutron-heat-slow SUCCESS in 31m 18s
>> check-grenade-dsvm   SUCCESS in 39m 33s
>> gate-tempest-dsvm-large-ops  SUCCESS in 29m 34s
>> gate-tempest-dsvm-neutron-large-ops  SUCCESS in 22m 11s
>> gate-swift-tox-func  SUCCESS in 2m 50s (non-voting)
>> check-swift-dsvm-functional  SUCCESS in 17m 12s
>> check-devstack-dsvm-cellsSUCCESS in 15m 18s
>>
>>
>> I think in looking at that it's obvious that:
>> * check-devstack-dsvm-cells
>> * check-tempest-dsvm-postgres-full
>> * gate-tempest-dsvm-large-ops
>> * gate-tempest-dsvm-neutron-large-ops
>> * check-tempest-dsvm-neutron-full
>>
>> Provide nothing new to swift, the access patterns on the glance => swift
>> interaction aren't impacted on any of those, neither is the heat / swift
>> resource tests or volumes / swift backup tests.
> 
> So I agree with all of this and think removing the jobs is fine, except the
> postgres job and the neutron jobs do test the glance->swift access pattern, 
> and
> do run the heat swift tests. But, it does raise the bigger question which was
> brought up in Darmstadt and again at summit on having a single gating
> configuration. Maybe we should just switch to doing that now and finally drop
> the postgres job completely.

I'm not saying it doesn't test it. I'm saying it doesn't test it in any
way which is different from the mysql job.

"Provide nothing new to swift"...

>> check-tempest-dsvm-neutron-heat-slow doesn't touch swift either (it's
>> actually remarkably sparse of any content).
> 
> I think we probably should be removing this job from everywhere, we've slowly
> been whittling away the job because it doesn't seem to be capable of being run
> reliably. This also doesn't run any swift resource tests; in its current form
> it runs 6 neutron resource tests and that's it.

There is a separate thread on that as well.

>> Which kind of leaves us with 1 full stack run, and the grenade job. Have
>> those caught real bugs? Does there remain value in them? Have other
>> teams that rely on swift found those to block regressions?
> 
> So I think we'll need these at a minimum for the time being. Given our
> current project structure (and governance requirements), having jobs that
> test that things work together is, I feel, important. I know we've caught
> issues with the glance->swift layer with these jobs in the past, as well as
> other bugs and bugs in swift itself (although they're very infrequent
> compared to other projects).
> 
>>
>> Let's figure out what's helpful, and what's not, and purge out all the
>> non helpful stuff.
>>
> 
> -Matt Treinish
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Kyle Mestery
On Tue, Nov 25, 2014 at 9:28 AM, Sean Dague  wrote:
> So as I was waiting for other tests to return, I started looking through
> our existing test lists.
>
> gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
> no longer convinced that it does anything useful (and just burns test
> nodes).
>
> The entire output of the job is currently as follows:
>
> 2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
> tools/pretty_tox.sh
> (?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
> --concurrency=4
> 2014-11-25 14:43:21.313 | {1}
> tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
> ... SKIPPED: Skipped until Bug: 1374175 is resolved.
> 2014-11-25 14:47:36.271 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
> [0.169736s] ... ok
> 2014-11-25 14:47:36.271 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
> [0.000508s] ... ok
> 2014-11-25 14:47:36.291 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
> [0.019679s] ... ok
> 2014-11-25 14:47:36.313 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
> [0.020597s] ... ok
> 2014-11-25 14:47:36.564 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
> [0.250788s] ... ok
> 2014-11-25 14:47:36.580 | {0}
> tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
> [0.015410s] ... ok
> 2014-11-25 14:47:44.113 |
> 2014-11-25 14:47:44.113 | ==
> 2014-11-25 14:47:44.113 | Totals
> 2014-11-25 14:47:44.113 | ==
> 2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
> 2014-11-25 14:47:44.114 |  - Passed: 6
> 2014-11-25 14:47:44.114 |  - Skipped: 1
> 2014-11-25 14:47:44.115 |  - Failed: 0
> 2014-11-25 14:47:44.115 |
> 2014-11-25 14:47:44.116 | ==
> 2014-11-25 14:47:44.116 | Worker Balance
> 2014-11-25 14:47:44.116 | ==
> 2014-11-25 14:47:44.117 |  - Worker 0 (6 tests) => 0:00:00.480677s
> 2014-11-25 14:47:44.117 |  - Worker 1 (1 tests) => 0:00:00.001455s
>
> So we are running about 1s worth of work, no longer in parallel (as
> there aren't enough classes to even do parallel runs).
>
> Given the emergence of the heat functional job, and the fact that this
> is really not testing anything any more, I'd like to propose we just
> remove it entirely at this stage and get the test nodes back.
>
+1 from me.

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Mehdi Abaakouk



Hi,


> I will take a look to the neutron code, if I found a rpc usage
> before the os.fork().


With a basic devstack that uses the ml2 plugin, I found at least two rpc
usages before the os.fork() in neutron:


https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/l3_router_plugin.py#L63
https://github.com/openstack/neutron/blob/master/neutron/services/loadbalancer/drivers/common/agent_driver_base.py#L341


It looks like all service plugins run in the parent process, and only
the core plugin (ml2) runs in the children ... rpc_workers applies only to the
core plugin rpc code... not to other plugins.


If I correctly understand, neutron-server does:
- load and start the wsgi server (in an eventlet thread)
- load the neutron manager, which:
  * loads the core plugin
  * loads the service plugins
  * starts the service plugins (some rpc connections are created)
- fork neutron-server (by using the ProcessLauncher)
- drop the DB connections in the children to ensure oslo.db creates new
  ones (this looks like it releases connections already opened by the parent
  process)
- start the core plugin rpc connections in the children

It seems the wsgi code and service plugins live in each child but do
nothing...


I think ProcessLauncher is designed more to bootstrap many identical
processes with a parent process that does nothing (except child
start/stop).


For oslo.messaging, perhaps we need to document when we can fork, and
when it is too late to fork a process that uses oslo.messaging.
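
As a rough sketch of the ordering constraint (the class shape matches what
oslo-incubator's ProcessLauncher expects, but the names and arguments here are
illustrative assumptions, not the real neutron/oslo code):

import oslo_messaging  # newer releases; older ones used the oslo.messaging namespace


class RpcWorker(object):

    def __init__(self, conf, target):
        # Runs in the parent, before os.fork(): keep it side-effect free.
        self.conf = conf
        self.target = target
        self.server = None

    def start(self):
        # Runs in the child, after os.fork(): only now create the transport,
        # so sockets and driver threads belong to a single process.
        transport = oslo_messaging.get_transport(self.conf)
        self.server = oslo_messaging.get_rpc_server(
            transport, self.target, endpoints=[], executor='eventlet')
        self.server.start()

    def stop(self):
        if self.server is not None:
            self.server.stop()

    def wait(self):
        if self.server is not None:
            self.server.wait()

# The unsafe pattern is moving get_transport()/get_rpc_server() into
# __init__ (or anywhere before ProcessLauncher.launch_service() forks):
# the driver's sockets and threads then end up shared by all children.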



Cheers,

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread Matthew Treinish
On Tue, Nov 25, 2014 at 10:03:41AM -0500, Sean Dague wrote:
> As we are trying to do smart disaggregation of tests in the gate, I
> think it's important to figure out which test configurations seem to be
> actually helping, and which aren't. As the swift team has long had a
> functional test job, this seems like a good place to start. (Also the
> field deploy / upgrade story on Swift is probably one of the best of any
> OpenStack project, so removing friction is probably in order.)
> 
> gate-swift-pep8   SUCCESS in 1m 16s
> gate-swift-docs   SUCCESS in 1m 48s
> gate-swift-python27   SUCCESS in 3m 24s
> check-tempest-dsvm-full   SUCCESS in 56m 51s
> check-tempest-dsvm-postgres-full  SUCCESS in 54m 53s
> check-tempest-dsvm-neutron-full   SUCCESS in 1h 06m 09s
> check-tempest-dsvm-neutron-heat-slow  SUCCESS in 31m 18s
> check-grenade-dsvmSUCCESS in 39m 33s
> gate-tempest-dsvm-large-ops   SUCCESS in 29m 34s
> gate-tempest-dsvm-neutron-large-ops   SUCCESS in 22m 11s
> gate-swift-tox-func   SUCCESS in 2m 50s (non-voting)
> check-swift-dsvm-functional   SUCCESS in 17m 12s
> check-devstack-dsvm-cells SUCCESS in 15m 18s
> 
> 
> I think in looking at that it's obvious that:
> * check-devstack-dsvm-cells
> * check-tempest-dsvm-postgres-full
> * gate-tempest-dsvm-large-ops
> * gate-tempest-dsvm-neutron-large-ops
> * check-tempest-dsvm-neutron-full
> 
> Provide nothing new to swift, the access patterns on the glance => swift
> interaction aren't impacted on any of those, neither is the heat / swift
> resource tests or volumes / swift backup tests.

So I agree with all of this and think removing the jobs is fine, except the
postgres job and the neutron jobs do test the glance->swift access pattern, and
do run the heat swift tests. But, it does raise the bigger question which was
brought up in Darmstadt and again at summit on having a single gating
configuration. Maybe we should just switch to doing that now and finally drop
the postgres job completely.

> 
> check-tempest-dsvm-neutron-heat-slow  doesn't touch swift either (it's
> actually remarkably sparse of any content).

I think we probably should be removing this job from everywhere, we've slowly
been whittling away the job because it doesn't seem to be capable of being run
reliably. This also doesn't run any swift resource tests; in its current form
it runs 6 neutron resource tests and that's it.

> 
> Which kind of leaves us with 1 full stack run, and the grenade job. Have
> those caught real bugs? Does there remain value in them? Have other
> teams that rely on swift found those to block regressions?

So I think we'll need these at a minimum for the time being. Given our current
project structure (and governance requirements), having jobs that test that
things work together is, I feel, important. I know we've caught issues with the
glance->swift layer with these jobs in the past, as well as other bugs and bugs
in swift itself (although they're very infrequent compared to other projects).

> 
> Let's figure out what's helpful, and what's not, and purge out all the
> non helpful stuff.
> 

-Matt Treinish


pgpRu_dsPCbCn.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Ken Giusti
Hi Mehdi

On Tue, Nov 25, 2014 at 5:38 AM, Mehdi Abaakouk  wrote:
>
> Hi,
>
> I think the main issue is the behavior of the API
> of oslo-incubator/openstack/common/service.py, specially:
>
>  * ProcessLauncher.launch_service(MyService())
>
> And then the MyService have this behavior:
>
> class MyService:
>def __init__(self):
># CODE DONE BEFORE os.fork()
>
>def start(self):
># CODE DONE AFTER os.fork()
>
> So if an application created a FD inside MyService.__init__ or before 
> ProcessLauncher.launch_service, it will be shared between
> processes and we got this kind of issues...
>
> For the rabbitmq/qpid driver, the first connection is created when the rpc 
> server is started or when the first rpc call/cast/... is done.
>
> So if the application doesn't do that inside MyService.__init__ or before 
> ProcessLauncher.launch_service everything works as expected.
>
> But if the issue is raised I think this is an application issue (rpc stuff 
> done before the os.fork())
>

Mmmm... I don't think it's that clear (re: an application issue).  I
mean, yes - the application is doing the os.fork() at the 'wrong'
time, but where is this made clear in the oslo.messaging API
documentation?

I think this is the real issue here:  what is the "official" guidance
for using os.fork() and its interaction with oslo libraries?

In the case of oslo.messaging, I can't find any mention of os.fork()
in the API docs (I may have missed it - please correct me if so).
That would imply - at least to me - that there are _no_ restrictions on
using os.fork() together with oslo.messaging.

But in the case of qpid, that is definitely _not_ the case.

The legacy qpid driver - impl_qpid - imports a 3rd party library, the
qpid.messaging API.   This library uses threading.Thread internally, and
we (consumers of this library) have no control over how that thread is
managed.  So for impl_qpid, os.fork()'ing after the driver is loaded
can't be guaranteed to work.   In fact, I'd say os.fork() and
impl_qpid will not work - full stop.
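
To make the failure mode concrete, here is a tiny stand-alone sketch
(purely illustrative, nothing oslo-specific) of what happens when a
connection and its background I/O thread already exist at fork time:

    import os
    import socket
    import threading

    # A socketpair stands in for a broker connection.
    conn, broker_side = socket.socketpair()

    # Background I/O thread that owns the connection -- created *before*
    # the fork, exactly the situation described above.
    io_thread = threading.Thread(target=conn.recv, args=(4096,))
    io_thread.daemon = True
    io_thread.start()

    os.fork()
    # Parent and child now share the same FD for 'conn', so they can steal
    # each other's frames, and only the parent still has the I/O thread:
    # threads do not survive os.fork().
    print("pid %d: fd %d, threads %d"
          % (os.getpid(), conn.fileno(), threading.active_count()))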

> For the amqp1 driver case, I think this is the same things, it seems to do 
> lazy creation of the connection too.
>

We have more flexibility here, since the driver directly controls when
the thread is spawned.  But the very fact that the thread is used
places a restriction on how oslo.messaging and os.fork() can be used
together, which isn't made clear in the documentation for the library.

I'm not familiar with the rabbit driver - I've seen some patch for
heartbeating in rabbit introduce threading, so there may be a similar
implication there as well.


> I will take a look to the neutron code, if I found a rpc usage
> before the os.fork().
>

I've done some tracing of neutron-server's behavior in this case - you
may want to take a look at

 https://bugs.launchpad.net/neutron/+bug/1330199/comments/8

>
> Personally, I don't like this API, because the behavior difference between
> '__init__' and 'start' is too implicit.
>

That's true, but I'd say that the problem of implicitness re:
os.fork() needs to be clarified at the library level as well.
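
To spell out the ordering that does work today, here is a minimal sketch of
the pattern Mehdi describes (the endpoint class and topic names are made up
for illustration; get_transport/Target/get_rpc_server are the usual
oslo.messaging entry points):

    from oslo.config import cfg
    from oslo import messaging

    class PingEndpoint(object):
        def ping(self, ctxt):
            return 'pong'

    class MyService(object):
        def __init__(self):
            # Safe: only cheap, fork-friendly setup here -- no sockets,
            # no threads, no transport yet.
            self.endpoints = [PingEndpoint()]
            self.server = None

        def start(self):
            # Runs in the worker *after* ProcessLauncher has forked, so
            # every child gets its own connection and its own I/O thread.
            transport = messaging.get_transport(cfg.CONF)
            target = messaging.Target(topic='my-topic', server='my-host')
            self.server = messaging.get_rpc_server(
                transport, target, self.endpoints)
            self.server.start()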

thanks,

-K

> Cheers,
>
> ---
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
>
> Le 2014-11-24 20:27, Ken Giusti a écrit :
>
>> Hi all,
>>
>> As far as oslo.messaging is concerned, should it be possible for the
>> main application to safely os.fork() when there is already an active
>> connection to a messaging broker?
>>
>> I ask because I'm hitting what appears to be fork-related issues with
>> the new AMQP 1.0 driver.  I think the same problems have been seen
>> with the older impl_qpid driver as well [0]
>>
>> Both drivers utilize a background threading.Thread that handles all
>> async socket I/O and protocol timers.
>>
>> In the particular case I'm trying to debug, rpc_workers is set to 4 in
>> neutron.conf.  As far as I can tell, this causes neutron.service to
>> os.fork() four workers, but does so after it has created a listener
>> (and therefore a connection to the broker).
>>
>> This results in multiple processes all select()'ing the same set of
>> networks sockets, and stuff breaks :(
>>
>> Even without the background process, wouldn't this use still result in
>> sockets being shared across the parent/child processes?   Seems
>> dangerous.
>>
>> Thoughts?
>>
>> [0] https://bugs.launchpad.net/oslo.messaging/+bug/1330199




-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] licenses

2014-11-25 Thread Steve Gordon
- Original Message -
> From: "Arkady Kanevsky" 
> To: openstack-dev@lists.openstack.org
> 
> What is the license that should be used for specs and for the code?
> While all the code I had seen is under Apache 2 license, many of the specs
> are under CCPL-3 license.
> Is that the guidance?
> Thanks,
> Arkady

The licenses should be in the *-specs trees, e.g.:

http://git.openstack.org/cgit/openstack/nova-specs/tree/LICENSE

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-11-25 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 21/10/14 11:52, Steven Hardy wrote:
> On Mon, Oct 20, 2014 at 03:27:19PM -0700, Joe Gordon wrote:
>> On Mon, Oct 20, 2014 at 11:12 AM, gordon chung 
>> wrote:
>> 
>>> The issue I'm highlighting is that those projects using the
>>> code now
>> have
>>> to update their api-paste.ini files to import from the new
>>> location, presumably while giving some warning to operators
>>> about the impending removal of the old code.
>> This was the issue i ran into when trying to switch projects to 
>> oslo.middleware where i couldn't get jenkins to pass -- grenade
>> tests successfully did their job. we had a discussion on
>> openstack-qa and it was suggested to add a upgrade script to
>> grenade to handle the new reference and document the switch. [1] 
>> if there's any issue with this solution, feel free to let us
>> know.
>> 
>> Going down this route means every deployment that wishes to
>> upgrade now has an extra step, and should be avoided whenever
>> possible. Why not just have a wrapper in project.openstack.common
>> pointing to the new oslo.middleware library. If that is not a
>> viable solution, we should give operators one full cycle where
>> the oslo-incubator version is deprecated and they can migrate to
>> the new copy outside of the upgrade process itself. Since there
>> is no deprecation warning in Juno [0], We can deprecate the
>> oslo-incubator copy in Kilo and remove in L.
> 
> 
> I've proposed a patch with a compatibility shim which may provide
> one way to resolve this:
> 
> https://review.openstack.org/129858

FYI some projects (e.g. neutron or swift) also utilize the catch_errors
filter, so I've requested another shim for the middleware module in
oslo-incubator: https://review.openstack.org/136999
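
For reference, such a shim is essentially just a re-export so that existing
api-paste.ini entries keep resolving while deployments migrate to the new
oslo.middleware location -- something along these lines (illustrative only;
the exact module path and class name should be checked against the real
patch):

    # <project>/openstack/common/middleware/catch_errors.py
    #
    # Compatibility shim: keep the old incubator import path working by
    # delegating to the graduated oslo.middleware implementation.
    from oslo.middleware import catch_errors

    CatchErrorsMiddleware = catch_errors.CatchErrors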

> 
> Steve
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUdKOOAAoJEC5aWaUY1u578K8H/jUb3eLE4bItGfkuzZ9uAes9
7ekva40wgNK3u96DMGbBIKOFxbljvXt5tlIPKjw26cW1G834VeGsuyB+3ql53aGD
Y743L4NOvzRwAnlorh35CbjEXu+RSfCEw5Y7w3K7N4kVfsQ8Zw2NObAq4hk1wKJ7
bc6maJZa6D8nptY7q8bCsLsrXudnpRLsfqbei7NurQawBGGhikaSkB1Vk/8eHuNL
PA8T9m4Ya70zueInhPnNgB7v3YagV0fLgYE+SMQF66HL7Vak3vrmfgD03pI1QtWV
NQkEiccAS2teEf5jijznFC2t4DtJ5vclIfKNWaoMUEz4pg01oE/85eBE+34h22o=
=NkCt
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]Flushing trusts

2014-11-25 Thread Udi Kalifon
Hi Yasuo.

Did you file an RFE for this?

Udi.

- Original Message -
From: "Yasuo Kodera" 
To: "openstack-dev@lists.openstack.org" 
Sent: Friday, November 21, 2014 2:00:42 AM
Subject: [openstack-dev] [Keystone]Flushing trusts

Hi,

We can use "keystone-manage token_flush" to DELETE db-records of expired tokens.

Similarly, expired or deleted trusts should be flushed to avoid wasting DB space,
but I don't know of a way to do so.
Are there any tools or patches for this?

If there are reasons that these records must not be deleted easily, please tell
me.


Regards,

Yasuo Kodera


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Discussion on cm_api for global requirement

2014-11-25 Thread Matthew Farrellee

On 11/24/2014 01:22 PM, Trevor McKay wrote:

Hello all,

   at our last Sahara IRC meeting we started discussing whether or not to add
a global requirement for cm_api.py https://review.openstack.org/#/c/130153/

   One issue (but not the only issue) is that cm_api is not packaged for Fedora,
Centos, or Ubuntu currently. The global requirements README points out that
adding requirements for a new dependency more or less forces the distros to
package the dependency for the next OS release.

   Given that cm_api is needed for a plugin, but not for core Sahara
functionality, should we request that the global requirement be added, or
should we seek to add a global requirement only if/when cm_api is packaged?

   Alternatively, can we support the plugin with additional documentation (ie,
how to install cm_api on the Sahara node)?

   Those present at the meeting agreed that it was probably better to defer a
global requirement until/unless cm_api is packaged to avoid a burden on the
distros.

   Thoughts?

Best,

Trevor

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.html
Logs: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.log.html
https://github.com/openstack/requirements


we should keep the global requirements request, use it to track 
satisfying all the "global requirements" requirements, even if that 
involves including cm_api in relevant distros. additionally, in 
parallel, provide documentation that addresses how to install & use the 
CDH plugin.


i don't think we should officially enable/support the plugin in sahara 
until it can be deployed w/o external/3rd-party steps.


best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting

2014-11-25 Thread Peter Pouliot
Hi All,
Due to the upcoming holidays and lack of quorum I'm going to cancel the meeting 
for this week.

We will resume with the usual Hyper-V meeting next week.

Cheers,

p


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] which core team members are diving into - http://status.openstack.org/elastic-recheck/#1373513

2014-11-25 Thread Matt Riedemann



On 11/25/2014 9:03 AM, Matt Riedemann wrote:



On 11/25/2014 8:11 AM, Sean Dague wrote:

There is currently a review stream coming into Tempest to add Cinder v2
tests in addition to the Cinder v1 tests. At the same time the currently
biggest race fail in the gate related to the projects is
http://status.openstack.org/elastic-recheck/#1373513 - which is cinder
related.

I believe these 2 facts are coupled. The number of volume tests we have
in tempest is somewhat small, and as such the likelihood of them running
simultaneously is also small. However the fact that as the # of tests
with volumes goes up we are getting more of these race fails typically
means that what's actually happening is 2 vol ops that aren't safe to
run at the same time, are.

This remains critical - https://bugs.launchpad.net/cinder/+bug/1373513 -
with no assignee.

So we really need dedicated diving on this (last bug update with any
code was a month ago), otherwise we need to stop adding these tests to
Tempest, and honestly start skipping the volume tests if we can't have a
repeatable success.

-Sean



I just put up an e-r query for a newly opened bug
https://bugs.launchpad.net/cinder/+bug/1396186 this morning, it looks
similar to bug 1373513 but without the blocked task error in syslog.

There is a three minute gap between when the volume is being deleted in
c-vol logs and when we see the volume uuid logged again, at which point
tempest has already timed out waiting for the delete to complete.

We should at least get some patches to add diagnostic logging in these
delete flows (or periodic tasks that use the same locks/low-level i/o
bound commands?) to try and pinpoint these failures.

I think I'm going to propose a skip patch for test_volume_boot_pattern
since that just seems to be a never ending cause of pain until these
root issues get fixed.



I marked 1396186 as a duplicate of 1373513 since the e-r query for 
1373513 had an OR'd message which was the same as the one for 1396186.


I went ahead and proposed a skip for test_volume_boot_pattern due to bug 
1373513 [1] until people get on top of debugging it.


I added some notes to bug 1396186, the 3 minute hang seems to be due to 
a vgs call taking ~1 minute and an lvs call taking ~2 minutes.


I'm not sure if those are hit in the volume delete flow or in some 
periodic task, but if there are multiple concurrent worker processes 
that could be hitting those commands at the same time, can we look at 
off-loading one of them to a separate thread or something?
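
Something along these lines is what I have in mind -- a rough sketch only 
(helper names are invented, this is not cinder code) of pushing the slow 
vgs/lvs queries off the calling thread and serializing them so concurrent 
delete flows and periodic tasks don't pile up on each other:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # A single worker both serializes the LVM metadata queries and keeps
    # them off the thread that is servicing the delete request.
    _lvm_pool = ThreadPoolExecutor(max_workers=1)

    def _run_lvm(cmd):
        return subprocess.check_output(cmd)

    def list_lvs_async(vg_name):
        # Returns a future; the caller can wait with a timeout instead of
        # blocking for the minutes these calls apparently take today.
        return _lvm_pool.submit(
            _run_lvm, ['lvs', '--noheadings', '-o', 'name', vg_name])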


[1] https://review.openstack.org/#/c/137096/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [qa] gate-tempest-dsvm-neutron-heat-slow future

2014-11-25 Thread Sean Dague
So as I was waiting for other tests to return, I started looking through
our existing test lists.

gate-tempest-dsvm-neutron-heat-slow has been slowly evaporating, and I'm
no longer convinced that it does anything useful (and just burns test
nodes).

The entire output of the job is currently as follows:

2014-11-25 14:43:13.801 | heat-slow runtests: commands[0] | bash
tools/pretty_tox.sh
(?=.*\[.*\bslow\b.*\])(^tempest\.(api|scenario)\.orchestration)
--concurrency=4
2014-11-25 14:43:21.313 | {1}
tempest.scenario.orchestration.test_server_cfn_init.CfnInitScenarioTest.test_server_cfn_init
... SKIPPED: Skipped until Bug: 1374175 is resolved.
2014-11-25 14:47:36.271 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_network
[0.169736s] ... ok
2014-11-25 14:47:36.271 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_resources
[0.000508s] ... ok
2014-11-25 14:47:36.291 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router
[0.019679s] ... ok
2014-11-25 14:47:36.313 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_router_interface
[0.020597s] ... ok
2014-11-25 14:47:36.564 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_server
[0.250788s] ... ok
2014-11-25 14:47:36.580 | {0}
tempest.api.orchestration.stacks.test_neutron_resources.NeutronResourcesTestJSON.test_created_subnet
[0.015410s] ... ok
2014-11-25 14:47:44.113 |
2014-11-25 14:47:44.113 | ==
2014-11-25 14:47:44.113 | Totals
2014-11-25 14:47:44.113 | ==
2014-11-25 14:47:44.114 | Run: 7 in 0.478173 sec.
2014-11-25 14:47:44.114 |  - Passed: 6
2014-11-25 14:47:44.114 |  - Skipped: 1
2014-11-25 14:47:44.115 |  - Failed: 0
2014-11-25 14:47:44.115 |
2014-11-25 14:47:44.116 | ==
2014-11-25 14:47:44.116 | Worker Balance
2014-11-25 14:47:44.116 | ==
2014-11-25 14:47:44.117 |  - Worker 0 (6 tests) => 0:00:00.480677s
2014-11-25 14:47:44.117 |  - Worker 1 (1 tests) => 0:00:00.001455s

So we are running about 1s worth of work, no longer in parallel (as
there aren't enough classes to even do parallel runs).

Given the emergence of the heat functional job, and the fact that this
is really not testing anything any more, I'd like to propose we just
remove it entirely at this stage and get the test nodes back.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-25 Thread Matthew Farrellee

On 11/25/2014 02:37 AM, Zhidong Yu wrote:


Current meeting time:
 18:00UTC: Moscow (9pm)    China (2am)    US West (10am)

My proposal:
 18:00UTC: Moscow (9pm)    China (2am)    US West (10am)
 00:00UTC: Moscow (3am)    China (8am)    US West (4pm)


fyi, a number of us are US East (US West + 3 hours), so...

current meeting time:
 18:00UTC: Moscow (9pm)  China(2am)  US West(10am)/US East (1pm)

and during daylight savings it's US West(11am)/US East(2pm)

so the proposal is:
 18:00UTC: Moscow (9pm)  China(2am)  US (W 10am / E 1pm)
 00:00UTC: Moscow (3am)  China(8am)  US (W 4pm / E 7pm)

given it's literally impossible to schedule a meeting during business 
hours across saratov, china and the us, that's a pretty reasonable 
proposal. my concern is that 00:00UTC may be thin on saratov & US 
participants.


also consider alternating the existing schedule w/ something that's ~4 
hours earlier...

 14:00UTC: Moscow (5pm)  China(10pm)  US (W 6am / E 9am)

best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [ironic] how to remove check-tempest-dsvm-ironic-pxe_ssh on Nova check

2014-11-25 Thread Sean Dague
On 11/25/2014 10:07 AM, Jim Rollenhagen wrote:
> On Tue, Nov 25, 2014 at 08:02:56AM -0500, Sean Dague wrote:
>> When at Summit I discovered that check-tempest-dsvm-ironic-pxe_ssh is
>> now voting on Nova check queue. The reasons given is that the Nova team
>> ignored the interface contract that was being provided to Ironic, broke
>> them, so the Ironic team pushed for co-gating (which basically means the
>> interface contract is now enforced by a 3rd party outside of Nova / Ironic).
>>
>> However, this was all in vague term, and I think is exactly the kind of
>> thing we don't want to do. Which is use the gate as a proxy fight over
>> teams breaking contracts with other teams.
>>
>> So I'd like to dive into what changes happened and what actually broke,
>> so that we can get back to doing this smarter.
>>
>> Because if we are going to continue to grow as a community, we have to
>> co-gate less. It has become a crutch to not think about interfaces and
>> implications of changes, and is something we need to be doing a lot less of.
> 
> Completely agree here; I would love to not gate Nova on Ironic.
> 
> The major problem is that Nova's driver API is explicitly not stable.[0]
> If the driver API becomes stable and properly versioned, Nova should be
> able to change the driver API without breaking Ironic.
> 
> Now, this won't fix all cases where Nova could break Ironic. The
> resource tracker or scheduler could change in a breaking way. However, I
> think the driver API has been the most common cause of Ironic breakage,
> so I think it's a great first step.
> 
> // jim
> 
> [0] 
> http://docs.openstack.org/developer/nova/devref/policies.html#public-contractual-apis

So we actually had a test in tree for that part of the contract...

We don't need the surface to be a public contract; we just need to know
what things Ironic is depending on and realize those can't be changed
without some compatibility code put in place.
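
By "compatibility code" I mean the boring kind of thing below -- a purely
hypothetical example (method and argument names are made up, not an actual
nova interface) of how a call into the driver can grow a new argument
without immediately breaking drivers that haven't caught up yet:

    import inspect

    def call_driver_attach(driver, context, instance, thing, flags=None):
        # 'flags' is the newly added argument.  Only pass it to drivers
        # that already know about it; older out-of-tree drivers keep
        # working for a cycle while they adapt.
        argspec = inspect.getargspec(driver.attach_thing)
        if 'flags' in argspec.args:
            return driver.attach_thing(context, instance, thing, flags=flags)
        return driver.attach_thing(context, instance, thing)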

Again... without knowing exactly what happened (I was on leave) it's
hard to come up with a solution. However, I think the co-gate was an
elephant gun that we don't actually want.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [ironic] how to remove check-tempest-dsvm-ironic-pxe_ssh on Nova check

2014-11-25 Thread Jim Rollenhagen
On Tue, Nov 25, 2014 at 08:02:56AM -0500, Sean Dague wrote:
> When at Summit I discovered that check-tempest-dsvm-ironic-pxe_ssh is
> now voting on Nova check queue. The reasons given is that the Nova team
> ignored the interface contract that was being provided to Ironic, broke
> them, so the Ironic team pushed for co-gating (which basically means the
> interface contract is now enforced by a 3rd party outside of Nova / Ironic).
> 
> However, this was all in vague term, and I think is exactly the kind of
> thing we don't want to do. Which is use the gate as a proxy fight over
> teams breaking contracts with other teams.
> 
> So I'd like to dive into what changes happened and what actually broke,
> so that we can get back to doing this smarter.
> 
> Because if we are going to continue to grow as a community, we have to
> co-gate less. It has become a crutch to not think about interfaces and
> implications of changes, and is something we need to be doing a lot less of.

Completely agree here; I would love to not gate Nova on Ironic.

The major problem is that Nova's driver API is explicitly not stable.[0]
If the driver API becomes stable and properly versioned, Nova should be
able to change the driver API without breaking Ironic.

Now, this won't fix all cases where Nova could break Ironic. The
resource tracker or scheduler could change in a breaking way. However, I
think the driver API has been the most common cause of Ironic breakage,
so I think it's a great first step.

// jim

[0] 
http://docs.openstack.org/developer/nova/devref/policies.html#public-contractual-apis

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-25 Thread Trevor McKay
+1 for me, too.  But I agree with Mike -- we need input from Saratov and
similar time zones.

I think though that the folks in Saratov generally try to shift hours
anyway, to better align with the rest of us.  So I think something that
works for the US will likely work for them.

Trev

On Tue, 2014-11-25 at 09:54 -0500, michael mccune wrote:
> On 11/25/2014 02:37 AM, Zhidong Yu wrote:
> > I know it's difficult to find a time good for both US, Europe and Asia.
> > I suggest we could have two different meeting series with different
> > time, i.e. US/Europe meeting this week, US/Asia meeting next week, and
> > so on.
> 
> i'd be ok with alternating week schedules
> 
> > My proposal:
> >  18:00UTC: Moscow (9pm)    China (2am)    US West (10am)
> >  00:00UTC: Moscow (3am)    China (8am)    US West (4pm)
> >
> 
> this works for me, but i realize it might be difficult for the folks in 
> Russia. not sure if there is a better option though.
> 
> thanks for putting this forward Zhidong
> 
> regards,
> mike
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] [qa] which test configs does the swift team find useful

2014-11-25 Thread Sean Dague
As we are trying to do smart disaggregation of tests in the gate, I
think it's important to figure out which test configurations seem to be
actually helping, and which aren't. As the swift team has long had a
functional test job, this seems like a good place to start. (Also the
field deploy / upgrade story on Swift is probably one of the best of any
OpenStack project, so removing friction is probably in order.)

gate-swift-pep8                       SUCCESS in 1m 16s
gate-swift-docs                       SUCCESS in 1m 48s
gate-swift-python27                   SUCCESS in 3m 24s
check-tempest-dsvm-full               SUCCESS in 56m 51s
check-tempest-dsvm-postgres-full      SUCCESS in 54m 53s
check-tempest-dsvm-neutron-full       SUCCESS in 1h 06m 09s
check-tempest-dsvm-neutron-heat-slow  SUCCESS in 31m 18s
check-grenade-dsvm                    SUCCESS in 39m 33s
gate-tempest-dsvm-large-ops           SUCCESS in 29m 34s
gate-tempest-dsvm-neutron-large-ops   SUCCESS in 22m 11s
gate-swift-tox-func                   SUCCESS in 2m 50s (non-voting)
check-swift-dsvm-functional           SUCCESS in 17m 12s
check-devstack-dsvm-cells             SUCCESS in 15m 18s


I think in looking at that it's obvious that:
* check-devstack-dsvm-cells
* check-tempest-dsvm-postgres-full
* gate-tempest-dsvm-large-ops
* gate-tempest-dsvm-neutron-large-ops
* check-tempest-dsvm-neutron-full

Provide nothing new to swift, the access patterns on the glance => swift
interaction aren't impacted on any of those, neither is the heat / swift
resource tests or volumes / swift backup tests.

check-tempest-dsvm-neutron-heat-slow doesn't touch swift either (it's
actually remarkably sparse of any content).

Which kind of leaves us with 1 full stack run, and the grenade job. Have
those caught real bugs? Does there remain value in them? Have other
teams that rely on swift found those to block regressions?

Let's figure out what's helpful, and what's not, and purge out all the
non helpful stuff.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] which core team members are diving into - http://status.openstack.org/elastic-recheck/#1373513

2014-11-25 Thread Matt Riedemann



On 11/25/2014 8:11 AM, Sean Dague wrote:

There is currently a review stream coming into Tempest to add Cinder v2
tests in addition to the Cinder v1 tests. At the same time the currently
biggest race fail in the gate related to the projects is
http://status.openstack.org/elastic-recheck/#1373513 - which is cinder
related.

I believe these 2 facts are coupled. The number of volume tests we have
in tempest is somewhat small, and as such the likelihood of them running
simultaneously is also small. However the fact that as the # of tests
with volumes goes up we are getting more of these race fails typically
means that what's actually happening is 2 vol ops that aren't safe to
run at the same time, are.

This remains critical - https://bugs.launchpad.net/cinder/+bug/1373513 -
with no assignee.

So we really need dedicated diving on this (last bug update with any
code was a month ago), otherwise we need to stop adding these tests to
Tempest, and honestly start skipping the volume tests if we can't have a
repeatable success.

-Sean



I just put up an e-r query for a newly opened bug 
https://bugs.launchpad.net/cinder/+bug/1396186 this morning, it looks 
similar to bug 1373513 but without the blocked task error in syslog.


There is a three minute gap between when the volume is being deleted in 
c-vol logs and when we see the volume uuid logged again, at which point 
tempest has already timed out waiting for the delete to complete.


We should at least get some patches to add diagnostic logging in these 
delete flows (or periodic tasks that use the same locks/low-level i/o 
bound commands?) to try and pinpoint these failures.


I think I'm going to propose a skip patch for test_volume_boot_pattern 
since that just seems to be a never ending cause of pain until these 
root issues get fixed.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-25 Thread michael mccune

On 11/25/2014 02:37 AM, Zhidong Yu wrote:

I know it's difficult to find a time good for both US, Europe and Asia.
I suggest we could have two different meeting series with different
time, i.e. US/Europe meeting this week, US/Asia meeting next week, and
so on.


i'd be ok with alternating week schedules


My proposal:
 18:00UTC: Moscow (9pm)    China (2am)    US West (10am)
 00:00UTC: Moscow (3am)    China (8am)    US West (4pm)



this works for me, but i realize it might be difficult for the folks in 
Russia. not sure if there is a better option though.


thanks for putting this forward Zhidong

regards,
mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] licenses

2014-11-25 Thread Arkady_Kanevsky
What is the license that should be used for specs and for the code?
While all the code I had seen is under Apache 2 license, many of the specs are 
under CCPL-3 license.
Is that the guidance?
Thanks,
Arkady
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-25 Thread Vladimir Kuklin
Bartosz,

It is obviously possible to install zabbix on the master nodes and put it
under pacemaker control. But it seems very strange to me to monitor
something with software located on the nodes that you are monitoring.

On Tue, Nov 25, 2014 at 4:21 PM, Bartosz Kupidura 
wrote:

> Hello All,
>
> Im working on Zabbix implementation which include HA support.
>
> Zabbix server should be deployed on all controllers in HA mode.
>
> Currently we have dedicated role 'zabbix-server', which does not support
> more
> than one zabbix-server. Instead of this we will move monitoring solution
> (zabbix),
> as an additional component.
>
> We will introduce additional role 'zabbix-monitoring', assigned to all
> servers with
> lowest priority in serializer (run puppet after every other roles) when
> zabbix is
> enabled.
> 'Zabbix-monitoring' role will be assigned automatically.
>
> When zabbix component is enabled, we will install zabbix-server on all
> controllers
> in active-backup mode (pacemaker+haproxy).
>
> In next stage, we can allow users to deploy zabbix-server on dedicated
> node OR
> on controllers for performance reasons.
> But for now we should force zabbix-server to be deployed on controllers.
>
> BP is in initial phase, but code is ready and working with Fuel 5.1.
> Now im checking if it works with master.
>
> Any comments are welcome!
>
> BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha
>
> Best Regards,
> Bartosz Kupidura
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Reactive enforcement specs

2014-11-25 Thread Zhipeng Huang
Hi All,

Since the meeting time would be 1:00am in China, I'm afraid I probably
won't make it on IRC. As I have left comments on
policy-event-trigger and explicit-reactive-enforcement, my colleague and I
are interested in these specs and would like to work on the coding
accordingly.

Thanks !

On Fri, Nov 21, 2014 at 12:50 AM, Tim Hinrichs  wrote:

>  Hi all,
>
>  Recently there’s been quite a bit of interest in adding reactive
> enforcement to Congress: the ability to write policies that tell Congress
> to execute actions to correct policy violations.  We’re planning to add
> this feature in the next release.  I wrote a few specs that split this work
> into several bite-sized pieces (one was accidentally merged
> prematurely—it’s still up for discussion).
>
>  Let’s discuss these over Gerrit (the usual spec process).  We’re trying
> to finalize these specs by the middle of next week (a little behind the
> usual OpenStack schedule).  For those of you who haven’t left comments via
> Gerrit, you need to ...
>
>  1) log in to Gerrit using your Launchpad ID,
> 2) leave comments on specific lines in individual files by double-clicking
> the line you’d like to comment on,
> 3) click the Review button on the initial page
> 4) click the Publish Comments button.
>
>  Add triggers to policy engine (a programmatic interface useful for
> implementing reactive enforcement)
> https://review.openstack.org/#/c/130010/
>
>  Add modal operators to policy language (how we might express reactive
> enforcement policies within Datalog)
> https://review.openstack.org/#/c/134376/
>
>  Action-execution interface (how we might modify data-source drivers so
> they can execute actions)
> https://review.openstack.org/#/c/134417/
>
> Explicit reactive enforcement (pulling all the pieces together)
> https://review.openstack.org/#/c/134418/
>
>
>  There are a number of additional specs generated since the summit.  Feel
> free to chime in on those too.
>
> https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z
>
>  Tim
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OpenDaylight, OpenCompute aficionado
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-25 Thread Marc Koderer
Hi Angus,

Am 25.11.2014 um 12:48 schrieb Angus Salkeld :

> On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer wrote:
> Hi all,
> 
> as discussed during our summit sessions we would like to expand the scope
> of the Telco WG (aka OpenStack NFV group) and start working
> on the orchestration topic (ETSI MANO).
> 
> Therefore we started with an etherpad [1] to collect ideas, use-cases and
> requirements.
> 
> Hi Marc,
> 
> You have quite a high acronym per sentence ratio going on that etherpad;)

Haha, welcome to the telco world :)

> 
> From Heat's perspective, we have a lot going on already, but we would love to 
> support
> what you are doing.

That’s exactly what we are planning. What we have is a long list of use-cases 
and
requirements. We need to transform them into specs for the OpenStack projects.
Many of those specs won’t be NFV specify, for instance a Telco cloud will be 
highly
distributed. So what we need is a multi-region heat support (which is already a 
planned
feature for Heat as I learned today).

> 
> You need to start getting specific about what you need and what the missing 
> gaps are.
> I see you are already looking at higher layers (TOSCA) also check out Murano 
> as well.
> 

Yep, I will check out Murano; I've never had a closer look at it.

Regards
Marc

> 
> Regards
> -Angus
> 
> 
> Goal is to discuss this document and move it onto the Telco WG wiki [2] when
> it becomes stable.
> 
> Feedback welcome ;)
> 
> Regards
> Marc
> Deutsche Telekom
> 
> [1] https://etherpad.openstack.org/p/telco_orchestration
> [2] https://wiki.openstack.org/wiki/TelcoWorkingGroup
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [qa] which core team members are diving into - http://status.openstack.org/elastic-recheck/#1373513

2014-11-25 Thread Sean Dague
There is currently a review stream coming into Tempest to add Cinder v2
tests in addition to the Cinder v1 tests. At the same time the currently
biggest race fail in the gate related to the projects is
http://status.openstack.org/elastic-recheck/#1373513 - which is cinder
related.

I believe these 2 facts are coupled. The number of volume tests we have
in tempest is somewhat small, and as such the likelihood of them running
simultaneously is also small. However the fact that as the # of tests
with volumes goes up we are getting more of these race fails typically
means that what's actually happening is 2 vol ops that aren't safe to
run at the same time, are.

This remains critical - https://bugs.launchpad.net/cinder/+bug/1373513 -
with no assignee.

So we really need dedicated diving on this (last bug update with any
code was a month ago), otherwise we need to stop adding these tests to
Tempest, and honestly start skipping the volume tests if we can't have a
repeatable success.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] New config options, no default change

2014-11-25 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 16/11/14 20:17, Jay S. Bryant wrote:
> All,
> 
> This is a question I have been struggling with for Cinder recently.
> Where do we draw the line on backports.  How do we handle config changes?
> 
> One thing for Cinder I am also considering, in addition to whether it
> changes the default functionality, is whether it is specific to the
> submitter's driver.  If it is only going to affect the driver I am more
> likely to consider it than something that is going to impact all of Cinder.
> 
> What is the text that should be included in the commit messages to make
> sure that it is picked up for release notes?  I want to make sure that
> people use that.

I'm not sure anyone tracks commit messages to create release notes. A
better way to handle this is to create a draft, post it in review
comments, and copy it to the release notes draft right before/after pushing
the patch into the gate.

> 
> Thanks!
> Jay
> 
> 
> On 11/11/2014 06:50 AM, Alan Pevec wrote:
>>> New config options may not change behavior (if default value preserves
>>> behavior), they still make documentation more incomplete (doc, books,
>>> and/or blogposts about Juno won't mention that option).
>> That's why we definitely need such changes described clearly in stable
>> release notes.
>> I also lean to accept this as an exception for stable/juno, I'll
>> request relnote text in the review.
>>
>> Cheers,
>> Alan
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUdI3AAAoJEC5aWaUY1u57YDkH/1LL/Y0gL2egRs1F6pgoaVbk
bEREvavZMyV6JFrojfJz2HKVxeo//0AdCcr3+W7KH5pXtposhg3Xf5v6bhF+n0gO
NX8u23z2zBLh6xdYcJHiRtMz1zhXT66xDhZso4bMNAL98glGOv1rrbkmkj43pR2L
TKSgRyes75nEBOlvPi79Co+2Ti3Z60HbS1NwgqCTGb9yRV3o0JDMZ3+zdFKlrTTf
0ZkrqEHtDaS0wEJmi7vqDAflNBPPn4lo8mAcju9k80lwrCs7g6VdqYJec0Nb/1gJ
Foj6vWRPHDH1ftph3am4yhY6Gs+dXQ1nmhEK0zFucDeLXz01Gql3vKX0xK18Rho=
=6ABy
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] Zabbix in HA mode

2014-11-25 Thread Rob Basham
Rob Basham

Cloud Systems Software Architecture
971-344-1999


Bartosz Kupidura  wrote on 11/25/2014 05:21:59 AM:

> From: Bartosz Kupidura 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/25/2014 05:26 AM
> Subject: [openstack-dev] [FUEL] Zabbix in HA mode
> 
> Hello All,
> 
> Im working on Zabbix implementation which include HA support.
> 
> Zabbix server should be deployed on all controllers in HA mode.
> 
> Currently we have dedicated role 'zabbix-server', which does not support 
more 
> than one zabbix-server. Instead of this we will move monitoring 
> solution (zabbix), 
> as an additional component.
> 
> We will introduce additional role 'zabbix-monitoring', assigned to 
> all servers with 
> lowest priority in serializer (run puppet after every other roles) 
> when zabbix is 
> enabled.
> 'Zabbix-monitoring' role will be assigned automatically.
> 
> When zabbix component is enabled, we will install zabbix-server on 
> all controllers 
> in active-backup mode (pacemaker+haproxy).
> 
> In next stage, we can allow users to deploy zabbix-server on dedicated 
node OR
> on controllers for performance reasons.
> But for now we should force zabbix-server to be deployed on controllers.
> 
> BP is in initial phase, but code is ready and working with Fuel 5.1. 
> Now im checking if it works with master.
> 
> Any comments are welcome!

This is of limited value to my business due to the GPL license -- so my 
company's lawyers tell me.  I will be unable to take advantage of what 
looks to be a solid solution from what I can see of Zabbix.  Are there any 
risks to Fuel (open source contamination) from this approach?  I doubt it 
but I want to make sure you are considering this.

> 
> BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha
> 
> Best Regards,
> Bartosz Kupidura
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-25 Thread Yuriy.Babenko

Angus,
do you have specific questions?
We could describe in detail what we mean with all these abbreviations.. )
This is kind of telco slang, but surely there is a need to describe this in 
more detail for OpenStack Community should the need be.

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

Von: Angus Salkeld [mailto:asalk...@mirantis.com]
Gesendet: Dienstag, 25. November 2014 12:48
An: OpenStack Development Mailing List (not for usage questions)
Betreff: Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer 
mailto:m...@koderer.com>> wrote:
Hi all,

as discussed during our summit sessions we would like to expand the scope
of the Telco WG (aka OpenStack NFV group) and start working
on the orchestration topic (ETSI MANO).

Therefore we started with an etherpad [1] to collect ideas, use-cases and
requirements.

Hi Marc,
You have quite a high acronym per sentence ratio going on that etherpad;)
From Heat's perspective, we have a lot going on already, but we would love to 
support
what you are doing.
You need to start getting specific about what you need and what the missing 
gaps are.
I see you are already looking at higher layers (TOSCA) also check out Murano as 
well.

Regards
-Angus


Goal is to discuss this document and move it onto the Telco WG wiki [2] when
it becomes stable.

Feedback welcome ;)

Regards
Marc
Deutsche Telekom

[1] https://etherpad.openstack.org/p/telco_orchestration
[2] https://wiki.openstack.org/wiki/TelcoWorkingGroup

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Integration tests on the gate

2014-11-25 Thread Julie Pichon
Hi folks,

You may have noticed a new job in the check queue for Horizon patches,
gate-horizon-dsvm-integration. This job runs the integration tests suite
from our repository. [1]

The job is marked as non-voting but *it is meant to pass.* The plan is
to leave it that way for a couple of weeks to make sure that it is
stable. After that it will become a voting job.

What to do if the job fails
---

If you notice a failure, please look at the logs and make sure that it
wasn't caused by your patch. If it doesn't look related or if you're not
sure how to interpret the results, please ask on #openstack-horizon or
reply to this thread. We really want to avoid people getting used to the
job failing, getting annoyed at it and/or blindly rechecking. If there
are any intermittent or strange issues we'll postpone making the job
voting, but we need to know about them so we can investigate and fix them.

How to help
---

If you'd like to help, you're very welcome to do so either by reviewing
new tests [2] or writing more of them [3]. As with everywhere else, all
help is very welcome and review attention is particularly appreciated!

I'm really looking forward to having the integration tests be part of
the main voting gate and for us improve the coverage. I'd really like to
thank in particular Daniel Korn and Tomáš Nováčik for their huge efforts
on these tests over the past year.

Thanks,

Julie


[1]
https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/integration_tests
[2] https://wiki.openstack.org/wiki/Horizon/Testing/UI#Writing_a_test
[3]
https://review.openstack.org/#/q/project:openstack/horizon+file:%255E.*/integration_tests/.*+status:open,n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FUEL] Zabbix in HA mode

2014-11-25 Thread Bartosz Kupidura
Hello All,

I'm working on a Zabbix implementation which includes HA support.

Zabbix server should be deployed on all controllers in HA mode.

Currently we have dedicated role 'zabbix-server', which does not support more 
than one zabbix-server. Instead of this we will move monitoring solution 
(zabbix), 
as an additional component.

We will introduce additional role 'zabbix-monitoring', assigned to all servers 
with 
lowest priority in serializer (run puppet after every other roles) when zabbix 
is 
enabled.
'Zabbix-monitoring' role will be assigned automatically.

When zabbix component is enabled, we will install zabbix-server on all 
controllers 
in active-backup mode (pacemaker+haproxy).

In next stage, we can allow users to deploy zabbix-server on dedicated node OR
on controllers for performance reasons.
But for now we should force zabbix-server to be deployed on controllers.

BP is in initial phase, but code is ready and working with Fuel 5.1. 
Now I'm checking if it works with master.

Any comments are welcome!

BP link: https://blueprints.launchpad.net/fuel/+spec/zabbix-ha

Best Regards,
Bartosz Kupidura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [ironic] how to remove check-tempest-dsvm-ironic-pxe_ssh on Nova check

2014-11-25 Thread Sean Dague
When at Summit I discovered that check-tempest-dsvm-ironic-pxe_ssh is
now voting on Nova check queue. The reasons given is that the Nova team
ignored the interface contract that was being provided to Ironic, broke
them, so the Ironic team pushed for co-gating (which basically means the
interface contract is now enforced by a 3rd party outside of Nova / Ironic).

However, this was all in vague terms, and I think it is exactly the kind of
thing we don't want to do, which is use the gate as a proxy fight over
teams breaking contracts with other teams.

So I'd like to dive into what changes happened and what actually broke,
so that we can get back to doing this smarter.

Because if we are going to continue to grow as a community, we have to
co-gate less. It has become a crutch to not think about interfaces and
implications of changes, and is something we need to be doing a lot less of.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-25 Thread Doug Hellmann

On Nov 24, 2014, at 4:06 PM, Jay Pipes  wrote:

> On 11/24/2014 03:11 PM, Joshua Harlow wrote:
>> Dan Smith wrote:
 3. vish brought up one draw back of versioned objects: the difficulty in
 cherry picking commits for stable branches - Is this a show stopper?.
>>> 
>>> After some discussion with some of the interested parties, we're
>>> planning to add a third .z element to the version numbers and use that
>>> to handle backports in the same way that we do for RPC:
>>> 
>>> https://review.openstack.org/#/c/134623/
>>> 
 Next steps:
 - Jay suggested making a second spec that would lay out what it would
 look like if we used google protocol buffers.
 - Dan: do you need some help in making this happen, do we need some
 volunteers?
>>> 
>>> I'm not planning to look into this, especially since we discussed it a
>>> couple years ago when deciding to do what we're currently doing. If
>>> someone else does, creates a thing that is demonstrably more useful than
>>> what we have, and provides a migration plan, then cool. Otherwise, I'm
>>> not really planning to stop what I'm doing at the moment.
>>> 
 - Are there any other concrete things we can do to get this usable by
 other projects in a timely manner?
>>> 
>>> To be honest, since the summit, I've not done anything with the current
>>> oslo spec, given the potential for doing something different that was
>>> raised. I know that cinder folks (at least) are planning to start
>>> copying code into their tree to get moving.
>>> 
>>> I think we need a decision to either (a) dump what we've got into the
>>> proposed library (or incubator) and plan to move forward incrementally
>>> or (b) each continue doing our own thing(s) in our own trees while we
>>> wait for someone to create something based on GPB that does what we want.
>> 
>> I'd prefer (a); although I hope there is a owner/lead for this library
>> (dan?) and it's not just dumped on the oslo folks as that won't work out
>> so well I think. It'd be nice if said owner could also look into (b) but
>> that's at there own (or other library supporter) time I suppose (I
>> personally think (b) would probably allow for a larger community of
>> folks to get involved in this library, would potentially reduce the
>> amount of custom/overlapping code and other similar benefits...).
> 
> I gave some comments at the very end of the summit session on this, and I 
> want to be clear about something. I definitely like GPB, and there's definite 
> overlap with some things that GPB does and things that nova.objects does.
> 
> That said, I don't think it's wise to make oslo-versionedobjects be a totally 
> new thing. I think we should use nova.objects as the base of a new 
> oslo-versionedobjects library, and we should evolve oslo-versionedobjects 
> slowly over time, eventually allowing for nova, ironic, and whomever else is 
> currently using nova/objects, to align with an Oslo library vision for this.
> 
> So, in short, I also think a) is the appropriate path to take.

+1

When Dan and I talked about this, I said I would take care of exporting the 
nova objects git history into a new repository. We’ve had some other things 
blocking work in Oslo that I needed to handle, but I expect to be able to pick 
up this work soon. If someone else wants to jump in, I’ll be happy to give a 
brain dump of what I planned to do for the export, since the existing Oslo tool 
that we use on the incubator isn’t quite right for the job.

Doug

> 
> Best,
> -jay
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon]

2014-11-25 Thread Ana Krivokapic

On 25/11/14 00:09, David Lyle wrote:
> I am pleased to nominate Thai Tran and Cindy Lu to horizon-core. 
>
> Both Thai and Cindy have been contributing significant numbers of high
> quality reviews during Juno and Kilo cycles. They are consistently
> among the top non-core reviewers. They are also responsible for a
> significant number of patches to Horizon. Both have a strong
> understanding of the Horizon code base and the direction of the project.
>
> Horizon core team members please vote +1 or -1 to the
> nominations either in reply or by private communication. Voting will
> close on Friday unless I hear from everyone before that.
>
> Thanks,
> David
>
>

+1 for both.

Cindy and Thai, thanks for your hard work!

-- 
Regards,

Ana Krivokapic
Software Engineer
OpenStack team
Red Hat Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues regarding Jenkins

2014-11-25 Thread Sean Dague
On 11/25/2014 07:30 AM, Andreas Jaeger wrote:
> On 11/25/2014 01:22 PM, Abhishek Talwar/HYD/TCS wrote:
>> Hi All,
>>
>> I am facing some issues regarding Jenkins. Actually Jenkins is failing
>> and it is giving "NOT_REGISTERED" in some tests.
>> So, kindly let me know what does "NOT_REGISTERED" mean here.
>>
>> Build failed (check pipeline). For information on how to proceed, see
>> https://wiki.openstack.org/wiki/GerritJenkinsGit#Test_Failures
> 
> Which patch was this? There was some trouble yesterday.
> 
> Andreas--
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Right now all icehouse jobs are showing up as NOT_REGISTERED (i.e. Zuul is
asking for jobs that no worker has registered). It's cross-project and
happening right now.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues regarding Jenkins

2014-11-25 Thread Andreas Jaeger
On 11/25/2014 01:22 PM, Abhishek Talwar/HYD/TCS wrote:
> Hi All,
> 
> I am facing some issues regarding Jenkins. Actually Jenkins is failing
> and it is giving "NOT_REGISTERED" in some tests.
> So, kindly let me know what does "NOT_REGISTERED" mean here.
> 
> Build failed (check pipeline). For information on how to proceed, see
> https://wiki.openstack.org/wiki/GerritJenkinsGit#Test_Failures

Which patch was this? There was some trouble yesterday.

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ODL Policy Driver Specs

2014-11-25 Thread Sachi Gupta
Hey All,

I need to understand the interaction between the OpenStack GBP and the
OpenDaylight GBP projects, which is handled by the ODL policy driver.

Can someone provide me with the specs of the ODL policy driver so I can
understand the call flow?


Thanks & Regards
Sachi Gupta
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

