Re: [openstack-dev] [Fuel] Fuel Plugins, First look; What's Next?

2014-11-24 Thread Andrew Woodward
On Mon, Nov 24, 2014 at 4:40 AM, Evgeniy L  wrote:
> Hi Andrew,
>
> Comments inline.
> Also, could you please provide a link to the OpenStack upgrade feature?
> It's not clear why you need it as a plugin or how you are going
> to deliver this feature.
>
> On Sat, Nov 22, 2014 at 4:23 AM, Andrew Woodward  wrote:
>>
>> So as part of the pumphouse integration, I've started poking around
>> the Plugin Arch implementation as an attempt to plug it into the fuel
>> master.
>>
>> This would require that the plugin install a container, and some
>> scripts into the master node.
>>
>> First look:
>> I've looked over the fuel plugins spec [1] and see that the install
>> script was removed from rev 15 -> 16 (line 134). This creates problems
>> due to the need to install the container and scripts, so I've
>> created a bug [2] for this so that we can allow for an install script
>> to be executed prior to HCF for 6.0.
>
>
> Yes, it was removed, but nothing stops you from creating the install
> script and putting it in the tarball; you don't need any changes to the
> current implementation.

How would it be executed? The plugin loading done by fuel-client
doesn't cover this.

>
> For the reasons why it was done this way, see the separate mailing thread [1].
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-October/049073.html
>
>>
>>
>> Looking into the implementation of the install routine [3] to
>> implement [2], I see that the fuelclient is extracting the tar blindly
>> (more on that at #3) on the executor system that fuelclient is being
>> executed from. Problems with this include: 1) the fuelclient may not
>> be root privileged (like in Mirantis OpenStack Express); 2) the
>> fuelclient may not be running on the same system as nailgun; 3) we are
>> just calling .extractall on the tarball, which means that we haven't
>> done any validation on the files coming out of the tarball. We need to
>> validate that 3.a) the tarball was actually encoded with the right
>> base path 3.b) that the tasks.yaml file is validated and all the noted
>> scripts are found. Really, the install of the plugin should be handled
>> by the nailgun side to help with 1,2.
>
>
> 1. If you have a custom installation, you have to provide custom
> permissions for the /var/www/nailgun/plugins directory.
> 2. You are absolutely right; see the thread above for why we decided to
> add this feature even though it was a wrong decision from an
> architecture point of view.
> 3. "haven't done any validation" - not exactly: validation is done at the
> plugin building stage, and we also have simple validation at the plugin
> installation stage on the Nailgun side (that the data are consistent from
> Nailgun's point of view). There are several reasons why it was done
> mainly on the fuel-plugin-builder side:
>   a. the plugin is validated before it's installed (this dramatically
>   simplifies development)
>   b. you can also check that a plugin is valid without building it,
>   using the 'fpb --check fuel_plugin_name' parameter
>   c. faster delivery of fixes: if there is a bug in validation (we had
>   several of them during the development of fuel-plugin-builder), we
>   cannot just release a new version of Fuel, but we can do it with
>   fuel-plugin-builder; we had 2 releases [1]. For more complicated
>   structures you will have bugs in validation for sure.
>   d. if we decided to support validation on both sides, we would end up
>   with a lot of bugs related to desynchronization of the validators
>   between Nailgun and fuel-plugin-builder.

The main validation points that should be done by Nailgun are to verify
that the paths are correct (a rough sketch follows), i.e.:
* the tar root ./ == metadata.yaml['name']
* tasks.yaml + metadata.yaml refer to valid paths for "cmd",
"deployment_scripts_path", and "repository_path"

Right now there is no contract between a user building the plugin with
fpb and one simply adding all the files to a tarball by hand. If fpb is
supposed to be doing this validation, then there should be some form of
signature that can be parsed to ensure that these items have been
pre-validated and the package wasn't modified or built by hand. Something
easy and cheap would be along the lines of 'cat metadata.yaml tasks.yaml |
md5sum >md5sum' and validating this when we load the package (a sketch
follows). It also gives us a starting point for other signers.
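
As a sketch, verification of such an md5sum could look like this (the
tarball layout and file names here are assumptions, not the actual
fuel-plugin-builder format):

import hashlib
import tarfile

def plugin_checksum_ok(tarball_path, plugin_name):
    # Recompute 'cat metadata.yaml tasks.yaml | md5sum' for the files
    # inside the tarball and compare with the bundled md5sum file.
    md5 = hashlib.md5()
    with tarfile.open(tarball_path) as tar:
        for name in ('metadata.yaml', 'tasks.yaml'):
            md5.update(tar.extractfile('%s/%s' % (plugin_name, name)).read())
        recorded = tar.extractfile('%s/md5sum' % plugin_name).read()
    return md5.hexdigest() == recorded.split()[0].decode()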

Alternately, we could use fpb to validate the package prior to
installing it into Nailgun.

>
> [1]
> https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/CHANGELOG.md
>
>>
>>
>> What's next?
>> There are many parts of PA that need to be extended. I think these are
>> the ones we must tackle next to cover the most cases:
>> a) plugin packaging: it appears that none of the "core plugins" (those
>> in fuel-plugins) are bundled into the ISO.
>> b) plugin signing: we can't have "core plugins" without some method of
>> testing, certifying, and signing them so that we can know that they
>> are trusted.
>>

[openstack-dev] [sahara] Asia friendly IRC meeting time

2014-11-24 Thread Zhidong Yu
Hi All,

I'd like to propose a time change for the weekly IRC meeting to make it more
convenient for people from Asia to attend. As you might be aware, we here
at Intel in China have a few people who have been working on Sahara actively
since the Juno release. I also noticed that there are contributions from NTT
Japan as well. So I think an Asia-friendly time could allow us to
participate in the community more efficiently and may potentially attract
more contributors from Asia.

I know it's difficult to find a time that works for the US, Europe, and Asia.
I suggest we have two meeting series with alternating times,
i.e. a US/Europe meeting one week, a US/Asia meeting the next, and so on.

Current meeting time:
18:00 UTC: Moscow (9pm), China (2am), US West (10am)

My proposal:
18:00 UTC: Moscow (9pm), China (2am), US West (10am)
00:00 UTC: Moscow (3am), China (8am), US West (4pm)

Thanks, Zhidong


Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-24 Thread Matsuda, Kenichiro
Hi,

Thanks for your advice.

I understand that the logs are necessary to judge whether there were any
failures on the object-replicator.
I also think that it would be useful if the recon info for the
object-replicator included failures (just like the recon info for the
account-replicator and container-replicator).
Is there any reason failures are not included in recon?
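
For reference, the recon middleware can also be polled directly over HTTP;
a minimal sketch (the host/port and exact response fields are assumptions
on my side; see the swift-recon tool for the supported interface):

import requests

def object_replication_info(host, port=6000):
    # Ask the recon middleware on an object server for replication info.
    url = 'http://%s:%d/recon/replication/object' % (host, port)
    return requests.get(url).json()

info = object_replication_info('storage-node-1')
# e.g. info.get('object_replication_time'); nothing here reports per-run
# failures, which is exactly the gap described above.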

Kenichiro Matsuda.

> -Original Message-
> From: Clay Gerrard [mailto:clay.gerr...@gmail.com] 
> Sent: Tuesday, November 25, 2014 5:53 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [swift] a way of checking replicate completion 
> on swift cluster
>
> replication logs
>
> On Thu, Nov 20, 2014 at 9:32 PM, Matsuda, Kenichiro 
>  wrote:
> Hi,
>
> Thank you for the info.
>
> I was able to get replication info easily via the swift-recon API.
> But I wasn't able to judge from the object-replicator's recon info
> whether there were any failures.
>
> Could you please advise me on a way to get the object-replicator's failure info?



[openstack-dev] [Glance] Weekly-meeting time change.

2014-11-24 Thread Nikhil Komawar
Hi all,

Happy Thanksgiving (in advance)! The meeting on Nov. 27th has been duly 
cancelled. So, we will be having the next meeting on Thursday, December 4th at 
1400 UTC. The meeting after that will be on Dec 11th at 1500 UTC. (channel: 
#openstack-meeting-4 )

Please find the new alternating time slots and channel change for Glance weekly 
meetings here [0]. The weekly meeting details page is here [1]. The reference 
poll [2] as well as input from certain members was used to decide on the time 
change.

This change has been made to help collaborators from different parts of the 
world participate in the meetings. I hope it helps increase participation, 
which had been dropping in the past few weeks for the later time slot.

Please let me know if anyone has any concerns.

[0] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting
[1] https://wiki.openstack.org/wiki/Meetings/Glance
[2] http://doodle.com/nwc26k8satuyvvmz

Thanks,
-Nikhil


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 11/25

2014-11-24 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)





1) Status on cleanup work - 
https://wiki.openstack.org/wiki/Gantt/kilo


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-24 Thread Mike Bayer

> On Nov 24, 2014, at 7:32 PM, Michael Still  wrote:
> 
> Interesting. I hadn't seen consistency between the two databases as
> trumping doing this less horribly, but it sounds like it's more of a
> thing than I thought.

It really depends on what you need to do.  If you need to get a result set of 
all entities, deleted or not, consider the difference between a SELECT for all 
rows from a single table (easy) vs. doing a UNION from the primary table to the 
history table, matching up all the columns that hopefully do in fact match up 
(awkward), and then dealing with joining out to related tables if you need that 
as well (very awkward from a UNION).
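
A small SQLAlchemy Core sketch of that contrast, with illustrative table
definitions (not nova's real schema):

from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        select, union_all)

metadata = MetaData()
instances = Table('instances', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('hostname', String(255)),
                  Column('deleted', Integer, default=0))
shadow_instances = Table('shadow_instances', metadata,
                         Column('id', Integer, primary_key=True),
                         Column('hostname', String(255)))

# soft-delete column: "all rows, deleted or not" is a plain SELECT
everything = select([instances])

# history table: the same question needs a column-matched UNION, and
# joining related tables against this gets awkward quickly
everything = union_all(
    select([instances.c.id, instances.c.hostname]),
    select([shadow_instances.c.id, shadow_instances.c.hostname]))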

If you have any plans to consume these rows in the app, I'd advise just doing 
it like all the other tables.  If we want to change that approach, we'd do it 
en masse at some point and you'd get it for free.





Re: [openstack-dev] [sahara] Discussion on cm_api for global requirement

2014-11-24 Thread Zhidong Yu
Hi Trevor,

Thank you for bringing this to the IRC meeting! I agree with your
conclusion. Patch 136697 has been submitted to address it in the Sahara
documentation.

Thanks, Zhidong

On Tue, Nov 25, 2014 at 2:22 AM, Trevor McKay  wrote:

> Hello all,
>
>   at our last Sahara IRC meeting we started discussing whether or not to
> add
> a global requirement for cm_api.py
> https://review.openstack.org/#/c/130153/
>
>   One issue (but not the only issue) is that cm_api is not packaged for
> Fedora,
> Centos, or Ubuntu currently. The global requirements README points out
> that adding
> requirements for a new dependency more or less forces the distros to
> package the
> dependency for the next OS release.
>
>   Given that cm_api is needed for a plugin, but not for core Sahara
> functionality,
> should we request that the global requirement be added, or should we seek
> to add
> a global requirement only if/when cm_api is packaged?
>
>   Alternatively, can we support the plugin with additional documentation
> (ie, how
> to install cm_api on the Sahara node)?
>
>   Those present at the meeting agreed that it was probably better to defer
> a global
> requirement until/unless cm_api is packaged to avoid a burden on the
> distros.
>
>   Thoughts?
>
> Best,
>
> Trevor
>
> Minutes:
> http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.html
> Logs:
> http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.log.html
> https://github.com/openstack/requirements
>
>
>
>
>


Re: [openstack-dev] [all] removing XML testing completely from Tempest

2014-11-24 Thread Kenichi Oomichi

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Monday, November 24, 2014 10:57 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [all] removing XML testing completely from Tempest
> 
> Having XML payloads was never a universal part of OpenStack services.
> During the Icehouse release the TC declared that being an OpenStack
> service requires having a JSON REST API. Projects could do what they
> wanted beyond that. Lots of them deprecated and have been removing the
> XML cruft since then.
> 
> Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
> API for a long time.
> 
> Given that current branchless Tempest only supports as far back as
> Icehouse anyway, after these changes were made, I'd like to propose that
> all the XML code in Tempest should be removed. If a project wants to
> support something else beyond a JSON API that's on that project to test
> and document on their own.
> 
> We've definitively blocked adding new XML tests in Tempest anyway, but
> getting rid of the XML debt in the project will simplify it quite a bit,
> make it easier for contributors to join in, and seems consistent with
> the direction of OpenStack as a whole.

+1 for removing XML tests from Tempest.

Now Tempest contains XML tests for Nova, Keystone, Cinder, Neutron and
Ceilometer. On the other hand, it contains JSON tests for all projects.

XML support in both Nova and Keystone was marked as deprecated in
Icehouse, and the XML support in Neutron has already been removed.
So Cinder and Ceilometer are the only remaining projects with Tempest XML tests.

I don't think it is easy for clients to switch between JSON and XML for each
project, and I think concentrating on JSON is the right direction.

Thanks
Ken'ichi Ohmichi




Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-24 Thread Robert Collins
On 19 November 2014 at 11:31, Mark McClain  wrote:
> All-
>
> Over the last several months, the members of the Networking Program have
> been discussing ways to improve the management of our program.  When the
> Quantum project was initially launched, we envisioned a combined service
> that included all things network related.  This vision served us well in the
> early days as the team mostly focused on building out layers 2 and 3;
> however, we’ve run into growth challenges as the project started building
> out layers 4 through 7.  Initially, we thought that development would float
> across all layers of the networking stack, but the reality is that the
> development concentrates around either layer 2 and 3 or layers 4 through 7.
> In the last few cycles, we’ve also discovered that these concentrations have
> different velocities and a single core team forces one to match the other to
> the detriment of the one forced to slow down.
>
> Going forward we want to divide the Neutron repository into two separate
> repositories led by a common Networking PTL.  The current mission of the
> program will remain unchanged [1].  The split would be as follows:
>
> Neutron (Layer 2 and 3)
> - Provides REST service and technology agnostic abstractions for layer 2 and
> layer 3 services.
>
> Neutron Advanced Services Library (Layers 4 through 7)
> - A python library which is co-released with Neutron
> - The advanced services library provides controllers that can be configured to
> manage the abstractions for layer 4 through 7 services.

Just wondering: have you considered a deeper decomposition? This has
come up, e.g., during discussions in the Ironic context, where having
an API-driven DHCP environment would be great but all the virtual
network management of Neutron is irrelevant.

E.g. having a collection of minimal do-one-thing-well APIs with a
common model. More SOA than we are today, but easier to reason about
for individual deployments, and providing an arguably clearer place
for vendors that have entirely different backends for various services
to plug into.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-24 Thread Alessandro Pilotti


> On 25 Nov 2014, at 02:35, Michael Still  wrote:
> 
> On Tue, Nov 25, 2014 at 11:23 AM, Alessandro Pilotti
>  wrote:
>> Hi Michael,
>> 
>>> On 25 Nov 2014, at 01:57, Michael Still  wrote:
>>> 
>>> First off, sorry for the slow reply. I was on vacation last week.
>>> 
>>> The project priority list was produced as part of the design summit,
>>> and reflects nova's need to pay down technical debt in order to keep
>>> meeting our users' needs. So, whilst driver changes are important, they
>>> don't belong on that etherpad.
>>> 
>>> That said, I think the best way to help us keep up with our code
>>> review requirements is to be an active reviewer. I know we say this a
>>> lot, but many cores optimize for patches which have already been
>>> reviewed and +1'ed by a non-core. So... Code review even with a +1
>>> makes a real difference to us being able to keep up.
>>> 
>> 
>> Thanks for your reply, we actually do quite a lot of cross reviews, with the
>> general rule being that every patch produced by one of the Hyper-V team 
>> members
>> needs to be reviewed by at least another two.
>> 
>> The main issue is that reviews get lost at every rebase and keeping track of
>> this becomes non-trivial when there are a lot of open patches under review,
>> mostly interdependent. It's not easy to keep people motivated to do this, but
>> we do our best!
> 
> This is good news, and I will admit that I don't track the review
> throughput of sub teams in nova.
> 
> I feel like focusing on the start of each review chain is useful here.
> If you have the first two reviews in a chain with a couple of +1s on
> them already, then I think that's a reasonable set of reviews to bring
> up at a nova meeting. I sometimes see a +2 or two on reviews not at
> the beginning of a chain, and that's wasted effort.
> 

Great, later today we have our Hyper-V team meeting, so we'll re-apply any
missing reviews and I'll get back to you with the top priorities.
P.S.: danpb and johnthetubaguy have been really helpful on the Nova core side.

Thanks,

Alessandro


> Michael
> 
> -- 
> Rackspace Australia



Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-24 Thread Michael Still
On Tue, Nov 25, 2014 at 11:23 AM, Alessandro Pilotti
 wrote:
> Hi Michael,
>
>> On 25 Nov 2014, at 01:57, Michael Still  wrote:
>>
>> First off, sorry for the slow reply. I was on vacation last week.
>>
>> The project priority list was produced as part of the design summit,
>> and reflects nova's need to pay down technical debt in order to keep
>> meeting our users' needs. So, whilst driver changes are important, they
>> don't belong on that etherpad.
>>
>> That said, I think the best way to help us keep up with our code
>> review requirements is to be an active reviewer. I know we say this a
>> lot, but many cores optimize for patches which have already been
>> reviewed and +1'ed by a non-core. So... Code review even with a +1
>> makes a real difference to us being able to keep up.
>>
>
> Thanks for your reply, we actually do quite a lot of cross reviews, with the
> general rule being that every patch produced by one of the Hyper-V team 
> members
> needs to be reviewed by at least another two.
>
> The main issue is that reviews get lost at every rebase and keeping track of
> this becomes non-trivial when there are a lot of open patches under review,
> mostly interdependent. It's not easy to keep people motivated to do this, but
> we do our best!

This is good news, and I will admit that I don't track the review
throughput of sub teams in nova.

I feel like focusing on the start of each review chain is useful here.
If you have the first two reviews in a chain with a couple of +1s on
them already, then I think that's a reasonable set of reviews to bring
up at a nova meeting. I sometimes see a +2 or two on reviews not at
the beginning of a chain, and that's wasted effort.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] opnfv proposal on DR capability enhancement on OpenStack Nova

2014-11-24 Thread Zhipeng Huang
Hi all,

We have updated our proposal to include more details. The
opnfv wiki link should be public now:
https://wiki.opnfv.org/collaborative_development_projects/rescuer

We currently limit the scope of the project to providing VM DR state, so that
we can hit a K release target with a small amount of work. But if anyone
is interested in broadening the effort, you are more than welcome to join.
Please feel free to contact me if there are any questions.


Thanks :)

On Fri, Nov 14, 2014 at 8:10 AM, Zhipeng Huang 
wrote:

> Hi Keshava,
>
> What we want to achieve is to enable Nova to provide the visibility of its
> DR state during the whole DR procedure.
>
> At the time we wrote the first version of the proposal, we only considered
> the VM states on both the production site and the standby site.
>
> However, we are considering and working on a more expanded and detailed
> version as we speak, and if you are interested you are welcome to join the
> effort :)
>
> On Fri, Nov 14, 2014 at 1:52 AM, A, Keshava  wrote:
>
>> Zhipeng Huang,
>>
>> When multiple datacenters are interconnected over WAN/Internet and a
>> remote datacenter goes down, do you expect the 'native VM status' to get
>> changed accordingly?
>> Is this the requirement? Does this requirement come from an NFV Service VM
>> (like a routing VM)?
>> Isn't it then up to NFV routing (BGP/IGP) / MPLS signaling (LDP/RSVP)
>> protocols to handle this? Does OpenStack need to handle that?
>>
>> Please correct me if my understanding on this problem  is not correct.
>>
>> Thanks & regards,
>> keshava
>>
>> -Original Message-
>> From: Steve Gordon [mailto:sgor...@redhat.com]
>> Sent: Wednesday, November 12, 2014 6:24 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][DR][NFV] opnfv proposal on DR
>> capability enhancement on OpenStack Nova
>>
>> - Original Message -
>> > From: "Zhipeng Huang" 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> >
>> > Hi Team,
>> >
>> > I know we didn't propose this at the design summit, and it is kind of
>> > rude to jam a topic into the schedule this way. We were really
>> > stretched thin during the summit and didn't make it to the Nova
>> > discussion. Full apologies here :)
>> >
>> > What we want to discuss here is that we proposed a project in opnfv (
>> > https://wiki.opnfv.org/collaborative_development_projects/rescuer),
>> > which in fact is to enhance inter-DC DR capabilities in Nova. We hope
>> > we could achieve this in the K cycle, since there are no "HUGE" changes
>> > required to be done in Nova. We just propose to add certain DR status
>> > in Nova so operators could see what DR state the OpenStack is
>> > currently in, therefore when disaster occurs they won't cut off the
>> wrong stuff.
>> >
>> > Sorry again if we kinda barge in here, and we sincerely hope the Nova
>> > community could take a look at our proposal. Feel free to contact me
>> > if anyone got any questions :)
>> >
>> > --
>> > Zhipeng Huang
>>
>> Hi Zhipeng,
>>
>> I would just like to echo the comments from the opnfv-tech-discuss list
>> (which I notice is still private?) in saying that there is very little
>> detail on the wiki page describing what you actually intend to do. Given
>> this, it's very hard to provide any meaningful feedback. A lot more detail
>> is required, particularly if you intend to propose a specification based on
>> this idea.
>>
>> Thanks,
>>
>> Steve
>>
>> [1] https://wiki.opnfv.org/collaborative_development_projects/rescuer
>>
>>
>>
>
>
>
> --
> Zhipeng Huang
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
> OpenStack, OpenDaylight, OpenCompute aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OpenDaylight, OpenCompute aficionado


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-24 Thread Michael Still
On Tue, Nov 25, 2014 at 11:14 AM, Mike Bayer  wrote:
>
>> On Nov 24, 2014, at 5:20 PM, Michael Still  wrote:
>>
>> Heya,
>>
>> Review https://review.openstack.org/#/c/135644/4 proposes the addition
>> of a new database for our improved implementation of cells in Nova.
>> However, there's an outstanding question about how to handle soft
>> delete of rows -- we believe that we need to soft delete for forensic
>> purposes.
>
> Every time I talk to people about the soft delete thing, I hear the usual 
> refrain “we thought we needed it, but we didn’t and now it’s just overbuilt 
> cruft we want to get rid of”.
>
> I’m not saying you don’t have a need here, but you definitely have this 
> need and aren’t just following the herd, right?   Soft delete makes things 
> a lot less convenient.

I can't comment on other projects, but Nova definitely needs the soft
delete in the main nova database. Perhaps not for every table, but
there is definitely code in the code base which uses it right now.
Search for read_deleted=True if you're curious.

For this case in particular, the concern is that operators might need
to find where an instance was running once it is deleted to be able to
diagnose issues reported by users. I think that's a valid use case of
this particular data.

>> This is a new database, so its our big chance to get this right. So,
>> ideas welcome...
>>
>> Some initial proposals:
>>
>> - we do what we do in the current nova database -- we have a deleted
>> column, and we set it to true when we delete the instance.
>>
>> - we have shadow tables and we move delete rows to a shadow table.
>
>
> Both approaches are viable, but as the soft-delete column is widespread, it 
> would be thorny for this new app to use some totally different scheme, unless 
> the notion is that all schemes should move to the audit table approach (which 
> I wouldn’t mind, but it would be a big job). FTR, the audit table approach 
> is usually what I prefer for greenfield development, if all that’s needed is 
> forensic capability at the database inspection level, and not as much 
> active GUI-based “deleted” flags. That is, if you really don’t need to 
> query the history tables very often except when debugging an issue offline. 
> The reason it’s preferable is that those rows are still “deleted” from your 
> main table, and they don’t get in the way of querying. But if you need to 
> refer to these history rows in the context of the application, that means you 
> need to get them mapped in such a way that they behave like the primary rows, 
> which overall is a more difficult approach than just using the soft delete 
> column.
>
> That said, I have a lot of plans to send improvements down the way of the 
> existing approach of “soft delete column” into projects, from the querying 
> POV, so that criteria to filter out soft-deleted rows can be applied in a much 
> more robust fashion (see 
> https://bitbucket.org/zzzeek/sqlalchemy/issue/3225/query-heuristic-inspector-event).
> But this is still more complex and less performant than if the rows are 
> just gone totally, off in a history table somewhere (again, provided you 
> really don’t need to look at those history rows in an application context, 
> otherwise it gets all complicated again).

Interesting. I hadn't seen consistency between the two databases as
trumping doing this less horribly, but it sounds like it's more of a
thing than I thought.

Thanks,
Michael

-- 
Rackspace Australia



Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-24 Thread Alessandro Pilotti
Hi Michael, 

> On 25 Nov 2014, at 01:57, Michael Still  wrote:
> 
> First off, sorry for the slow reply. I was on vacation last week.
> 
> The project priority list was produced as part of the design summit,
> and reflects nova's need to pay down technical debt in order to keep
> meeting our users' needs. So, whilst driver changes are important, they
> don't belong on that etherpad.
> 
> That said, I think the best way to help us keep up with our code
> review requirements is to be an active reviewer. I know we say this a
> lot, but many cores optimize for patches which have already been
> reviewed and +1'ed by a non-core. So... Code review even with a +1
> makes a real difference to us being able to keep up.
> 

Thanks for your reply, we actually do quite a lot of cross reviews, with the
general rule being that every patch produced by one of the Hyper-V team members
needs to be reviewed by at least another two.

The main issue is that reviews get lost at every rebase and keeping track of
this becomes non-trivial when there are a lot of open patches under review,
mostly interdependent. It's not easy to keep people motivated to do this, but
we do our best!

Alessandro


> If you feel a change has been blocking on review for too long and is
> important, then please do raise it in the "Open Discussion" section of
> the nova meeting.
> 
> Michael
> 
> On Sat, Nov 22, 2014 at 2:02 AM, Alessandro Pilotti
>  wrote:
>> Hi,
>> 
>> Not seeing any driver (Hyper-V, VMWare, etc) related priority in this 
>> etherpad
>> worries me a bit.
>> 
>> 
>> My concern is mostly related to the fact that we have in Nova a significant
>> number of driver-related blueprint changes already under review for Kilo and we
>> are already drifting into the old familiar “Nova rebase hell” at the very
>> beginning of the cycle. :-)
>> 
>> The Nova core team is obviously doing everything possible to make everybody
>> happy. I surely have no complaints about the amount of effort put into the
>> reviewing machine, but the total effort required seems simply hopelessly
>> overwhelming for a single centralized team and lower priority features / bugs
>> will suffer.
>> 
>> Looking at the pending reviews count, a significant, non-trivial amount of 
>> the
>> patches is related to the drivers (see stats below) but since driver 
>> blueprints
>> and bugs are very rarely prioritized, I suspect that we might end up with
>> another upstream release with inadequate support for most hypervisors besides
>> the “blessed” libvirt / KVM, resulting in a lot of blueprints and bug fixes
>> postponed to the next release.
>> 
>> The last discussion on this ML [1] on splitting the drivers from Nova had a 
>> lot
>> of consensus, no more than two months ago. I wonder why wasn’t this discussed
>> further, for example at the design summit?
>> 
>> In Hyper-V we averted this crisis by simply focusing on the downstream stable
>> branches (e.g. [2] for Juno), where we reached the quality, stability and
>> feature levels that we wanted [3], leaving the upstream driver code as a 
>> simple
>> “take it or leave it” best effort code that we surely don’t advise any of our
>> users to even bother with. Every single line of code that we produce and 
>> merge
>> downstream is obviously also sent upstream for review and eventually merged
>> there as well, but we don’t necessarily worry anymore about the fact that it
>> takes months for this to happen, even if we still put a lot of effort into 
>> it.
>> 
>> At this stage, since the drivers are a completely partitioned and independent
>> subset of Nova, the real umbilical cord that prevents a driver maintainer 
>> team
>> to simply leave the Nova project and continue on StackForge is the third 
>> party
>> CI system support, which with all its limitations it's still an amazing
>> achievement.
>> In particular third party CIs are extremely important from a hypervisor
>> perspective to make sure that Nova changes don't cause regressions in the
>> drivers (more than the other way around). This means that realistically, for 
>> a
>> driver, leaving Nova and even going back through the Stackforge purgatory is
>> simply not an option, unless there is a highly unrealistic consensus in 
>> still
>> maintaining a voting CI in Nova for what would become an external driver
>> resulting from a schism.
>> 
>> Please consider this just as a constructive discussion for the greater good 
>> of
>> the whole OpenStack community [4] :-)
>> 
>> Thanks,
>> 
>> Alessandro
>> 
>> 
>> Quick stats showing open reviews (please forgive the simplifications):
>> 
>> All Nova (master):  657
>> 
>> All nova/virt:  208
>> 
>> nova/virt/hyperv:   31
>> nova/virt/libvirt:  80
>> nova/virt/vmwareapi:63
>> nova/virt/xenapi:   28
>> 
>> Values have been obtained with the following very basic query, not 
>> considering
>> overlapping patches, unit tests, etc:
>> 
>> gerrymander changes -p openstack/nova $PATH --status open --branch master -m 
>

Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-24 Thread Mike Bayer

> On Nov 24, 2014, at 5:20 PM, Michael Still  wrote:
> 
> Heya,
> 
> Review https://review.openstack.org/#/c/135644/4 proposes the addition
> of a new database for our improved implementation of cells in Nova.
> However, there's an outstanding question about how to handle soft
> delete of rows -- we believe that we need to soft delete for forensic
> purposes.

Every time I talk to people about the soft delete thing, I hear the usual 
refrain “we thought we needed it, but we didn’t and now it’s just overbuilt 
cruft we want to get rid of”.

I’m not saying you don’t have a need here, but you definitely have this need 
and aren’t just following the herd, right?   Soft delete makes things a lot 
less convenient.

> 
> This is a new database, so its our big chance to get this right. So,
> ideas welcome...
> 
> Some initial proposals:
> 
> - we do what we do in the current nova database -- we have a deleted
> column, and we set it to true when we delete the instance.
> 
> - we have shadow tables and we move delete rows to a shadow table.


Both approaches are viable, but as the soft-delete column is widespread, it 
would be thorny for this new app to use some totally different scheme, unless 
the notion is that all schemes should move to the audit table approach (which I 
wouldn’t mind, but it would be a big job). FTR, the audit table approach is 
usually what I prefer for greenfield development, if all that’s needed is 
forensic capability at the database inspection level, and not as much active 
GUI-based “deleted” flags. That is, if you really don’t need to query the 
history tables very often except when debugging an issue offline. The reason 
it’s preferable is that those rows are still “deleted” from your main table, 
and they don’t get in the way of querying. But if you need to refer to these 
history rows in the context of the application, that means you need to get them 
mapped in such a way that they behave like the primary rows, which overall is a 
more difficult approach than just using the soft delete column.

That said, I have a lot of plans to send improvements down the way of the 
existing approach of “soft delete column” into projects, from the querying POV, 
so that criteria to filter out soft-deleted rows can be applied in a much more 
robust fashion (see 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3225/query-heuristic-inspector-event).
But this is still more complex and less performant than if the rows are just 
gone totally, off in a history table somewhere (again, provided you really 
don’t need to look at those history rows in an application context, otherwise 
it gets all complicated again).





Re: [openstack-dev] [Nova] Tracking Kilo priorities

2014-11-24 Thread Michael Still
First off, sorry for the slow reply. I was on vacation last week.

The project priority list was produced as part of the design summit,
and reflects nova's need to pay down technical debt in order to keep
meeting our users' needs. So, whilst driver changes are important, they
don't belong on that etherpad.

That said, I think the best way to help us keep up with our code
review requirements is to be an active reviewer. I know we say this a
lot, but many cores optimize for patches which have already been
reviewed and +1'ed by a non-core. So... Code review even with a +1
makes a real difference to us being able to keep up.

If you feel a change has been blocking on review for too long and is
important, then please do raise it in the "Open Discussion" section of
the nova meeting.

Michael

On Sat, Nov 22, 2014 at 2:02 AM, Alessandro Pilotti
 wrote:
> Hi,
>
> Not seeing any driver (Hyper-V, VMWare, etc) related priority in this etherpad
> worries me a bit.
>
>
> My concern is mostly related to the fact that we have in Nova a significant
> number of driver-related blueprint changes already under review for Kilo and we
> are already drifting into the old familiar “Nova rebase hell” at the very
> beginning of the cycle. :-)
>
> The Nova core team is obviously doing everything possible to make everybody
> happy. I surely have no complaints about the amount of effort put into the
> reviewing machine, but the total effort required seems simply hopelessly
> overwhelming for a single centralized team and lower priority features / bugs
> will suffer.
>
> Looking at the pending reviews count, a significant, non-trivial amount of 
> the
> patches is related to the drivers (see stats below) but since driver 
> blueprints
> and bugs are very rarely prioritized, I suspect that we might end up with
> another upstream release with inadequate support for most hypervisors besides
> the “blessed” libvirt / KVM, resulting in a lot of blueprints and bug fixes
> postponed to the next release.
>
> The last discussion on this ML [1] on splitting the drivers from Nova had a lot
> of consensus, no more than two months ago. I wonder why wasn’t this discussed
> further, for example at the design summit?
>
> In Hyper-V we averted this crisis by simply focusing on the downstream stable
> branches (e.g. [2] for Juno), where we reached the quality, stability and
> feature levels that we wanted [3], leaving the upstream driver code as a 
> simple
> “take it or leave it” best effort code that we surely don’t advise any of our
> users to even bother with. Every single line of code that we produce and merge
> downstream is obviously also sent upstream for review and eventually merged
> there as well, but we don’t necessarily worry anymore about the fact that it
> takes months for this to happen, even if we still put a lot of effort into it.
>
> At this stage, since the drivers are a completely partitioned and independent
> subset of Nova, the real umbilical cord that prevents a driver maintainer team
> to simply leave the Nova project and continue on StackForge is the third party
> CI system support, which with all its limitations it's still an amazing
> achievement.
> In particular third party CIs are extremely important from a hypervisor
> perspective to make sure that Nova changes don't cause regressions in the
> drivers (more than the other way around). This means that realistically, for a
> driver, leaving Nova and even going back through the Stackforge purgatory is
> simply not an option, unless there is a highly unrealistic consensus in 
> still
> maintaining a voting CI in Nova for what would become an external driver
> resulting from a schism.
>
> Please consider this just as a constructive discussion for the greater good of
> the whole OpenStack community [4] :-)
>
> Thanks,
>
> Alessandro
>
>
> Quick stats showing open reviews (please forgive the simplifications):
>
> All Nova (master):  657
>
> All nova/virt:  208
>
> nova/virt/hyperv:   31
> nova/virt/libvirt:  80
> nova/virt/vmwareapi:63
> nova/virt/xenapi:   28
>
> Values have been obtained with the following very basic query, not considering
> overlapping patches, unit tests, etc:
>
> gerrymander changes -p openstack/nova $PATH --status open --branch master -m 
> json | python -c "import sys; import json; print 
> len(json.load(sys.stdin)[0]['table']['content'])"
>
>
> [1] Last Nova drivers split discussion: 
> http://lists.openstack.org/pipermail/openstack-dev/2014-September/044872.html
> [2] Stable Hyper-V downstream Juno branch: 
> https://github.com/cloudbase/nova/tree/2014.1.2-cloudbase-release
> [3] Extensive downstream Hyper-V Tempest tests: 
> http://www.cloudbase.it/openstack-on-hyper-v-release-testing/
> [4] http://whatsthepont.files.wordpress.com/2012/02/20120212-223344.jpg
>
>
>
>
>> On 20 Nov 2014, at 11:17, Michael Still  wrote:
>>
>> Hi,
>>
>> as discussed at the summit, we want to do a better job of tracking the
>> progress of work on ou

[openstack-dev] [Horizon]

2014-11-24 Thread David Lyle
I am pleased to nominate Thai Tran and Cindy Lu to horizon-core.

Both Thai and Cindy have been contributing significant numbers of high
quality reviews during Juno and Kilo cycles. They are consistently among
the top non-core reviewers. They are also responsible for a significant
number of patches to Horizon. Both have a strong understanding of the
Horizon code base and the direction of the project.

Horizon core team members please vote +1 or -1 to the nominations either in
reply or by private communication. Voting will close on Friday unless I
hear from everyone before that.

Thanks,
David


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-24 Thread Kevin L. Mitchell
On Tue, 2014-11-25 at 09:20 +1100, Michael Still wrote:
>  - we do what we do in the current nova database -- we have a deleted
> column, and we set it to true when we delete the instance.

Actually, current nova uses the
oslo.db.sqlalchemy.models.SoftDeleteMixin class, which defines the
columns 'deleted_at' (a DateTime) and 'deleted' (an *integer*).  It also
defines a 'soft_delete()' method, which sets the 'deleted' column to the
row 'id'.  As I understand it, this is to keep from breaking uniqueness
constraints; you factor in 'deleted' in your uniqueness constraint, and
you can have as many identical deleted records as you want…
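
A sketch of that uniqueness trick (an illustrative model, not nova's
actual schema):

from oslo.db.sqlalchemy import models
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

BASE = declarative_base()

class Instance(BASE, models.SoftDeleteMixin, models.ModelBase):
    __tablename__ = 'instances'
    # 'deleted' participates in the constraint, so any number of deleted
    # rows can coexist with a single live row where deleted == 0.
    __table_args__ = (UniqueConstraint('hostname', 'deleted'),)
    id = Column(Integer, primary_key=True)
    hostname = Column(String(255))

# instance.soft_delete(session) then sets deleted = instance.id.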

>  - we have shadow tables and we move delete rows to a shadow table.
> 
>  - something else super clever I haven't thought of.

Well, one thought might be to create a single 'audit' table with a
couple of columns—a timestamp, say, and some sort of description of the
change, perhaps as a JSON object.  On a 'delete' operation, you could
store the values of the row into this audit table.

From an operator's standpoint, this could provide the required auditing
and perhaps even a limited DR solution, while centralizing the data you
need to monitor in a single location, which makes it easier to trim the
data at intervals as needed.  While I've proposed this as a soft-delete
solution, it would also provide the ability to record other changes to
objects; one could even include a column to record who performed the
change.  And of course I've suggested this as a DB table, but we could
also consider the merits of ditching the table and doing the same thing
as some sort of notification through the notifications system…
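
A minimal sketch of that single-audit-table idea (table and column names
are illustrative, not an existing nova schema):

import datetime
import json

from sqlalchemy import Column, DateTime, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class AuditRecord(Base):
    __tablename__ = 'audit'
    id = Column(Integer, primary_key=True)
    created_at = Column(DateTime, default=datetime.datetime.utcnow)
    table_name = Column(String(255))  # which table the row came from
    operation = Column(String(16))    # e.g. 'delete'
    actor = Column(String(255))       # who performed the change
    payload = Column(Text)            # JSON snapshot of the deleted row

def audit_delete(session, obj, actor=None):
    # Snapshot the row into the audit table, then hard-delete it.
    row = dict((c.name, str(getattr(obj, c.name)))
               for c in obj.__table__.columns)
    session.add(AuditRecord(table_name=obj.__table__.name,
                            operation='delete', actor=actor,
                            payload=json.dumps(row)))
    session.delete(obj)
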
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-24 Thread Carl Baldwin
Don,

Could the spec linked to your BP be moved to the specs repository?
I'm hesitant to start reading it as a google doc when I know I'm going
to want to make comments and ask questions.

Carl

On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn  wrote:
> If this shows up twice, sorry for the repeat:
>
> Armando, Carl:
> During the Summit, Armando and I had a very quick conversation concerning a
> blueprint that I submitted,
> https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration and
> Armando had mentioned the possibility of getting together a sub-group tasked
> with DHCP Neutron concerns. I have talked with the Infoblox folks (see
> https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and everyone
> seems to be in agreement that there is synergy, especially concerning the
> development of a relay and potentially looking into how DHCP is handled. In
> addition, during the Friday meetup session on DHCP that I gave, there seemed
> to be some general interest from some of the operators as well.
>
> So what would be the formality in going forward to start a sub-group and
> getting this underway?
>
> DeKehn
>
>
>
>



[openstack-dev] [Octavia] Meeting on Wednesday, Nov. 26th cancelled

2014-11-24 Thread Stephen Balukoff
Hi folks!

Since:

* The agenda contains no earth-shattering updates (ie. just going over
review progress)
* Nearly all the people working on Octavia are US-based
* The meeting is in the afternoon for most folks
* Wednesday is the day before a long holiday weekend and many employers
give their employees a half day off on this day

... we are cancelling Wednesday's Octavia meeting. See y'all again on Dec.
3rd at 20:00 UTC in #openstack-meeting-alt (and keep working on those
gerrit reviews!)

Thanks,
Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-11-24 Thread Brandon Logan
My impression is that the statuses of each entity will be shown on a
detailed info request of a loadbalancer.  The root level objects would
not have any statuses.  For example a user makes a GET request
to /loadbalancers/{lb_id} and the status of every child of that load
balancer is shown in a "status_tree" JSON object.  For example:

{"name": "loadbalancer1",
 "status_tree":
  {"listeners":
    [{"name": "listener1", "operating_status": "ACTIVE",
      "default_pool":
        {"name": "pool1", "status": "ACTIVE",
         "members":
           [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}

Sam, correct me if I am wrong.

I generally like this idea.  I do have a few reservations with this:

1) Creating and updating a load balancer requires a full tree
configuration with the current extension/plugin logic in neutron.  Since
updates will require a full tree, it means the user would have to know
the full tree configuration just to simply update a name.  Solving this
would require nested child resources in the URL, which the current
neutron extension/plugin does not allow.  Maybe the new one will.

2) The status_tree can get quite large depending on the number of
listeners and pools being used.  This is a minor issue really, and it will
make it easier for horizon (or any other UI tool) to show statuses.

Thanks,
Brandon

On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
> Hi Samuel,
> 
> 
> We've actually been avoiding having a deeper discussion about status
> in Neutron LBaaS since this can get pretty hairy as the back-end
> implementations get more complicated. I suspect managing that is
> probably one of the bigger reasons we have disagreements around object
> sharing. Perhaps it's time we discussed representing state
> "correctly" (whatever that means), instead of a round-a-bout
> discussion about object sharing (which, I think, is really just
> avoiding this issue)?
> 
> 
> Do you have a proposal about how status should be represented
> (possibly including a description of the state machine) if we collapse
> everything down to be logical objects except the loadbalancer object?
> (From what you're proposing, I suspect it might be too general to, for
> example, represent the UP/DOWN status of members of a given pool.)
> 
> 
> Also, from an haproxy perspective, sharing pools within a single
> listener actually isn't a problem. That is to say, having the same
> L7Policy pointing at the same pool is OK, so I personally don't have a
> problem allowing sharing of objects within the scope of parent
> objects. What do the rest of y'all think?
> 
> 
> Stephen
> 
> 
> 
> On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici
>  wrote:
> Hi Stephen,
> 
>  
> 
> 1.  The issue is that if we do 1:1 and allow status/state
> to proliferate throughout all objects, we will then have an
> issue fixing it later; hence, even if we do not do sharing, I
> would still like to have all objects besides the LB treated as
> logical.
>
> 2.  The 3rd use case below will not be reasonable without
> pool sharing between different policies. Specifying different
> pools which are the same for each policy makes it a non-starter
> to me.
> 
>  
> 
> -Sam.
> 
>  
> 
>  
> 
>  
> 
> From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
> Sent: Friday, November 21, 2014 10:26 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects
> in LBaaS - Use Cases that led us to adopt this.
> 
>  
> 
> I think the idea was to implement 1:1 initially to reduce the
> amount of code and operational complexity we'd have to deal
> with in initial revisions of LBaaS v2. Many to many can be
> simulated in this scenario, though it does shift the burden of
> maintenance to the end user. It does greatly simplify the
> initial code for v2, in any case, though.
> 
> 
>  
> 
> 
> Did we ever agree to allowing listeners to be shared among
> load balancers?  I think that still might be a N:1
> relationship even in our latest models.
> 
>  
> 
> 
> There's also the difficulty introduced by supporting different
> flavors:  Since flavors are essentially an association between
> a load balancer object and a driver (with parameters), once
> flavors are introduced, any sub-objects of a given load
> balancer objects must necessarily be purely logical until they
> are associated with a load balancer.  I know there was talk of
> forcing these objects to be sub-objects of a load balancer
> which can't be accessed ind

Re: [openstack-dev] [api] APIImpact flag for specs

2014-11-24 Thread Everett Toews
I’ve also added APIImpact flag info to the Git Commit Messages page [1].

Everett

[1] 
https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references


On Nov 19, 2014, at 5:23 PM, Everett Toews wrote:


On Nov 13, 2014, at 2:06 PM, Everett Toews wrote:

On Nov 12, 2014, at 10:45 PM, Angus Salkeld wrote:

On Sat, Nov 1, 2014 at 6:45 AM, Everett Toews wrote:
Hi All,

Chris Yeoh started the use of an APIImpact flag in commit messages for specs in 
Nova. It adds a requirement for an APIImpact flag in the commit message for a 
proposed spec if it proposes changes to the REST API. This will make it much 
easier for people such as the API Working Group who want to review API changes 
across OpenStack to find and review proposed API changes.

For example, specifications with the APIImpact flag can be found with the 
following query:

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z
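
A spec commit message using the flag simply carries the token on its own
line, e.g. (an illustrative message, not a real spec):

    Add spec for widget frobbing API

    This spec proposes a new REST resource for frobbing widgets.

    APIImpact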

Chris also proposed a similar change to many other projects and I did the rest. 
Here’s the complete list if you’d like to review them.

Barbican: https://review.openstack.org/131617
Ceilometer: https://review.openstack.org/131618
Cinder: https://review.openstack.org/131620
Designate: https://review.openstack.org/131621
Glance: https://review.openstack.org/131622
Heat: https://review.openstack.org/132338
Ironic: https://review.openstack.org/132340
Keystone: https://review.openstack.org/132303
Neutron: https://review.openstack.org/131623
Nova: https://review.openstack.org/#/c/129757
Sahara: https://review.openstack.org/132341
Swift: https://review.openstack.org/132342
Trove: https://review.openstack.org/132346
Zaqar: https://review.openstack.org/132348

There are even more projects in stackforge that could use a similar change. If 
you know of a project in stackforge that would benefit from using an APIImpact 
flag in its specs, please propose the change and let us know here.


I seem to have missed this, so I'll place my review comment here too.

I like the general idea of getting a more consistent/better API. But is 
reviewing every spec across all projects just going to introduce a new 
non-scalable bottleneck into our workflow (given the increasing move away from 
this approach: moving functional tests to projects, getting projects to do more 
of their own docs, etc.)? Wouldn't a better approach be to have an API liaison 
in each project that can keep track of new guidelines and catch potential 
problems?

I see a new section has been added here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons

Isn't that enough?

I replied in the review. We’ll continue the discussion there.

The cross project liaisons are a big help, but the APIImpact flag lets the API 
WG automate discovery of API-changing specs. It's just one more tool in the box 
to help us find changes that impact the API.

Note that the patch says nothing about requiring a review from someone 
associated with the API WG. If you add the APIImpact flag and nobody comes 
along to review it, continue on as normal.

The API WG is not intended to be a gatekeeper of every change to every API. As 
you say, that doesn't scale. We don't want to be a bottleneck. However, tools 
such as the APIImpact flag can help us be more effective.

(Angus suggested I give my review comment a bit more visibility. I agree :)

Everett




[openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-24 Thread Michael Still
Heya,

Review https://review.openstack.org/#/c/135644/4 proposes the addition
of a new database for our improved implementation of cells in Nova.
However, there's an outstanding question about how to handle soft
delete of rows -- we believe that we need to soft delete for forensic
purposes.

This is a new database, so its our big chance to get this right. So,
ideas welcome...

Some initial proposals:

 - we do what we do in the current nova database -- we have a deleted
column, and we set it to true when we delete the instance.

 - we have shadow tables and we move delete rows to a shadow table.

 - something else super clever I haven't thought of.

Ideas?

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread Angus Salkeld
On Tue, Nov 25, 2014 at 7:06 AM, Jay Pipes  wrote:

> On 11/24/2014 03:11 PM, Joshua Harlow wrote:
>
>> Dan Smith wrote:
>>
>>>> 3. vish brought up one drawback of versioned objects: the difficulty in
>>>> cherry-picking commits for stable branches - is this a show stopper?

>>>
>>> After some discussion with some of the interested parties, we're
>>> planning to add a third .z element to the version numbers and use that
>>> to handle backports in the same way that we do for RPC:
>>>
>>> https://review.openstack.org/#/c/134623/
>>>
>>>  Next steps:
 - Jay suggested making a second spec that would lay out what it would
 look like if we used google protocol buffers.
 - Dan: do you need some help in making this happen, do we need some
 volunteers?

>>>
>>> I'm not planning to look into this, especially since we discussed it a
>>> couple years ago when deciding to do what we're currently doing. If
>>> someone else does, creates a thing that is demonstrably more useful than
>>> what we have, and provides a migration plan, then cool. Otherwise, I'm
>>> not really planning to stop what I'm doing at the moment.
>>>
>>>  - Are there any other concrete things we can do to get this usable by
 other projects in a timely manner?

>>>
>>> To be honest, since the summit, I've not done anything with the current
>>> oslo spec, given the potential for doing something different that was
>>> raised. I know that cinder folks (at least) are planning to start
>>> copying code into their tree to get moving.
>>>
>>> I think we need a decision to either (a) dump what we've got into the
>>> proposed library (or incubator) and plan to move forward incrementally
>>> or (b) each continue doing our own thing(s) in our own trees while we
>>> wait for someone to create something based on GPB that does what we want.
>>>
>>
>> I'd prefer (a); although I hope there is an owner/lead for this library
>> (dan?) and it's not just dumped on the oslo folks as that won't work out
>> so well I think. It'd be nice if said owner could also look into (b) but
>> that's at their own (or other library supporter) time I suppose (I
>> personally think (b) would probably allow for a larger community of
>> folks to get involved in this library, would potentially reduce the
>> amount of custom/overlapping code and other similar benefits...).
>>
>
> I gave some comments at the very end of the summit session on this, and I
> want to be clear about something. I definitely like GPB, and there's
> definite overlap with some things that GPB does and things that
> nova.objects does.
>
> That said, I don't think it's wise to make oslo-versionedobjects be a
> totally new thing. I think we should use nova.objects as the base of a new
> oslo-versionedobjects library, and we should evolve oslo-versionedobjects
> slowly over time, eventually allowing for nova, ironic, and whomever else
> is currently using nova/objects, to align with an Oslo library vision for
> this.
>
> So, in short, I also think a) is the appropriate path to take.
>

Yeah, my concern with "(b)" is the time it will take for other projects to
get to use it, especially since no one is jumping to take the work on.

-Angus


>
> Best,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread melanie witt
On Nov 24, 2014, at 13:06, Jay Pipes  wrote:

> That said, I don't think it's wise to make oslo-versionedobjects be a totally 
> new thing. I think we should use nova.objects as the base of a new 
> oslo-versionedobjects library, and we should evolve oslo-versionedobjects 
> slowly over time, eventually allowing for nova, ironic, and whomever else is 
> currently using nova/objects, to align with an Oslo library vision for this.
> 
> So, in short, I also think a) is the appropriate path to take.

+1 I'd like to see oslo-versionedobjects start out with nova.objects as the 
implementation, with the ability to add support for protobuf later.

melanie (melwitt)






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly Status Update

2014-11-24 Thread Jim Rollenhagen
Hi all,

Ironic has decided to email a weekly status report for all subteams.
This is the inaugural issue of that. :)

This is and will be a copy and paste from the Ironic whiteboard.[0]

Testing (adam_g)
  (deva)
  - make Ironic voting -- needs to be rebased again, but has been approved

Bugs (dtantsur)
  (As of Fri Nov 21 15:00 UTC)
  Open: 101 (+1).
  3 new (-3), 24 in progress (-5), 0 critical, 10 high and 3 incomplete

Drivers:
  IPA (jroll/JayF/JoshNang)
(JayF)
- Agent tempest jobs now passing! Please do not merge something that
  has real failures in agent jobs. (For IPA or Ironic)
- Merge request to make IPA tempest job voting is here:
  https://review.openstack.org/#/c/134436/ currently pending a fix for
  https://bugs.launchpad.net/openstack-ci/+bug/1393099 to be merged
  (to prevent more Ironic rechecks)
- Lots of pending changes merged now that we have CI; including oslo
  sync (thanks GheRivero), coreos_oem_inject.py being made in line with
  OpenStack standards (flake8+requirements), and the global requirements
  sync

  DRAC (lucas)

  iLO (wanyen)
- Submitted iLO secure boot and ManagementInterface specs
  https://review.openstack.org/#/c/135845/
  https://review.openstack.org/#/c/135228/
- Addressing several spec review comments:
iLO hardware property discovery and ManagementInterface, UEFI-iSO
automated creation, firmware update managementinterface, and
iLO hardware sensor design specs
  https://review.openstack.org/#/c/100951/
  https://review.openstack.org/#/c/109088/
  https://review.openstack.org/#/c/129529/
  https://review.openstack.org/#/c/127378/
  https://review.openstack.org/#/c/130074/
  https://review.openstack.org/#/c/100842/
  https://review.openstack.org/#/c/134022/
- Reviewed a few design specs including Zapping of node and hardware 
capability specs

Oslo (GheRivero)
- Sync pending from oslo.incubator
  - Config file generation https://review.openstack.org/#/c/128005/4
  - Refactoring Policy https://review.openstack.org/#/c/126265/
  - Full sync - Waiting for the two reviews above 
https://review.openstack.org/#/c/128051/
- Updated oslo.* releases this week. Wait until stable branches are ready. 
Reviews in progress
  - oslo.utils will be the one with the most impact on ironic. New utils included:
* get_my_ip
* uuidutils
* is_int_like
* is_valid_ip_*

// jim

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Cleaning up spec review queue.

2014-11-24 Thread Chris K
Thank you all for the replies to this thread. I have now set all of the
listed specs to an abandoned state. Please note that any review can be
recovered quickly by clicking the "Restore Change" button. If one of the
listed specs is restored, please be sure to re-target it to the Kilo cycle.

Thank you,
Chris Krelle
Nobodycam

On Wed, Nov 19, 2014 at 3:19 AM, Imre Farkas  wrote:

> On 11/19/2014 12:07 PM, Dmitry Tantsur wrote:
>
>> On 11/18/2014 06:13 PM, Chris K wrote:
>>
>>> Hi all,
>>>
>>> In an effort to keep the Ironic specs review queue as up to date as
>>> possible, I have identified several specs that were proposed in the Juno
>>> cycle and have not been updated to reflect the changes to the current
>>> Kilo cycle.
>>>
>>> I would like to set a deadline to either update them to reflect the Kilo
>>> cycle or abandon them if they are no longer relevant.
>>> If there are no objections I will abandon any specs on the list below
>>> that have not been updated to reflect the Kilo cycle after the end of
>>> the next Ironic meeting (Nov. 24th 2014).
>>>
>>> Below is the list of specs I have identified that would be affected:
>>> https://review.openstack.org/#/c/107344 - *Generic Hardware Discovery
>>> Bits*
>>>
>> Killed it with fire :D
>>
>>  https://review.openstack.org/#/c/102557 - *Driver for NetApp storage
>>> arrays*
>>> https://review.openstack.org/#/c/108324 - *DRAC hardware discovery*
>>>
>> Imre, are you going to work on it?
>>
>
> I think it's replaced by Lucas' proposal:
> https://review.openstack.org/#/c/125920
> I will discuss it with him and abandon one of them.
>
> Imre
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread Jay Pipes

On 11/24/2014 03:11 PM, Joshua Harlow wrote:

Dan Smith wrote:

3. vish brought up one drawback of versioned objects: the difficulty in
cherry-picking commits for stable branches - Is this a show stopper?


After some discussion with some of the interested parties, we're
planning to add a third .z element to the version numbers and use that
to handle backports in the same way that we do for RPC:

https://review.openstack.org/#/c/134623/


Next steps:
- Jay suggested making a second spec that would lay out what it would
look like if we used google protocol buffers.
- Dan: do you need some help in making this happen, do we need some
volunteers?


I'm not planning to look into this, especially since we discussed it a
couple years ago when deciding to do what we're currently doing. If
someone else does, creates a thing that is demonstrably more useful than
what we have, and provides a migration plan, then cool. Otherwise, I'm
not really planning to stop what I'm doing at the moment.


- Are there any other concrete things we can do to get this usable by
other projects in a timely manner?


To be honest, since the summit, I've not done anything with the current
oslo spec, given the potential for doing something different that was
raised. I know that cinder folks (at least) are planning to start
copying code into their tree to get moving.

I think we need a decision to either (a) dump what we've got into the
proposed library (or incubator) and plan to move forward incrementally
or (b) each continue doing our own thing(s) in our own trees while we
wait for someone to create something based on GPB that does what we want.


I'd prefer (a); although I hope there is an owner/lead for this library
(dan?) and it's not just dumped on the oslo folks as that won't work out
so well I think. It'd be nice if said owner could also look into (b) but
that's at their own (or other library supporter) time I suppose (I
personally think (b) would probably allow for a larger community of
folks to get involved in this library, would potentially reduce the
amount of custom/overlapping code and other similar benefits...).


I gave some comments at the very end of the summit session on this, and 
I want to be clear about something. I definitely like GPB, and there's 
definite overlap with some things that GPB does and things that 
nova.objects does.


That said, I don't think it's wise to make oslo-versionedobjects be a 
totally new thing. I think we should use nova.objects as the base of a 
new oslo-versionedobjects library, and we should evolve 
oslo-versionedobjects slowly over time, eventually allowing for nova, 
ironic, and whomever else is currently using nova/objects, to align with 
an Oslo library vision for this.


So, in short, I also think a) is the appropriate path to take.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-24 Thread Clay Gerrard
replication logs
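
(As a rough sketch -- not official tooling -- the recon endpoints quoted below
can also be polled for a non-zero failure count. Note that only the account and
container replicators expose replication_stats; the object replicator reports
just its last pass time:)

    import json
    import urllib2

    def replication_failures(host, port=6002, kind='account'):
        # kind is 'account' or 'container'; both expose replication_stats.
        url = 'http://%s:%d/recon/replication/%s' % (host, port, kind)
        data = json.load(urllib2.urlopen(url))
        return data['replication_stats']['failure']

    for kind in ('account', 'container'):
        if replication_failures('192.168.1.11', kind=kind) > 0:
            print('%s replication reported failures on its last pass' % kind)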

On Thu, Nov 20, 2014 at 9:32 PM, Matsuda, Kenichiro <
matsuda_keni...@jp.fujitsu.com> wrote:

> Hi,
>
> Thank you for the info.
>
> I was able to get replication info easily via the swift-recon API.
> But I wasn't able to judge whether there were any failures from the recon
> info of the object-replicator.
>
> Could you please advise me on a way to get the object-replicator's failure
> info?
>
> [replication info from recon]
> * account
>
> --
> # curl http://192.168.1.11:6002/recon/replication/account | python
> -mjson.tool
> {
> "replication_last": 1416354262.7157061,
> "replication_stats": {
> "attempted": 20,
> "diff": 0,
> "diff_capped": 0,
> "empty": 0,
> "failure": 20,
> "hashmatch": 0,
> "no_change": 40,
> "remote_merge": 0,
> "remove": 0,
> "rsync": 0,
> "start": 1416354240.9761429,
> "success": 40,
> "ts_repl": 0
> },
> "replication_time": 21.739563226699829
> }
>
> --
>
> * container
>
> --
> # curl http://192.168.1.11:6002/recon/replication/container | python
> -mjson.tool
> {
> "replication_last": 1416353436.9448521,
> "replication_stats": {
> "attempted": 13346,
> "diff": 0,
> "diff_capped": 0,
> "empty": 0,
> "failure": 870,
> "hashmatch": 0,
> "no_change": 1908,
> "remote_merge": 0,
> "remove": 0,
> "rsync": 0,
> "start": 1416349377.3627851,
> "success": 1908,
> "ts_repl": 0
> },
> "replication_time": 4059.5820670127869
> }
>
> --
>
> * object
>
> --
> # curl http://192.168.1.11:6002/recon/replication | python -mjson.tool
> {
> "object_replication_last": 1416334368.60865,
> "object_replication_time": 2316.5563162644703
> }
> # curl http://192.168.1.11:6002/recon/replication/object | python
> -mjson.tool
> {
> "object_replication_last": 1416334368.60865,
> "object_replication_time": 2316.5563162644703
> }
>
> --
>
> Best Regards,
> Kenichiro Matsuda.
>
>
> From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
> Sent: Friday, November 21, 2014 4:22 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [swift] a way of checking replicate
> completion on swift cluster
>
> You might check if the swift-recon tool has the data you're looking for.
> It can report the last completed replication pass time across nodes in the
> ring.
>
> On Thu, Nov 20, 2014 at 1:28 AM, Matsuda, Kenichiro <
> matsuda_keni...@jp.fujitsu.com> wrote:
> Hi,
>
> I would like to know about a way of checking replication completion on a
> swift cluster.
> (e.g. after rebalanced Ring)
>
> I found the way of using swift-dispersion-report from the Administrator's
> Guide.
> But this way is not enough, because swift-dispersion-report can't check
> replication completion for data that was not made by
> swift-dispersion-populate.
>
> And also, I found the way of using the replicator's logs from a Q&A.
> But I would like an easier way, because checking the logs below is very
> heavy:
>
>   (account/container/object)-replicator on all storage nodes in the swift cluster
>
> Could you please advise me on it?
>
> Findings:
>   Administrator's Guide  Cluster Health
>
> http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
>   how to check replicator work complete
>
> https://ask.openstack.org/en/question/18654/how-to-check-replicator-work-complete/
>
> Best Regards,
> Kenichiro Matsuda.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-11-24 Thread Stephen Balukoff
Hi Samuel,

We've actually been avoiding having a deeper discussion about status in
Neutron LBaaS since this can get pretty hairy as the back-end
implementations get more complicated. I suspect managing that is probably
one of the bigger reasons we have disagreements around object sharing.
Perhaps it's time we discussed representing state "correctly" (whatever
that means), instead of a roundabout discussion about object sharing
(which, I think, is really just avoiding this issue)?

Do you have a proposal about how status should be represented (possibly
including a description of the state machine) if we collapse everything
down to be logical objects except the loadbalancer object? (From what
you're proposing, I suspect it might be too general to, for example,
represent the UP/DOWN status of members of a given pool.)

Also, from an haproxy perspective, sharing pools within a single listener
actually isn't a problem. That is to say, having the same L7Policy pointing
at the same pool is OK, so I personally don't have a problem allowing
sharing of objects within the scope of parent objects. What do the rest of
y'all think?

Stephen


On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici 
wrote:

>  Hi Stephen,
>
>
>
> 1.   The issue is that if we do 1:1 and allow status/state to
> proliferate throughout all objects, we will then have an issue to fix
> later; hence even if we do not do sharing, I would still like to have all
> objects besides LB be treated as logical.
>
> 2.   The 3rd use case below will not be reasonable without pool
> sharing between different policies. Specifying different pools which are
> the same for each policy makes it a non-starter to me.
>
>
>
> -Sam.
>
>
>
>
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> *Sent:* Friday, November 21, 2014 10:26 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
>
>
> I think the idea was to implement 1:1 initially to reduce the amount of
> code and operational complexity we'd have to deal with in initial revisions
> of LBaaS v2. Many to many can be simulated in this scenario, though it does
> shift the burden of maintenance to the end user. It does greatly simplify
> the initial code for v2, in any case, though.
>
>
>
> Did we ever agree to allowing listeners to be shared among load
> balancers?  I think that still might be a N:1 relationship even in our
> latest models.
>
>
>
> There's also the difficulty introduced by supporting different flavors:
>  Since flavors are essentially an association between a load balancer
> object and a driver (with parameters), once flavors are introduced, any
> sub-objects of a given load balancer objects must necessarily be purely
> logical until they are associated with a load balancer.  I know there was
> talk of forcing these objects to be sub-objects of a load balancer which
> can't be accessed independently of the load balancer (which would have much
> the same effect as what you discuss: State / status only make sense once
> logical objects have an instantiation somewhere.) However, the currently
> proposed API treats most objects as root objects, which breaks this
> paradigm.
>
>
>
> How we handle status and updates once there's an instantiation of these
> logical objects is where we start getting into real complexity.
>
>
>
> It seems to me there's a lot of complexity introduced when we allow a lot
> of many to many relationships without a whole lot of benefit in real-world
> deployment scenarios. In most cases, objects are not going to be shared,
> and in those cases with sufficiently complicated deployments in which
> shared objects could be used, the user is likely to be sophisticated enough
> and skilled enough to manage updating what are essentially "copies" of
> objects, and would likely have an opinion about how individual failures
> should be handled which wouldn't necessarily coincide with what we
> developers of the system would assume. That is to say, allowing too many
> many to many relationships feels like a solution to a problem that doesn't
> really exist, and introduces a lot of unnecessary complexity.
>
>
>
> In any case, though, I feel like we should walk before we run:
> Implementing 1:1 initially is a good idea to get us rolling. Whether we
> then implement 1:N or M:N after that is another question entirely. But in
> any case, it seems like a bad idea to try to start with M:N.
>
>
>
> Stephen
>
>
>
>
>
> On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici 
> wrote:
>
> Hi,
>
> Per discussion I had at OpenStack Summit/Paris with Brandon and Doug, I
> would like to remind everyone why we choose to follow a model where pools
> and listeners are shared (many to many relationships).
>
> Use Cases:
> 1. The same application is being exposed via different LB objects.
> For example: users coming from the internal "private" organization
> n

Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Joshua Harlow

Doug Hellmann wrote:

On Nov 24, 2014, at 12:57 PM, Mike Bayer  wrote:


On Nov 24, 2014, at 12:40 PM, Doug Hellmann  wrote:


This is a good point. I’m not sure we can say “we’ll only use explicit/implicit 
async in certain cases" because most of our apps actually mix the cases. We 
have WSGI apps that send RPC messages and we have other apps that receive RPC 
messages and operate on the database. Can we mix explicit and implicit operating 
models, or are we going to have to pick one way? If we have to pick one, the 
implicit model we’re currently using seems more compatible with all of the various 
libraries and services we depend on, but maybe I’m wrong?

IMHO, in the ideal case, a single method shouldn’t be mixing calls to a set of 
database objects as well as calls to RPC APIs at the same time, there should be 
some kind of method boundary to cross.   There’s a lot of ways to achieve that.


The database calls are inside the method invoked through RPC. System 1 sends an RPC 
message (call or cast) to system 2 which receives that message and then does 
something with the database. Frequently “system 1” is an API layer service (mixing 
WSGI and RPC) and "system 2” is something like the conductor (mixing RPC and DB 
access).


What is really needed is some way that code can switch between explicit yields 
and implicit IO on a per-function basis.   Like a decorator for one or the 
other.

The approach that Twisted takes of just using thread pools for those IO-bound 
elements that aren’t compatible with explicit yields is one way to do this. 
This might be the best way to go, if there are in fact issues with mixing in 
implicit async systems like eventlet.  I can imagine, vaguely, that the 
eventlet approach of monkey patching might get in the way of things in this 
more complicated setup.

Part of what makes this confusing for me is that there’s a lack of clarity over 
what benefits we’re trying to get from the async work.  If the idea is, the GIL 
is evil so we need to ban the use of all threads, and therefore must use defer 
for all IO, then that includes database IO which means we theoretically benefit 
from eventlet monkeypatching  - in the absence of truly async DBAPIs, this is 
the only way to have deferrable database IO.

If the idea instead is, the code we write that deals with messaging would be 
easier to produce, organize, and understand given an asyncio style approach, 
but otherwise we aren’t terribly concerned what highly sequential code like 
database code has to do, then a thread pool may be fine.



A lot of the motivation behind the explicit async changes started as a way to 
drop our dependency on eventlet because we saw it as blocking our move to 
Python 3. It is also true that a lot of people don’t like that eventlet 
monkeypatches system libraries, frequently inconsistently or incorrectly.

Apparently the state of python 3 support for eventlet is a little better than 
it was when we started talking about this a few years ago, but the 
monkeypatching is somewhat broken. lifeless suggested trying to fix the 
monkeypatching, which makes sense. At the summit I think we agreed to continue 
down the path of supporting both approaches. The issues you’ve raised with 
using ORMs (or indeed, any IO-based libraries that don’t support explicit 
async) make me think we should reconsider that discussion with the additional 
information that didn’t come up in the summit conversation.



I think victor is proposing fixes here recently,

https://lists.secondlife.com/pipermail/eventletdev/2014-November/001195.html

So that seems to be ongoing to fix up that support (the eventlet 
community is smaller and takes more time to accept pull requests and 
such, from what I've seen, but this is just how it works).



Doug




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Doug Hellmann

On Nov 24, 2014, at 12:57 PM, Mike Bayer  wrote:

> 
>> On Nov 24, 2014, at 12:40 PM, Doug Hellmann  wrote:
>> 
>> 
>> This is a good point. I’m not sure we can say “we’ll only use 
>> explicit/implicit async in certain cases" because most of our apps actually 
>> mix the cases. We have WSGI apps that send RPC messages and we have other 
>> apps that receive RPC messages and operate on the database. Can we mix 
>> explicit and implicit operating models, or are we going to have to pick one 
>> way? If we have to pick one, the implicit model we’re currently using seems 
>> more compatible with all of the various libraries and services we depend on, 
>> but maybe I’m wrong?
> 
> IMHO, in the ideal case, a single method shouldn’t be mixing calls to a set 
> of database objects as well as calls to RPC APIs at the same time, there 
> should be some kind of method boundary to cross.   There’s a lot of ways to 
> achieve that.

The database calls are inside the method invoked through RPC. System 1 sends an 
RPC message (call or cast) to system 2 which receives that message and then 
does something with the database. Frequently “system 1” is an API layer service 
(mixing WSGI and RPC) and "system 2” is something like the conductor (mixing 
RPC and DB access).

> 
> What is really needed is some way that code can switch between explicit 
> yields and implicit IO on a per-function basis.   Like a decorator for one or 
> the other.
> 
> The approach that Twisted takes of just using thread pools for those IO-bound 
> elements that aren’t compatible with explicit yields is one way to do this.   
>   This might be the best way to go, if there are in fact issues with mixing 
> in implicit async systems like eventlet.  I can imagine, vaguely, that the 
> eventlet approach of monkey patching might get in the way of things in this 
> more complicated setup.
> 
> Part of what makes this confusing for me is that there’s a lack of clarity 
> over what benefits we’re trying to get from the async work.  If the idea is, 
> the GIL is evil so we need to ban the use of all threads, and therefore must 
> use defer for all IO, then that includes database IO which means we 
> theoretically benefit from eventlet monkeypatching  - in the absence of truly 
> async DBAPIs, this is the only way to have deferrable database IO.
> 
> If the idea instead is, the code we write that deals with messaging would be 
> easier to produce, organize, and understand given an asyncio style approach, 
> but otherwise we aren’t terribly concerned what highly sequential code like 
> database code has to do, then a thread pool may be fine.


A lot of the motivation behind the explicit async changes started as a way to 
drop our dependency on eventlet because we saw it as blocking our move to 
Python 3. It is also true that a lot of people don’t like that eventlet 
monkeypatches system libraries, frequently inconsistently or incorrectly.

Apparently the state of python 3 support for eventlet is a little better than 
it was when we started talking about this a few years ago, but the 
monkeypatching is somewhat broken. lifeless suggested trying to fix the 
monkeypatching, which makes sense. At the summit I think we agreed to continue 
down the path of supporting both approaches. The issues you’ve raised with 
using ORMs (or indeed, any IO-based libraries that don’t support explicit 
async) make me think we should reconsider that discussion with the additional 
information that didn’t come up in the summit conversation.

Doug

> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] deleting the pylint test job

2014-11-24 Thread Michael Still
This job was always experimental IIRC, and the original author hasn't
been around in a while. I agree we should remove it.

Michael

On Tue, Nov 25, 2014 at 4:52 AM, Sean Dague  wrote:
> The pylint test job has been broken for weeks, no one seemed to care.
> While waiting for other tests to return today I looked into it and
> figured out the fix.
>
> However, because of nova objects pylint is progressively less and less
> useful. So the fact that no one else looked at it means that people
> didn't seem to care that it was provably broken. I think it's better
> that we just delete the jobs and save a node on every nova patch instead.
>
> Project Config Proposed here - https://review.openstack.org/#/c/136846/
>
> If you -1 that you own fixing it, and making nova objects patches
> sensible in pylint.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread Joshua Harlow

Dan Smith wrote:

3. vish brought up one drawback of versioned objects: the difficulty in
cherry-picking commits for stable branches - Is this a show stopper?


After some discussion with some of the interested parties, we're
planning to add a third .z element to the version numbers and use that
to handle backports in the same way that we do for RPC:

https://review.openstack.org/#/c/134623/


Next steps:
- Jay suggested making a second spec that would lay out what it would
look like if we used google protocol buffers.
- Dan: do you need some help in making this happen, do we need some
volunteers?


I'm not planning to look into this, especially since we discussed it a
couple years ago when deciding to do what we're currently doing. If
someone else does, creates a thing that is demonstrably more useful than
what we have, and provides a migration plan, then cool. Otherwise, I'm
not really planning to stop what I'm doing at the moment.


- Are there any other concrete things we can do to get this usable by
other projects in a timely manner?


To be honest, since the summit, I've not done anything with the current
oslo spec, given the potential for doing something different that was
raised. I know that cinder folks (at least) are planning to start
copying code into their tree to get moving.

I think we need a decision to either (a) dump what we've got into the
proposed library (or incubator) and plan to move forward incrementally
or (b) each continue doing our own thing(s) in our own trees while we
wait for someone to create something based on GPB that does what we want.



I'd prefer (a); although I hope there is an owner/lead for this library 
(dan?) and it's not just dumped on the oslo folks as that won't work out 
so well I think. It'd be nice if said owner could also look into (b) but 
that's at their own (or other library supporter) time I suppose (I 
personally think (b) would probably allow for a larger community of 
folks to get involved in this library, would potentially reduce the 
amount of custom/overlapping code and other similar benefits...).



--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-24 Thread Fox, Kevin M
One of the selling points of tripleo is to reuse as much as possible from the 
cloud, to make it easier to deploy. While monasca may be more complicated, if 
it ends up being a component everyone learns, then it's not as bad as needing 
to learn two different monitoring technologies. You could say the same thing 
about cobbler vs. Ironic: the whole Ironic stack is much more complicated, but 
for an openstack admin it's easier since a lot of existing knowledge applies. 
Just something to consider.

Thanks,
Kevin


From: Tomasz Napierala
Sent: Monday, November 24, 2014 6:42:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel] fuel master monitoring


> On 24 Nov 2014, at 11:09, Sergii Golovatiuk  wrote:
>
> Hi,
>
> monasca looks overcomplicated for the purposes we need. Also it requires 
> Kafka, which is a Java-based transport.
> I am proposing Sensu. Its architecture is tiny and elegant. Also it uses 
> rabbitmq as transport, so we won't need to introduce a new protocol.

Do we really need such complicated stuff? Sensu is a huge project, and its 
footprint is quite large. Monit can alert using scripts; can we use it instead 
of an API?

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-24 Thread Ken Giusti
Hi all,

As far as oslo.messaging is concerned, should it be possible for the
main application to safely os.fork() when there is already an active
connection to a messaging broker?

I ask because I'm hitting what appears to be fork-related issues with
the new AMQP 1.0 driver.  I think the same problems have been seen
with the older impl_qpid driver as well [0]

Both drivers utilize a background threading.Thread that handles all
async socket I/O and protocol timers.

In the particular case I'm trying to debug, rpc_workers is set to 4 in
neutron.conf.  As far as I can tell, this causes neutron.service to
os.fork() four workers, but does so after it has created a listener
(and therefore a connection to the broker).

This results in multiple processes all select()'ing the same set of
network sockets, and stuff breaks :(

Even without the background thread, wouldn't this use still result in
sockets being shared across the parent/child processes? Seems
dangerous.
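
(A stripped-down illustration of the hazard, independent of oslo.messaging:
after fork, both processes hold the same underlying socket, so whichever one
reads first steals bytes from the other. The address is just a stand-in for a
broker:)

    import os
    import socket

    # Parent opens the connection *before* forking, as described above.
    sock = socket.create_connection(('127.0.0.1', 5672))

    pid = os.fork()
    # Parent and child now share one file descriptor; concurrent
    # select()/recv() on it interleaves unpredictably.
    who = 'child' if pid == 0 else 'parent'
    print('%s shares fd %d' % (who, sock.fileno()))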

Thoughts?

[0] https://bugs.launchpad.net/oslo.messaging/+bug/1330199

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread Dan Smith
> 3. vish brought up one drawback of versioned objects: the difficulty in
> cherry-picking commits for stable branches - Is this a show stopper?

After some discussion with some of the interested parties, we're
planning to add a third .z element to the version numbers and use that
to handle backports in the same way that we do for RPC:

https://review.openstack.org/#/c/134623/
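
(An illustrative sketch of the idea, not the actual patch: compatibility checks
would consider only the x.y part, so a stable-branch backport can bump .z
without affecting wire compatibility, mirroring RPC version pinning:)

    def is_compatible(required, candidate):
        # Compare major.minor only; the third element marks a backport.
        req = [int(p) for p in required.split('.')[:2]]
        cand = [int(p) for p in candidate.split('.')[:2]]
        return cand[0] == req[0] and cand[1] >= req[1]

    assert is_compatible('1.6', '1.6.1')    # backport of 1.6 still matches
    assert not is_compatible('1.6', '1.5')  # older minor does not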

> Next steps:
> - Jay suggested making a second spec that would lay out what it would
> look like if we used google protocol buffers.
> - Dan: do you need some help in making this happen, do we need some
> volunteers?

I'm not planning to look into this, especially since we discussed it a
couple years ago when deciding to do what we're currently doing. If
someone else does, creates a thing that is demonstrably more useful than
what we have, and provides a migration plan, then cool. Otherwise, I'm
not really planning to stop what I'm doing at the moment.

> - Are there any other concrete things we can do to get this usable by
> other projects in a timely manner?

To be honest, since the summit, I've not done anything with the current
oslo spec, given the potential for doing something different that was
raised. I know that cinder folks (at least) are planning to start
copying code into their tree to get moving.

I think we need a decision to either (a) dump what we've got into the
proposed library (or incubator) and plan to move forward incrementally
or (b) each continue doing our own thing(s) in our own trees while we
wait for someone to create something based on GPB that does what we want.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday November 25th at 19:00 UTC

2014-11-24 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday November 25th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it, meeting log and minutes from the last
meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-18-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-18-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-18-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Launching VM in multi-node setup

2014-11-24 Thread Dugger, Donald D
Danny-

Availability zones and host aggregates are your friend.  For more detail check 
out:

http://docs.openstack.org/openstack-ops/content/scaling.html
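
For example (illustrative names only), an admin can pin an instance to a
specific compute host using the zone:host form of the availability zone
argument:

    nova boot --image cirros --flavor m1.tiny \
        --availability-zone nova:compute-1 vm-1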

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

From: Danny Choi (dannchoi) [mailto:dannc...@cisco.com]
Sent: Monday, November 24, 2014 11:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Launching VM in multi-node setup

Hi,

In a multi-node setup with multiple Compute nodes, is there a way to control 
where a VM will reside when launching a VM?
E.g. I would like to have VM-1 at Compute-1, VM-2 at Compute-2, etc...

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Updating template spec to include "IPV6 Impact"

2014-11-24 Thread Kyle Mestery
I proposed a template change today [1] which adds an "IPV6 Impact"
section into the template specification. This was done after some
discussion on the "Kilo Priorities" spec [2] around IPV6. Given how
important IPV6 is, it's my opinion we should ensure specifications for
Kilo and beyond think about how they integrate with IPV6.

I'm sending this email to the list because if this lands, it will
require in-flight specs to respin since they won't pass unit tests
anymore.

I'll also highlight this in the meeting today.

Thanks!
Kyle

[1] https://review.openstack.org/#/c/136823/
[2] https://review.openstack.org/#/c/136514/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Versioned objects cross project sessions next steps

2014-11-24 Thread Jay Pipes

On 11/13/2014 01:12 AM, Robert Collins wrote:

On 11 November 2014 13:30, Angus Salkeld  wrote:

Hi all

I just wanted to make sure we are all under the same understanding of the
outcomes and what the next steps for the versioned objects session are.

1. There is a lot of interest in other projects using oslo versioned objects
and it is worth progressing with this
(https://review.openstack.org/#/c/127532).
2. jpipes and jharlow suggested experimenting/investigating google protocol
buffers (https://developers.google.com/protocol-buffers/) instead of  the
custom serialization and version code. This *could* be an implementation
detail, but also could make the adoption by nova more complicated (as it has
a different mechanism in place).
3. vish brought up one drawback of versioned objects: the difficulty in
cherry-picking commits for stable branches - Is this a show stopper?

Next steps:
- Jay suggested making a second spec that would lay out what it would look
like if we used google protocol buffers.
- Dan: do you need some help in making this happen, do we need some
volunteers?
- Are there any other concrete things we can do to get this usable by other
projects in a timely manner?


+1 on protocol buffers, but perhaps
http://kentonv.github.io/capnproto/ could be considered: it's protocol
buffers v2, basically - from one of the originators of protocol
buffers. It has Python support available too, just like protocol
buffers.


Very nice indeed. Been reading through the Cap'n'proto documentation and 
it looks like a great improvement over GPB.


Definitely something to look into.

I sent an email privately to Angus and Dan this morning saying that I 
personally don't have the bandwidth to do a PoC that would use GPB as 
implementation of the serialization, schema representation, and 
versioning engine. I support the idea of using GPB, but I also recognize 
it's a large amount of work.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Discussion on cm_api for global requirement

2014-11-24 Thread Trevor McKay
Hello all,

  at our last Sahara IRC meeting we started discussing whether or not to add
a global requirement for cm_api.py https://review.openstack.org/#/c/130153/

  One issue (but not the only issue) is that cm_api is not packaged for Fedora,
Centos, or Ubuntu currently. The global requirements README points out that 
adding
requirements for a new dependency more or less forces the distros to package 
the 
dependency for the next OS release.

  Given that cm_api is needed for a plugin, but not for core Sahara 
functionality,
should we request that the global requirement be added, or should we seek to add
a global requirement only if/when cm_api is packaged?

  Alternatively, can we support the plugin with additional documentation (ie, 
how
to install cm_api on the Sahara node)?
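
For instance -- assuming the package keeps its current PyPI name, cm-api -- the
docs could simply tell operators to install it out-of-band with:

    pip install cm-api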

  Those present at the meeting agreed that it was probably better to defer a 
global
requirement until/unless cm_api is packaged to avoid a burden on the distros.

  Thoughts?

Best,

Trevor

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.html
Logs: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.log.html
https://github.com/openstack/requirements




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-24 Thread Jay Pipes

On 11/24/2014 01:02 PM, pcrews wrote:

On 11/24/2014 09:40 AM, Ben Nemec wrote:

On 11/24/2014 08:50 AM, Matthew Gilliard wrote:

1/ assertFalse() vs assertEqual(x, False) - these are semantically
different because of python's notion of truthiness, so I don't think
we ought to make this a rule.

2/ expected/actual - incorrect failure messages have cost me more time
than I should admit to. I don't see any reason not to try to improve
in this area, even if it's difficult to automate.


Personally I'd rather kill the expected, actual ordering and just have
first, second or something that doesn't imply which value is which.
Because it can't be automatically enforced, we'll _never_ fix all of the
expected, actual mistakes (and will continually introduce new ones), so
I'd prefer to eliminate the confusion by not requiring a specific
ordering.


++.  It should be a part of review to ensure that the test (including
error messages) makes sense.  Simply having a (seemingly costly to
implement and enforce) rule stating that something must adhere to a
pattern does not guarantee that.


So, as a proponent of the (expected, actual) parameter order thing, I'll 
just say that I actually agree with Patrick and Ben about NOT having a 
hacking rule for this. The reason is because of what Ben noted: there's 
really no way to programmatically check for this.



assertEqual(expected, actual, msg="nom nom nom cookie cookie yum")
matches the pattern, but the message still doesn't necessarily provide
much worth.

Focusing on making tests informative and clear about what is thought to
be broken on failure seems to be the better target (imo).


Agreed. And for me, I think pointing out that the default failure 
message for testtools.TestCase.assertEqual() uses the terms "reference" 
(expected) and "actual" is a reason why reviewers *should* ask patch 
submitters to use (expected, actual) ordering. I just don't think it's 
something that can be hacking-rule-tested for...
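
To make that concrete, here is a hypothetical test showing how a swapped pair 
reads on failure (assuming testtools' default message, which labels the first 
argument as the reference value):

    import testtools

    def answer():
        return 41  # stand-in for buggy code under test

    class OrderingExample(testtools.TestCase):
        def test_right_order(self):
            # On failure this reads the intended way: 42 expected, 41 seen.
            self.assertEqual(42, answer())

        def test_swapped_order(self):
            # Same failure, but 41 is now presented as the reference
            # (expected) value, pointing the reader at the wrong side.
            self.assertEqual(answer(), 42)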


Best,
-jay


Alternatively I suppose we could require kwargs for expected and actual
in assertEqual.  That would at least make it more obvious when someone
has gotten it backward, but again that's a ton of code churn for minimal
gain IMHO.



3/ warn{ing} -
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L322


On the overarching point: There is no way to get started with
OpenStack, other than starting small.  My first ever patch (a tidy-up)
was rejected for being trivial, and that was confusing and
disheartening. Nova has a lot on its plate, sure, and plenty of
pending code reviews.  But there is also a lot of inconsistency and
unloved code which *is* worth fixing, because a tidy codebase is a joy
to work with, *and* these changes are ideal to bring new reviewers and
developers into the project.

Linus' post on this from the LKML is almost a decade old (!) but
worth reading.
https://lkml.org/lkml/2004/12/20/255

   MG

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Launching VM in multi-node setup

2014-11-24 Thread Danny Choi (dannchoi)
Hi,

In a multi-node setup with multiple Compute nodes, is there a way to control 
where a VM will reside when launching a VM?
E.g. I would like to have VM-1 at Compute-1, VM-2 at Compute-2, etc…

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] deleting the pylint test job

2014-11-24 Thread Joe Gordon
On Mon, Nov 24, 2014 at 9:52 AM, Sean Dague  wrote:

> The pylint test job has been broken for weeks, no one seemed to care.
> While waiting for other tests to return today I looked into it and
> figured out the fix.
>
> However, because of nova objects pylint is progressively less and less
> useful. So the fact that no one else looked at it means that people
> didn't seem to care that it was provably broken. I think it's better
> that we just delete the jobs and save a node on every nova patch instead.
>

+1


>
> Project Config Proposed here - https://review.openstack.org/#/c/136846/
>
> If you -1 that you own fixing it, and making nova objects patches
> sensible in pylint.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-24 Thread pcrews

On 11/24/2014 09:40 AM, Ben Nemec wrote:

On 11/24/2014 08:50 AM, Matthew Gilliard wrote:

1/ assertFalse() vs assertEqual(x, False) - these are semantically
different because of python's notion of truthiness, so I don't think
we ought to make this a rule.

2/ expected/actual - incorrect failure messages have cost me more time
than I should admit to. I don't see any reason not to try to improve
in this area, even if it's difficult to automate.


Personally I'd rather kill the expected, actual ordering and just have
first, second or something that doesn't imply which value is which.
Because it can't be automatically enforced, we'll _never_ fix all of the
expected, actual mistakes (and will continually introduce new ones), so
I'd prefer to eliminate the confusion by not requiring a specific ordering.


++.  It should be a part of review to ensure that the test (including 
error messages) makes sense.  Simply having a (seemingly costly to 
implement and enforce) rule stating that something must adhere to a 
pattern does not guarantee that.


assertEqual(expected, actual, msg="nom nom nom cookie cookie yum") 
matches the pattern, but the message still doesn't necessarily provide 
much worth.


Focusing on making tests informative and clear about what is thought to 
be broken on failure seems to be the better target (imo).




Alternatively I suppose we could require kwargs for expected and actual
in assertEqual.  That would at least make it more obvious when someone
has gotten it backward, but again that's a ton of code churn for minimal
gain IMHO.



3/ warn{ing} - 
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L322

On the overarching point: There is no way to get started with
OpenStack, other than starting small.  My first ever patch (a tidy-up)
was rejected for being trivial, and that was confusing and
disheartening. Nova has a lot on its plate, sure, and plenty of
pending code reviews.  But there is also a lot of inconsistency and
unloved code which *is* worth fixing, because a tidy codebase is a joy
to work with, *and* these changes are ideal to bring new reviewers and
developers into the project.

Linus' post on this from the LKML is almost a decade old (!) but worth reading.
https://lkml.org/lkml/2004/12/20/255

   MG

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [nova] deleting the pylint test job

2014-11-24 Thread Dan Smith
> However, because of nova objects pylint is progressively less and less
> useful. So the fact that no one else looked at it means that people
> didn't seem to care that it was provably broken. I think it's better
> that we just delete the jobs and save a node on every nova patch instead.

Agreed.

> Project Config Proposed here - https://review.openstack.org/#/c/136846/

+1'd

> If you -1 that you own fixing it, and making nova objects patches
> sensible in pylint.

Ooh, wait, maybe I want to see someone do that second part :)

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Mike Bayer

> On Nov 24, 2014, at 12:40 PM, Doug Hellmann  wrote:
> 
> 
> This is a good point. I’m not sure we can say “we’ll only use 
> explicit/implicit async in certain cases" because most of our apps actually 
> mix the cases. We have WSGI apps that send RPC messages and we have other 
> apps that receive RPC messages and operate on the database. Can we mix 
> explicit and implicit operating models, or are we going to have to pick one 
> way? If we have to pick one, the implicit model we’re currently using seems 
> more compatible with all of the various libraries and services we depend on, 
> but maybe I’m wrong?

IMHO, in the ideal case, a single method shouldn’t be mixing calls to a set of 
database objects as well as calls to RPC APIs at the same time, there should be 
some kind of method boundary to cross.   There’s a lot of ways to achieve that.

What is really needed is some way that code can switch between explicit yields 
and implicit IO on a per-function basis.   Like a decorator for one or the 
other.

The approach that Twisted takes of just using thread pools for those IO-bound 
elements that aren’t compatible with explicit yields is one way to do this. 
This might be the best way to go, if there are in fact issues with mixing in 
implicit async systems like eventlet.  I can imagine, vaguely, that the 
eventlet approach of monkey patching might get in the way of things in this 
more complicated setup.
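
(A sketch of that hybrid in asyncio terms, with hypothetical names, just to make 
the shape concrete: the messaging-facing coroutine yields explicitly, while the 
blocking database call is shipped to a thread pool via run_in_executor instead 
of being monkeypatched:)

    import asyncio

    def load_instance(uuid):
        # Ordinary blocking DBAPI/ORM code, unchanged.
        return {'uuid': uuid, 'state': 'ACTIVE'}

    @asyncio.coroutine
    def handle_rpc(loop, uuid):
        # Explicit yield at the method boundary; the DB work runs on a
        # worker thread and the event loop is never blocked.
        instance = yield from loop.run_in_executor(None, load_instance, uuid)
        return instance

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(handle_rpc(loop, 'some-uuid')))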

Part of what makes this confusing for me is that there’s a lack of clarity over 
what benefits we’re trying to get from the async work.  If the idea is, the GIL 
is evil so we need to ban the use of all threads, and therefore must use defer 
for all IO, then that includes database IO which means we theoretically benefit 
from eventlet monkeypatching  - in the absence of truly async DBAPIs, this is 
the only way to have deferrable database IO.

If the idea instead is, the code we write that deals with messaging would be 
easier to produce, organize, and understand given an asyncio style approach, 
but otherwise we aren’t terribly concerned what highly sequential code like 
database code has to do, then a thread pool may be fine.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removing XML testing completely from Tempest

2014-11-24 Thread Davanum Srinivas
+1 to cleanup.

-- dims

On Mon, Nov 24, 2014 at 12:50 PM, Monty Taylor  wrote:
> On 11/24/2014 12:36 PM, Lance Bragstad wrote:
>> We are in the process of removing XML support from Keystone [1] and have
>> provided
>> configuration options to Tempest for testing XML in older releases [2].
>> However, the
>> identity client is still tightly coupled to XML test cases. We can either
>> fix the 309 test cases
>> that use the XML identity client or let those cases be removed from
>> Tempest. I'd like to let this
>> air out a bit before I start fixing the identity client XML issues, in case
>> XML testing is completely
>> removed from Tempest.
>
> I fully support and am excited about removing the xml api support.
>
>> [1] https://review.openstack.org/#/c/125738/
>> [2] https://review.openstack.org/#/c/127641/
>> https://review.openstack.org/#/c/130874/
>> https://review.openstack.org/#/c/126564/
>>
>> On Mon, Nov 24, 2014 at 8:03 AM, Jay Pipes  wrote:
>>
>>> On 11/24/2014 08:56 AM, Sean Dague wrote:
>>>
 Having XML payloads was never a universal part of OpenStack services.
 During the Icehouse release the TC declared that being an OpenStack
 service requires having a JSON REST API. Projects could do what they
 wanted beyond that. Lots of them deprecated and have been removing the
 XML cruft since then.

 Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
 API for a long time.

 Given that current branchless Tempest only supports as far back as
 Icehouse anyway, after these changes were made, I'd like to propose that
 all the XML code in Tempest should be removed. If a project wants to
 support something else beyond a JSON API that's on that project to test
 and document on their own.

 We've definitively blocked adding new XML tests in Tempest anyway, but
 getting rid of the XML debt in the project will simplify it quite a bit,
 make it easier for contributors to join in, and seems consistent with
 the direction of OpenStack as a whole.

>>>
>>> But Sean, without XML support, we will lose all of our enterprise
>>> customers!
>>>
>>> -jay
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-24 Thread Monty Taylor
On 11/24/2014 10:14 AM, Drew Fisher wrote:
> 
> 
> On 11/17/14 10:27 PM, Duncan Thomas wrote:
> Is the new driver drop-in compatible with the old one? If not, can
>> existing systems be upgraded to the new driver via some manual steps, or
>> is it basically a completely new driver with similar functionality?

Possibly none of my business - but if the current driver is actually just
flat broken, then upgrading from it to the new Solaris ZFS driver seems
unlikely to be possible, simply because the from case is broken.

> The driver in san/solaris.py focuses entirely on iSCSI.  I don't think
> existing systems can be upgraded manually but I've never really tried.
> We started with a clean slate for Solaris 11 and Cinder and added local
> ZFS support for single-system and demo rigs along with fibre channel
> and iSCSI drivers.
> 
> The driver is publicly viewable here:
> 
> https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/solaris/zfs.py
> 
> Please note that this driver is based on Havana.  We know it's old and
> we're working to get it updated to Juno right now.  I can try to work
> with my team to get a blueprint filed and start working on getting it
> integrated into trunk.
> 
> -Drew
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] deleting the pylint test job

2014-11-24 Thread Sean Dague
The pylint test job has been broken for weeks, and no one seemed to care.
While waiting for other tests to return today I looked into it and
figured out the fix.

However, because of nova objects, pylint is progressively less and less
useful. So the fact that no one else looked at it means that people
didn't seem to care that it was provably broken. I think it's better
that we just delete the jobs and save a node on every nova patch instead.

Project Config Proposed here - https://review.openstack.org/#/c/136846/

If you -1 that, you own fixing it, and making nova objects patches
sensible in pylint.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removing XML testing completely from Tempest

2014-11-24 Thread Monty Taylor
On 11/24/2014 12:36 PM, Lance Bragstad wrote:
> We are in the process of removing XML support from Keystone [1] and have
> provided
> configuration options to Tempest for testing XML in older releases [2].
> However, the
> identity client is still tightly coupled to XML test cases. We can either
> fix the 309 test cases
> that use the XML identity client or let those cases be removed from
> Tempest. I'd like to let this
> air out a bit before I start fixing the identity client XML issues, in case
> XML testing is completely
> removed from Tempest.

I fully support and am excited about removing the xml api support.

> [1] https://review.openstack.org/#/c/125738/
> [2] https://review.openstack.org/#/c/127641/
> https://review.openstack.org/#/c/130874/
> https://review.openstack.org/#/c/126564/
> 
> On Mon, Nov 24, 2014 at 8:03 AM, Jay Pipes  wrote:
> 
>> On 11/24/2014 08:56 AM, Sean Dague wrote:
>>
>>> Having XML payloads was never a universal part of OpenStack services.
>>> During the Icehouse release the TC declared that being an OpenStack
>>> service requires having a JSON REST API. Projects could do what they
>>> wanted beyond that. Lots of them deprecated and have been removing the
>>> XML cruft since then.
>>>
>>> Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
>>> API for a long time.
>>>
>>> Given that current branchless Tempest only supports as far back as
>>> Icehouse anyway, after these changes were made, I'd like to propose that
>>> all the XML code in Tempest should be removed. If a project wants to
>>> support something else beyond a JSON API that's on that project to test
>>> and document on their own.
>>>
>>> We've definitively blocked adding new XML tests in Tempest anyway, but
>>> getting rid of the XML debt in the project will simplify it quite a bit,
>>> make it easier for contributors to join in, and seems consistent with
>>> the direction of OpenStack as a whole.
>>>
>>
>> But Sean, without XML support, we will lose all of our enterprise
>> customers!
>>
>> -jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] proposed library releases for next week

2014-11-24 Thread Doug Hellmann
After the Oslo meeting today, most of the folks preparing releases met 
separately and decided to wait to create any new releases until the stable 
branches are ready. We need devstack to install Oslo libs from packages 
(merged) and to cap the Oslo requirements. Sean is going to raise the latter 
issue as a policy discussion in the project meeting tomorrow.

Doug

On Nov 21, 2014, at 11:56 AM, Doug Hellmann  wrote:

> 
> On Nov 21, 2014, at 11:25 AM, Sean Dague  wrote:
> 
>> On 11/21/2014 11:19 AM, Doug Hellmann wrote:
>>> We have a backlog of changes in many of the Oslo libraries, so I would like 
>>> to cut releases early next week. Please look over the list below and speak 
>>> up if there are known issues that would prevent us from releasing these 
>>> libs on Monday or Tuesday of next week. Patches still in the review queue 
>>> can wait for the next batch of releases, so let’s focus on what’s in 
>>> already.
>> 
>> Given that the short change logs are pretty hard to parse, would it be
>> possible to also provide the diffstat of each release, as well as the
>> actual requirements diff (which seems to be a non negligible amount of
>> the changes, and the one with terrible change strings).
>> 
>> I think that with the last oslo.db release the changelog didn't really
>> express clearly enough what was changing.
> 
> Yeah, I’ve been looking for ways to improve the release notes. In this case I 
> expected the library maintainers to know what the changes meant, but more 
> detail is better. The report comes from a script in 
> openstack/oslo-incubator/tools, which I’ve been updating this morning 
> (https://review.openstack.org/#/c/136401/). If anyone has suggestions for 
> other info to add, please let me know.
> 
> Doug
> 
> 
> 
> openstack/cliff  1.8.0..HEAD
> 
> f6e9bbd print the real error cmd argument
> a5fd24d Updated from global requirements
> 
>  diffstat (except test files):
> 
> cliff/commandmanager.py| 3 ++-
> requirements.txt   | 2 +-
> 3 files changed, 5 insertions(+), 2 deletions(-)
> 
>  Requirements updates:
> 
> diff --git a/requirements.txt b/requirements.txt
> index 4d3ccc9..bf06e82 100644
> --- a/requirements.txt
> +++ b/requirements.txt
> @@ -10 +10 @@ six>=1.7.0
> -stevedore>=0.14
> +stevedore>=1.1.0  # Apache-2.0
> 
> openstack/oslo.concurrency  0.2.0..HEAD
> 
> 3bda65c Allow for providing a customized semaphore container
> 656f908 Move locale files to proper place
> faa30f8 Flesh out the README
> bca4a0d Move out of the oslo namespace package
> 58de317 Improve testing in py3 environment
> fa52a63 Only modify autoindex.rst if it exists
> 63e618b Imported Translations from Transifex
> d5ea62c lockutils-wrapper cleanup
> 78ba143 Don't use variables that aren't initialized
> 
>  diffstat (except test files):
> 
> .gitignore |   1 +
> README.rst |   4 +-
> doc/source/conf.py |  23 +-
> .../locale/en_GB/LC_MESSAGES/oslo.concurrency.po   |  16 +-
> oslo.concurrency/locale/oslo.concurrency.pot   |  16 +-
> oslo/concurrency/__init__.py   |  29 ++
> oslo/concurrency/_i18n.py  |  32 --
> oslo/concurrency/fixture/__init__.py   |  13 +
> oslo/concurrency/fixture/lockutils.py  |  51 --
> oslo/concurrency/lockutils.py  | 376 --
> oslo/concurrency/openstack/__init__.py |   0
> oslo/concurrency/openstack/common/__init__.py  |   0
> oslo/concurrency/openstack/common/fileutils.py | 146 --
> oslo/concurrency/opts.py   |  45 --
> oslo/concurrency/processutils.py   | 340 
> oslo_concurrency/__init__.py   |   0
> oslo_concurrency/_i18n.py  |  32 ++
> oslo_concurrency/fixture/__init__.py   |   0
> oslo_concurrency/fixture/lockutils.py  |  51 ++
> oslo_concurrency/lockutils.py  | 423 +++
> oslo_concurrency/openstack/__init__.py |   0
> oslo_concurrency/openstack/common/__init__.py  |   0
> oslo_concurrency/openstack/common/fileutils.py | 146 ++
> oslo_concurrency/opts.py   |  45 ++
> oslo_concurrency/processutils.py   | 340 
> setup.cfg  |   9 +-
> tox.ini|   8 +-
> 40 files changed, 3385 insertions(+), 2135 deletions(-)
> 
>  Requirements updates:
> 
> openstack/oslo.config  1.4.0..HEAD
> 
> 7ab3326 Updated from global requirements
> c81dc30 Updated from global requirements
> 4a15ea3 Fix class constant indentation
> 5d5faeb Updated from global requirements
> d6b0ee6 Activate pep8 check that _ is imported
> 73635ef Updated from global requirements
> cf94a51 Updated from global requirements
> e906e74 Updated from global requirements
> 0

Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-24 Thread Ben Nemec
On 11/24/2014 08:50 AM, Matthew Gilliard wrote:
> 1/ assertFalse() vs assertEqual(x, False) - these are semantically
> different because of python's notion of truthiness, so I don't think
> we ought to make this a rule.
> 
> 2/ expected/actual - incorrect failure messages have cost me more time
> than I should admit to. I don't see any reason not to try to improve
> in this area, even if it's difficult to automate.

Personally I'd rather kill the expected, actual ordering and just have
first, second or something that doesn't imply which value is which.
Because it can't be automatically enforced, we'll _never_ fix all of the
expected, actual mistakes (and will continually introduce new ones), so
I'd prefer to eliminate the confusion by not requiring a specific ordering.

Alternatively I suppose we could require kwargs for expected and actual
in assertEqual.  That would at least make it more obvious when someone
has gotten it backward, but again that's a ton of code churn for minimal
gain IMHO.
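
For what it's worth, testtools (which nova's tests use) names the
parameters expected and observed, so the keyword form would look like the
sketch below; treat the exact kwarg names as an assumption to verify:

    import testtools

    class DemoTest(testtools.TestCase):
        def test_status(self):
            resp_status = 404
            # keywords make it obvious which value is which
            self.assertEqual(expected=404, observed=resp_status)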

> 
> 3/ warn{ing} - 
> https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L322
> 
> On the overarching point: There is no way to get started with
> OpenStack, other than starting small.  My first ever patch (a tidy-up)
> was rejected for being trivial, and that was confusing and
> disheartening. Nova has a lot on its plate, sure, and plenty of
> pending code reviews.  But there is also a lot of inconsistency and
> unloved code which *is* worth fixing, because a tidy codebase is a joy
> to work with, *and* these changes are ideal to bring new reviewers and
> developers into the project.
> 
> Linus' post on this from the LKML is almost a decade old (!) but worth 
> reading.
> https://lkml.org/lkml/2004/12/20/255
> 
>   MG
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Doug Hellmann

On Nov 24, 2014, at 11:30 AM, Jay Pipes  wrote:

> On 11/24/2014 10:43 AM, Mike Bayer wrote:
>>> On Nov 24, 2014, at 9:23 AM, Adam Young  wrote:
>>> For pieces such as the Nova compute that talk almost exclusively on
>>> the Queue, we should work to remove Monkey patching and use a clear
>>> programming model.  If we can do that within the context of
>>> Eventlet, great.  If we need to replace Eventlet with a different
>>> model, it will be painful, but should be done.  What is most
>>> important is that we avoid doing hacks like we've had to do with
>>> calls to Memcached and monkeypatching threading.
>> 
>> Nova compute does a lot of relational database access and I’ve yet to
>> see an explicit-async-compatible DBAPI other than psycopg2’s and
>> Twisted adbapi.   Twisted adbapi appears just to throw regular
>> DBAPIs into a thread pool in any case (see
>> http://twistedmatrix.com/trac/browser/trunk/twisted/enterprise/adbapi.py),
>> so given that awkwardness and lack of real async, if eventlet is
>> dropped it would be best to use a thread pool for database-related
>> methods directly.
> 
> Hi Mike,
> 
> Note that nova-compute does not do any direct database queries. All database 
> reads and writes actually occur over RPC APIs, via the conductor, either 
> directly over the conductor RPC API or indirectly via nova.objects.
> 
> For the nova-api and nova-conductor services, however, yes, there is 
> direct-to-database communication that occurs, though the goal is to have only 
> the nova-conductor service eventually be the only service that directly 
> communicates with the database.

This is a good point. I’m not sure we can say “we’ll only use explicit/implicit 
async in certain cases" because most of our apps actually mix the cases. We 
have WSGI apps that send RPC messages and we have other apps that receive RPC 
messages and operate on the database. Can we mix explicit and implicit 
operating models, or are we going to have to pick one way? If we have to pick 
one, the implicit model we’re currently using seems more compatible with all of 
the various libraries and services we depend on, but maybe I’m wrong?

Doug

> 
> Best,
> -jay
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removing XML testing completely from Tempest

2014-11-24 Thread Lance Bragstad
We are in the process of removing XML support from Keystone [1] and have
provided
configuration options to Tempest for testing XML in older releases [2].
However, the
identity client is still tightly coupled to XML test cases. We can either
fix the 309 test cases
that use the XML identity client or let those cases be removed from
Tempest. I'd like to let this
air out a bit before I start fixing the identity client XML issues, in case
XML testing is completely
removed from Tempest.


[1] https://review.openstack.org/#/c/125738/
[2] https://review.openstack.org/#/c/127641/
https://review.openstack.org/#/c/130874/
https://review.openstack.org/#/c/126564/

On Mon, Nov 24, 2014 at 8:03 AM, Jay Pipes  wrote:

> On 11/24/2014 08:56 AM, Sean Dague wrote:
>
>> Having XML payloads was never a universal part of OpenStack services.
>> During the Icehouse release the TC declared that being an OpenStack
>> service requires having a JSON REST API. Projects could do what they
>> wanted beyond that. Lots of them deprecated and have been removing the
>> XML cruft since then.
>>
>> Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
>> API for a long time.
>>
>> Given that current branchless Tempest only supports as far back as
>> Icehouse anyway, after these changes were made, I'd like to propose that
>> all the XML code in Tempest should be removed. If a project wants to
>> support something else beyond a JSON API that's on that project to test
>> and document on their own.
>>
>> We've definitively blocked adding new XML tests in Tempest anyway, but
>> getting rid of the XML debt in the project will simplify it quite a bit,
>> make it easier for contributors to join in, and seems consistent with
>> the direction of OpenStack as a whole.
>>
>
> But Sean, without XML support, we will lose all of our enterprise
> customers!
>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Let's use additional prefixes in threads

2014-11-24 Thread Jay Pipes

On 11/24/2014 12:04 PM, Vladimir Kuklin wrote:

[Fuel][Library] for compatibility with other projects. Let's negotiate
the list of prefixes and populate them on our wiki so that everyone can
configure his filters.


++

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Alembic 0.7.0 - hitting Pypi potentially Sunday night

2014-11-24 Thread Chmouel Boudjnah
Hello,

On Fri, Nov 21, 2014 at 10:10 PM, Mike Bayer  wrote:

> 1. read about the new features, particularly the branch support, and
> please let me know of any red flags/concerns you might have over the coming
> implementation, at
> http://alembic.readthedocs.org/en/latest/tutorial.html#running-batch-migrations-for-sqlite-and-other-databases
>

Great news about the sqlite support, I think that link to the documentation
doesn't work, though.

Thanks,
Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Let's use additional prefixes in threads

2014-11-24 Thread Vladimir Kuklin
[Fuel][Library] for compatibility with other projects. Let's negotiate the
list of prefixes and populate them on our wiki so that everyone can
configure his filters.

On Mon, Nov 24, 2014 at 7:26 PM, Jay Pipes  wrote:

> On 11/24/2014 11:01 AM, Vladimir Kuklin wrote:
>
>> Fuelers
>>
>> I am writing to you to suggest adding prefixes for Fuel subprojects/ as
>> it becomes more and more difficult to read all the emails in mailing
>> lists. Adding these prefixes should significantly improve ability of our
>> engineers to filter out emails they are not interested in, e.g. UI
>> implementation details are almost all the time not so severe for
>> deployment engineers. So I am suggesting to use following prefixes:
>>
>> Library
>> UI
>> Nailgun
>> Orchestration/Astute
>> QA
>> DevOps
>> Misc
>> Release
>>
>
> So, you would have [fuel][library] as the subject, or you would have
> [fuel-library] as the subject?
>
> Best,
> -jay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Jay Pipes

On 11/24/2014 10:43 AM, Mike Bayer wrote:

On Nov 24, 2014, at 9:23 AM, Adam Young  wrote:
For pieces such as the Nova compute that talk almost exclusively on
the Queue, we should work to remove Monkey patching and use a clear
programming model.  If we can do that within the context of
Eventlet, great.  If we need to replace Eventlet with a different
model, it will be painful, but should be done.  What is most
important is that we avoid doing hacks like we've had to do with
calls to Memcached and monkeypatching threading.


Nova compute does a lot of relational database access and I’ve yet to
see an explicit-async-compatible DBAPI other than psycopg2’s and
Twisted adbapi.   Twisted adbapi appears just to throw regular
DBAPIs into a thread pool in any case (see
http://twistedmatrix.com/trac/browser/trunk/twisted/enterprise/adbapi.py),
so given that awkwardness and lack of real async, if eventlet is
dropped it would be best to use a thread pool for database-related
methods directly.


Hi Mike,

Note that nova-compute does not do any direct database queries. All 
database reads and writes actually occur over RPC APIs, via the 
conductor, either directly over the conductor RPC API or indirectly via 
nova.objects.


For the nova-api and nova-conductor services, however, yes, there is 
direct-to-database communication that occurs, though the goal is to have 
only the nova-conductor service eventually be the only service that 
directly communicates with the database.


Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Let's use additional prefixes in threads

2014-11-24 Thread Jay Pipes

On 11/24/2014 11:01 AM, Vladimir Kuklin wrote:

Fuelers

I am writing to you to suggest adding prefixes for Fuel subprojects/ as
it becomes more and more difficult to read all the emails in mailing
lists. Adding these prefixes should significantly improve ability of our
engineers to filter out emails they are not interested in, e.g. UI
implementation details are almost all the time not so severe for
deployment engineers. So I am suggesting to use following prefixes:

Library
UI
Nailgun
Orchestration/Astute
QA
DevOps
Misc
Release


So, you would have [fuel][library] as the subject, or you would have 
[fuel-library] as the subject?


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log - 11/24/2014

2014-11-24 Thread Nikolay Makhotkin
Thanks for joining us today,

Here are the links to the meeting minutes and full log:

   - Minutes -
   http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-11-24-16.00.html
   - Full log -
   http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-11-24-16.00.log.html


The next meeting will be next Monday Dec 1 at 16.00 UTC.

-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] proposal: alternating weekly meeting time [doodle poll created]

2014-11-24 Thread Yves-Gwenaël Bourhis


Le 24/11/2014 04:20, Richard Jones a écrit :
> Thanks everyone, I've closed the poll. I'm sorry to say that there's no
> combination of two timeslots which allows everyone to attend a meeting.
> Of the 25 respondents, the best we can do is cater for 24 of you.
> 
> Optimising for the maximum number of attendees, the potential meeting
> times are 2000 UTC Tuesday and 1000 UTC on one of Monday, Wednesday or
> Friday. In all three cases the only person who has indicated they cannot
> attend is Lifeless.
> 
> Unfortunately, David has indicated that he can't be present at the
> Tuesday 2000 UTC slot. Optimising for him as a required attendee for
> both meetings means we lose an additional attendee, and gives us the
> Wednesday 2000 UTC slot and a few options:
> 
> - Monday, Wednesday and Thursday at 1200 UTC (Lifeless and ygbo miss)

1200 UTC is perfect for me.
The doodle was proposing 1200 UTC to 1400 UTC, and within that two-hour
window I cannot be sure to be there, but if it's 1200 "on the spot"
I can make it for sure :-)
Since I couldn't specify this on the doodle I didn't select this slot. A
one-hour granularity would have allowed more precision, but I understand
your concern that the doodle would have been too long to scroll.

-- 
Yves-Gwenaël Bourhis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Let's use additional prefixes in threads

2014-11-24 Thread Vladimir Kuklin
Fuelers

I am writing to you to suggest adding prefixes for Fuel subprojects/ as it
becomes more and more difficult to read all the emails in mailing lists.
Adding these prefixes should significantly improve ability of our engineers
to filter out emails they are not interested in, e.g. UI implementation
details are almost all the time not so severe for deployment engineers. So
I am suggesting to use following prefixes:

Library
UI
Nailgun
Orchestration/Astute
QA
DevOps
Misc
Release


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 11/24/2014

2014-11-24 Thread Nikolay Makhotkin
This is a reminder about our team meeting scheduled for today 16.00 UTC.

Agenda:

   - Review action items
   - Paris Summit results
   - Current status (progress, issues, roadblocks, further plans)
   - Release 0.2 progress
   - Open discussion


-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-24 Thread Przemyslaw Kaminski

And it all started out with simple free disk space monitoring :)

I created a document

https://etherpad.openstack.org/p/fuel-master-monitoring

Let's write what exactly we want to monitor and what actions to take. 
Then it would be easier to decide which system we want.


P.

On 11/24/2014 04:32 PM, Rob Basham wrote:

Rob Basham

Cloud Systems Software Architecture
971-344-1999


Tomasz Napierala  wrote on 11/24/2014 
06:42:39 AM:


> From: Tomasz Napierala 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 11/24/2014 06:46 AM
> Subject: Re: [openstack-dev] [Fuel] fuel master monitoring
>
>
> > On 24 Nov 2014, at 11:09, Sergii Golovatiuk
>  wrote:
> >
> > Hi,
> >
> > monasca looks overcomplicated for the purposes we need. Also it
> requires Kafka which is Java based transport protocol.
What scale are you proposing to support?

> > I am proposing Sensu. It's architecture is tiny and elegant. Also
> it uses rabbitmq as transport so we won't need to introduce new 
protocol.
We use Sensu on our smaller clouds and really like it there, but it 
doesn't scale sufficiently for our bigger clouds.

>
> Do we really need such complicated stuff? Sensu is huge project, and
> it's footprint is quite large. Monit can alert using scripts, can we
> use it instead of API?
I assume you weren't talking about Sensu here and rather about 
Monasca.  I like Monasca for monitoring at large scale.  Kafka and 
Apache Storm are proven technologies at scale.  Do you really think 
you can just pick one monitoring protocol that fits the needs of 
everybody?  Frankly, I'm skeptical of that.


>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Mike Bayer

> On Nov 24, 2014, at 9:23 AM, Adam Young  wrote:
> 
> 
> 
> For pieces such as the Nova compute that talk almost exclusively on the 
> Queue, we should work to remove Monkey patching and use a clear programming 
> model.  If we can do that within the context of Eventlet, great.  If we need 
> to replace Eventlet with a different model, it will be painful, but should be 
> done.  What is most important is that we avoid doing hacks like we've had to 
> do with calls to Memcached and monkeypatching threading.

Nova compute does a lot of relational database access and I’ve yet to see an 
explicit-async-compatible DBAPI other than psycopg2’s and Twisted adbapi.   
Twisted adbapi appears just to throw regular DBAPIs into a thread pool in any 
case (see 
http://twistedmatrix.com/trac/browser/trunk/twisted/enterprise/adbapi.py), so 
given that awkwardness and lack of real async, if eventlet is dropped it would 
be best to use a thread pool for database-related methods directly.
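
For the curious, the pattern adbapi implements is roughly the following
simplified sketch (not the actual adbapi code):

    from twisted.internet import threads

    def run_query(dbapi_conn, sql):
        # Run one blocking DBAPI interaction on the reactor's thread
        # pool; the caller immediately gets a Deferred for the rows.
        def _work():
            cursor = dbapi_conn.cursor()
            try:
                cursor.execute(sql)
                return cursor.fetchall()
            finally:
                cursor.close()
        return threads.deferToThread(_work)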
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Mike Bayer

> On Nov 23, 2014, at 9:24 PM, Donald Stufft  wrote:
> 
> 
> There’s a long history of implicit context switches causing buggy software 
> that breaks. As far as I can tell the only downsides to explicit context 
> switches that don’t stem from an inferior interpreter seem to be “some 
> particular API in my head isn’t as easy with it” and “I have to type more 
> letters”. The first one I’d just say that constraints make the system and 
> that there are lots of APIs which aren’t really possible or easy in Python 
> because of one design decision or another. For the second one I’d say that 
> Python isn’t a language which attempts to make code shorter, just easier to 
> understand what is going to happen when.
> 
> Throwing out hyperboles like “mathematically proven” isn’t a particular 
> valuable statement. It is *easier* to reason about what’s going to happen 
> with explicit context switches. Maybe you’re a better programmer than I am 
> and you’re able to keep in your head every place that might do an implicit 
> context switch in an implicit setup and you can look at a function and go “ah 
> yup, things are going to switch here and here”. I certainly can’t. I like my 
> software to maximize the ability to locally reason about a particular chunk 
> of code.

But this is a false choice.  There is a third way.  It is, use explicit async 
for those parts of an application where it is appropriate; when dealing with 
message queues and things where jobs and messages are sent off for any amount 
of time to come back at some indeterminate point later, all of us would 
absolutely benefit from an explicit model w/ coroutines.  If I was trying to 
write code that had to send off messages and then had to wait, but still has 
many more messages to send off, so that without async I’d need to be writing 
thread pools and all that, absolutely, async is a great programming model.

But when the code digs into functions that are oriented around business logic, 
functions that within themselves are doing nothing concurrency-wise against 
anything else within them, and merely need to run step 1, 2, and 3, 
that don’t deal with messaging and instead talk to a single relational database 
connection, where explicit async would mean that a single business logic method 
would need to be exploded with literally many dozens of yields in it (with a 
real async DBAPI; every connection, every execute, every cursor close, every 
transaction start, every transaction end, etc.), it is completely cumbersome 
and unnecessary.  These methods should run in an implicit async context. 
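
A deliberately trivial sketch of the contrast (the table, connection
object, and async DBAPI are all hypothetical):

    # implicit model: plain sequential business logic
    def consume_quota(conn, tenant):
        conn.execute("UPDATE quotas SET used = used + 1 "
                     "WHERE tenant = ?", (tenant,))
        conn.commit()

    # explicit model: every IO point, however trivial, becomes a
    # visible yield once the DBAPI is truly async
    def consume_quota_async(conn, tenant):
        yield from conn.execute("UPDATE quotas SET used = used + 1 "
                                "WHERE tenant = ?", (tenant,))
        yield from conn.commit()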

To that degree, the resistance that explicit async advocates have to the 
concept that both approaches should be switchable, and that one may be more 
appropriate than the other in different cases, remains confusing to me.   We 
from the threading camp are asked to accept that *all* of our programming 
models must change completely, but our suggestion that both models be 
integrated is met with, “well that’s wrong, because in my experience (doing 
this specific kind of programming), your model *never* works”.   




> 
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-24 Thread Rob Basham
Rob Basham

Cloud Systems Software Architecture
971-344-1999


Tomasz Napierala  wrote on 11/24/2014 06:42:39 
AM:

> From: Tomasz Napierala 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/24/2014 06:46 AM
> Subject: Re: [openstack-dev] [Fuel] fuel master monitoring
> 
> 
> > On 24 Nov 2014, at 11:09, Sergii Golovatiuk 
>  wrote:
> > 
> > Hi,
> > 
> > monasca looks overcomplicated for the purposes we need. Also it 
> requires Kafka which is Java based transport protocol.
What scale are you proposing to support?

> > I am proposing Sensu. It's architecture is tiny and elegant. Also 
> it uses rabbitmq as transport so we won't need to introduce new 
protocol.
We use Sensu on our smaller clouds and really like it there, but it 
doesn't scale sufficiently for our bigger clouds.
> 
> Do we really need such complicated stuff? Sensu is huge project, and
> it's footprint is quite large. Monit can alert using scripts, can we
> use it instead of API?
I assume you weren't talking about Sensu here and rather about Monasca.  I 
like Monasca for monitoring at large scale.  Kafka and Apache Storm are 
proven technologies at scale.  Do you really think you can just pick one 
monitoring protocol that fits the needs of everybody?  Frankly, I'm 
skeptical of that.

> 
> Regards,
> -- 
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
> 
> 
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] external plugin support for Devstack

2014-11-24 Thread Sean Dague
On 11/24/2014 10:17 AM, Chmouel Boudjnah wrote:
> 
> On Mon, Nov 24, 2014 at 3:32 PM, Sean Dague wrote:
> 
> We should also make this something which is gate friendly. I think the
> idea had been that if projects included a /devstack/ directory in them,
> when assembling devstack gate, that would be automatically dropped into
> devstack's extra.d directory before running.
> 
> 
> 
> +1 you are taking this way forward and it would look even better, even
> for official projects managing their own devstack installation would be
> great! (kinda like we are heading with functional tests)
> 
> Which would let projects keep their devstack support local, make it easy
> to gate their project on it, give those projects the ability to make
> local fixes if something in devstack broke them.
> 
> I think that in general providing this kind of functionality is
> goodness. We should probably get the details hammered out though to
> support local running and automated testing coherently.
> 
> 
> We don't seem to have any specs repo for  devstack where this could be
> worked on ?

devstack is part of the qa program so qa-specs is the place to put it.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] external plugin support for Devstack

2014-11-24 Thread Chmouel Boudjnah
On Mon, Nov 24, 2014 at 3:32 PM, Sean Dague  wrote:

> We should also make this something which is gate friendly. I think the
> idea had been that if projects included a /devstack/ directory in them,
> when assembling devstack gate, that would be automatically dropped into
> devstack's extra.d directory before running.
>


+1 you are taking this way forward and it would look even better, even for
official projects managing their own devstack installation would be great!
(kinda like we are heading with functional tests)

Which would let projects keep their devstack support local, make it easy
> to gate their project on it, give those projects the ability to make
> local fixes if something in devstack broke them.
>
> I think that in general providing this kind of functionality is
> goodness. We should probably get the details hammered out though to
> support local running and automated testing coherently.
>

We don't seem to have any specs repo for  devstack where this could be
worked on ?

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone Using the Open Solaris ZFS Driver?

2014-11-24 Thread Drew Fisher


On 11/17/14 10:27 PM, Duncan Thomas wrote:
> Is the new driver drop-in compatible with the old one? If not, can
> existing systems be upgraded to the new driver via some manual steps, or
> is it basically a completely new driver with similar functionality?

The driver in san/solaris.py focuses entirely on iSCSI.  I don't think
existing systems can be upgraded manually but I've never really tried.
We started with a clean slate for Solaris 11 and Cinder and added local
ZFS support for single-system and demo rigs along with fibre channel
and iSCSI drivers.

The driver is publicly viewable here:

https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/solaris/zfs.py

Please note that this driver is based on Havana.  We know it's old and
we're working to get it updated to Juno right now.  I can try to work
with my team to get a blueprint filed and start working on getting it
integrated into trunk.

-Drew

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug day

2014-11-24 Thread Edgar Magana
Great progress! Thanks for this huge effort.

Edgar

From: Eugene Nikanorov <enikano...@mirantis.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Monday, November 24, 2014 at 1:52 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Bug day

Hi again,

I'd like to share some results of the bug day we've conducted on 2014-11-21.
Stats:

  *   73 New bugs
  *   795 Open bugs
  *   285 In-progress bugs

I personally went over some open/new bugs with High/Medium importance, trying 
to detect duplicates, getting rid of some bugs that had been inactive for too 
long (like ones filed back in 2013), and pinging submitters to provide 
more info and such.

I've also moved some bugs from 'In progress' to 'New' or 'Confirmed' and 
removed assignees if their submitted patches were either abandoned or have not 
been updated for months.
So don't be surprised if I've removed someone. As Russell Bryant has mentioned, 
assignment might potentially discourage people from looking into the bug.

Thanks everyone for helping with this!

Eugene.

On Fri, Nov 21, 2014 at 11:03 AM, Eugene Nikanorov <enikano...@mirantis.com> wrote:
Hi neutron folks!

Today we've decided to conduct bug triaging day.
We have more than one thousand bugs needing their state to be checked.
So everyone is welcome to participate!

The goals of bug triaging day are:
1) Decrease the number of New bugs.
Possible 'resolution' would be:
 - confirm bug. If you see the issue in the code, or you can reproduce it
 - mark as Incomplete. Bug description doesn't contain sufficient information 
to triage the bug.
 - mark as Invalid. Not a bug, or we're not going to fix it.
 - mark as duplicate. If you know that other bug filed earlier is describing 
the same issue.
 - mark as Fix committed if you know that the issue was fixed. It's good if you 
could provide a link to corresponding review.

2) Check the Open and In progress bugs.
If the last activity on the bug happened more than a month ago - it makes sense 
sometimes to bring it back to 'New'.
By activity I mean comments in the bug, actively maintained patch on review, 
and such.

Of course feel free to assign a bug to yourself if you know how and going to 
fix it.

Some statistics:

  *   85 New bugs
  *   811 Open bugs
  *   331 In-progress bugs

Thanks,
Eugene.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-24 Thread Alexis Lee
Matthew Gilliard said on Mon, Nov 24, 2014 at 02:50:08PM +:
> 1/ assertFalse() vs assertEqual(x, False) - these are semantically
> different because of python's notion of truthiness, so I don't think
> we ought to make this a rule.
> 2/ expected/actual - I don't see any reason not to try to improve
> in this area, even if it's difficult to automate.
> 3/ warn{ing} - 
> https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L322
"N331: Use LOG.warning due to compatibility with py3"
>
> Linus' post on this from the LKML is almost a decade old (!) but worth 
> reading.
> https://lkml.org/lkml/2004/12/20/255

+1 on all points.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal new hacking rules

2014-11-24 Thread Matthew Gilliard
1/ assertFalse() vs assertEqual(x, False) - these are semantically
different because of python's notion of truthiness (a short demonstration
follows this list), so I don't think we ought to make this a rule.

2/ expected/actual - incorrect failure messages have cost me more time
than I should admit to. I don't see any reason not to try to improve
in this area, even if it's difficult to automate.

3/ warn{ing} - 
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L322
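
Regarding point 1, the semantic gap is visible in two lines (a tiny
illustration, not nova code):

    x = []                     # falsey, but not False
    print(bool(x) == False)    # True  -> assertFalse(x) passes
    print(x == False)          # False -> assertEqual(x, False) fails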

On the overarching point: There is no way to get started with
OpenStack, other than starting small.  My first ever patch (a tidy-up)
was rejected for being trivial, and that was confusing and
disheartening. Nova has a lot on its plate, sure, and plenty of
pending code reviews.  But there is also a lot of inconsistency and
unloved code which *is* worth fixing, because a tidy codebase is a joy
to work with, *and* these changes are ideal to bring new reviewers and
developers into the project.

Linus' post on this from the LKML is almost a decade old (!) but worth reading.
https://lkml.org/lkml/2004/12/20/255

  MG

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-24 Thread Tomasz Napierala

> On 24 Nov 2014, at 11:09, Sergii Golovatiuk  wrote:
> 
> Hi,
> 
> monasca looks overcomplicated for the purposes we need. Also it requires 
> Kafka which is Java based transport protocol.
> I am proposing Sensu. It's architecture is tiny and elegant. Also it uses 
> rabbitmq as transport so we won't need to introduce new protocol.

Do we really need such complicated stuff? Sensu is a huge project, and its 
footprint is quite large. Monit can alert using scripts; can we use it instead 
of an API?
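
For the disk-space case that started the thread, a monit stanza really is
tiny (the path, threshold, and script name here are just an example):

    check filesystem rootfs with path /
      if space usage > 85% then exec "/usr/local/bin/fuel-disk-alert.sh"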

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] external plugin support for Devstack

2014-11-24 Thread Sean Dague
On 11/24/2014 08:56 AM, Chmouel Boudjnah wrote:
> Hello,
> 
> Thanks to the work of Dean and others we have a pretty solid
> plugins/extras support in Devstack. People can add new features in
> devstack within just a single file and that add a whole new feature or
> driver to devstack.
> 
> It seems that there is quite a bit of people who wants to have those
> extras plugins/features in the default devstack core because it's way
> more convenient for their users.
> 
> The policy has been mostly (and correct me if I am wrong) that things
> which are not tested in the gates cannot be in core devstack.
> 
> What about having a plugin structure for devstack assuming a standard
> directory structures which devstack would download and use automatically.
> 
> For the implementation I was thinking about something like this in our
> local.conf :
> 
> enabled_services
> blah_feature:https://git.openstack.org/stackforge/devstack-plugin-feature
> 
> and that repo gets downloaded and used automatically.
> 
> I understand that it is just a shortcut since curl -O
> https://git.openstack.org/stackforge/devstack-plugin-feature/devstack_plugin.sh
> in extras.d would work as well but maybe that would make people more
> confortable not having to be in core devstack and tell their users
> easily how to test a feature/driver with Devstack?

We should also make this something which is gate friendly. I think the
idea had been that if projects included a /devstack/ directory in them,
when assembling devstack gate, that would be automatically dropped into
devstack's extra.d directory before running.

Which would let projects keep their devstack support local, make it easy
to gate their project on it, give those projects the ability to make
local fixes if something in devstack broke them.

I think that in general providing this kind of functionality is
goodness. We should probably get the details hammered out though to
support local running and automated testing coherently.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][api] Extensions, micro-version and the evolution of the API

2014-11-24 Thread Salvatore Orlando
Hi,

As most of you surely know the proposal for micro versioning in Nova [1]
has been approved for Kilo.
I am sure you are aware that a similar proposal has bee discussed for
Neutron at the design summit.

Considering the direction taken by nova, and the fact that neutron
extensions are becoming unmanageable, I would encourage Neutron to move
away from extensions as a way for evolving the API to versioning. Possibly
something along the lines of what has been approved for [1]. Not
necessarily the same, but it's also worth remembering it is of paramount
importance that versioning is done in the same way for all OpenStack API
endpoints.
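
For context, the approved nova mechanism selects the version per request
via an HTTP header, roughly like this (the version number is an arbitrary
example):

    GET /v2.1/servers HTTP/1.1
    Host: nova-api.example.com
    X-OpenStack-Nova-API-Version: 2.3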

However, as usual, the story for Neutron is a bit more complex.
- The work in progress in switching the WSGI framework to Pecan will get
into the way of introducing micro versioning. As we've been planning on
removing the home grown WSGI for a while, it makes sense to give it
priority.
- The neutron API is composed of an API for the core service plus a set of
APIs for advanced services. Therefore sorting out advanced service spin-off
is yet another prerequisite for introducing any form of API versioning.
- It is not yet clear how versioning on the API side should be mapped on
the plugin interface. Ideally one would like to keep plugins independent of
API versioning.
- The proposed plugin interface refactor [spec not yet available] is also
likely to clash with API micro-versioning

Considering the above points, introducing API micro versioning in Kilo is
already a stretch. It therefore makes sense to move towards a more
digestible approach along the lines of what has already been proposed for
the plugin split [2].
We are therefore proposing for Kilo to stop evolving the API through
extension, unless the extensions represents something that can eventually
"spin off" neutron core [3]. Furthermore we'll finally stop calling core
things such as l3 an "extension". The details regarding this proposal are
available in [3].

If you believe that moving away from extensions is a decent idea and that
it's about time we redefined what the core neutron API is, but have any
kind of concern, please comment on [3].
On the other hand, if you reckon that Neutron should keep evolving through
extensions, if you feel that this activity will hinder your team roadmap
for this release cycle, (or if you want to start a flame war on Neutron
APIs and extensions), it might be probably better to discuss on the mailing
list to involve a larger audience.

Thanks for reading,
Salvatore


[1]
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html
[2] https://review.openstack.org/#/c/134680/
[3] https://review.openstack.org/#/c/136760/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Add a new aiogreen executor for Oslo Messaging

2014-11-24 Thread Adam Young

On 11/23/2014 06:13 PM, Robert Collins wrote:

On WSGI - if we're in an asyncio world, I don't think WSGI has any
relevance today - it has no async programming model. While it has
incremental apis and supports generators, thats not close enough to
the same thing: so we're going to have to port our glue code to
whatever container we end up with. As you know I'm pushing on a revamp
of WSGI right now, and I'd be delighted to help put together a
WSGI-for-asyncio PEP, but I think its best thought of as a separate
thing to WSGI per se. It might be a profile of WSGI2 though, since
there is quite some interest in truely async models.

However I've a bigger picture concern. OpenStack only relatively
recently switched away from an explicit async model (Twisted) to
eventlet.

I'm worried that this is switching back to something we switched away
from (in that Twisted and asyncio have much more in common than either
Twisted and eventlet w/magic, or asyncio and eventlet w/magic).



We don't need to use this for WSGI applications.  We need to use this 
for the non-api, message driven portions. WSGI applications should not 
be accepting events/messages.  They already have a messaging model with 
HTTP, and we should use that and only that.


We need to get the Web based services off Eventlet and into Web servers 
where we can make use of native code for security reasons.


Referencing the fine, if somewhat overused model from Ken Pepple:

http://cdn2.hubspot.net/hub/344789/file-448028030-jpg/images/openstack-arch-grizzly-logical-v2.jpg?t=1414604346389

Only the Nova and Quantum (now Neutron, yes it is dated) API server 
shows an arrow coming out of the message queue.  Those arrows should be 
broken.  If we need to write a micro-service as a listener that receives 
an event off the queue and makes an HTTP call to an API server, let us 
do that.



For pieces such as the Nova compute that talk almost exclusively on the 
Queue, we should work to remove Monkey patching and use a clear 
programming model.  If we can do that within the context of Eventlet, 
great.  If we need to replace Eventlet with a different model, it will 
be painful, but should be done.  What is most important is that we avoid 
doing hacks like we've had to do with calls to Memcached and 
monkeypatching threading.
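
For readers following along, the monkey patching in question is the
one-liner below; after it runs, blocking stdlib calls yield to eventlet's
hub behind the scenes:

    import eventlet
    eventlet.monkey_patch()   # rewrites socket, time, thread, select, ...

    import socket             # now the green version: blocking calls on
                              # this module cooperatively yield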


Having a clear programming model around Messaging calls that scales 
should not compromise system integrity, it should complement it.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-24 Thread Boris Pavlovic
Angelo,


One more way to run Tempest is to run it via Rally.
Rally will take care about installation, generating tempest.conf, running
tempest, parsing & storing output results.

Here is manual:
https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/

As a bonus you'll get ability to compare results of 2 runs (if you need
that)

Best regards,
Boris Pavlovic



On Mon, Nov 24, 2014 at 6:05 PM, Vineet Menon 
wrote:

> Hi,
>
> I cannot comment on the best practice.
>
> But I can point you to a few more methods and links.
>
> 1. https://dague.net//presentations/tempest-101/#/
> 2.
> http://www.slideshare.net/kamesh001/open-stack-qa-and-tempest?next_slideshow=1
> 3.
> https://docs.google.com/presentation/d/1M3XhAco_0u7NZQn3Gz53z9VOHHrkQBzEs5gt43ZvhOc/edit#slide=id.p
>
>
>
> Regards,
>
> Vineet Menon
>
>
> On 24 November 2014 at 10:49, Angelo Matarazzo <
> angelo.matara...@dektech.com.au> wrote:
>
>> Sorry for my previous message with wrong subject
>>
>> Hi all,
>> By reading the tempest documentation page [1] a user can run tempest
>> tests by using either testr or run_tempest.sh or tox.
>>
>> What is the best practice?
>> run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it is
>> my preferred way, currently.
>> Any thought?
>>
>> BR,
>> Angelo
>>
>> [1] http://docs.openstack.org/developer/tempest/overview.html#quickstart
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-24 Thread Vineet Menon
Hi,

I cannot comment on the best practice.

But I can point you to a few more methods and links.

1. https://dague.net//presentations/tempest-101/#/
2.
http://www.slideshare.net/kamesh001/open-stack-qa-and-tempest?next_slideshow=1
3.
https://docs.google.com/presentation/d/1M3XhAco_0u7NZQn3Gz53z9VOHHrkQBzEs5gt43ZvhOc/edit#slide=id.p
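
For quick orientation, the three styles mentioned in the docs boil down
to roughly the following (the tox env name and script flags are
assumptions; check tox.ini and ./run_tempest.sh -h in your tree):

    # testrepository directly
    testr run --parallel

    # the wrapper script, which layers option handling on top of testr
    ./run_tempest.sh

    # through a tox-managed virtualenv
    tox -efull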



Regards,

Vineet Menon


On 24 November 2014 at 10:49, Angelo Matarazzo <
angelo.matara...@dektech.com.au> wrote:

> Sorry for my previous message with wrong subject
>
> Hi all,
> By reading the tempest documentation page [1] a user can run tempest tests
> by using either testr or run_tempest.sh or tox.
>
> What is the best practice?
> run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it is my
> preferred way, currently.
> Any thought?
>
> BR,
> Angelo
>
> [1] http://docs.openstack.org/developer/tempest/overview.html#quickstart
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removing XML testing completely from Tempest

2014-11-24 Thread Jay Pipes

On 11/24/2014 08:56 AM, Sean Dague wrote:

Having XML payloads was never a universal part of OpenStack services.
During the Icehouse release the TC declared that being an OpenStack
service requires having a JSON REST API. Projects could do what they
wanted beyond that. Lots of them deprecated and have been removing the
XML cruft since then.

Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
API for a long time.

Given that current branchless Tempest only supports as far back as
Icehouse anyway, after these changes were made, I'd like to propose that
all the XML code in Tempest should be removed. If a project wants to
support something else beyond a JSON API that's on that project to test
and document on their own.

We've definitively blocked adding new XML tests in Tempest anyway, but
getting rid of the XML debt in the project will simplify it quite a bit,
make it easier for contributors to join in, and seems consistent with
the direction of OpenStack as a whole.


But Sean, without XML support, we will lose all of our enterprise customers!

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] external plugin support for Devstack

2014-11-24 Thread Chmouel Boudjnah
Hello,

Thanks to the work of Dean and others, we have pretty solid plugins/extras
support in Devstack. People can add a whole new feature or driver to
Devstack with just a single file.

It seems that quite a few people want to have those extra plugins/features
in core Devstack because it's way more convenient for their users.

The policy has mostly been (correct me if I am wrong) that things which
are not tested in the gate cannot be in core Devstack.

What about having a plugin structure for Devstack that assumes a standard
directory structure, which Devstack would download and use automatically?

For the implementation I was thinking about something like this in our
local.conf:

enabled_services blah_feature:
https://git.openstack.org/stackforge/devstack-plugin-feature

and that repo gets downloaded and used automatically.

I understand that it is just a shortcut, since curl -O
https://git.openstack.org/stackforge/devstack-plugin-feature/devstack_plugin.sh
in extras.d would work as well, but maybe it would make people more
comfortable about not having to be in core Devstack, and let them easily
tell their users how to test a feature/driver with Devstack?

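To make the proposal concrete, here is a rough Python sketch of the fetch
step; the local.conf syntax is the one proposed above, while the destination
path and the devstack_plugin.sh entry point are assumptions, not existing
Devstack behaviour:

import subprocess

def fetch_plugin(local_conf_line, dest_root="/opt/stack/plugins"):
    # e.g. "enabled_services blah_feature:https://git.openstack.org/..."
    _, spec = local_conf_line.split(None, 1)
    name, url = spec.split(":", 1)
    dest = "%s/%s" % (dest_root, name)
    subprocess.check_call(["git", "clone", url, dest])
    return "%s/devstack_plugin.sh" % dest  # script stack.sh would source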
Cheers,
Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] removing XML testing completely from Tempest

2014-11-24 Thread Sean Dague
Having XML payloads was never a universal part of OpenStack services.
During the Icehouse release the TC declared that being an OpenStack
service requires having a JSON REST API. Projects could do what they
wanted beyond that. Lots of them deprecated and have been removing the
XML cruft since then.

Tempest is a tool to test the OpenStack API. OpenStack hasn't had an XML
API for a long time.

Given that current branchless Tempest only supports as far back as
Icehouse anyway, after these changes were made, I'd like to propose that
all the XML code in Tempest should be removed. If a project wants to
support something else beyond a JSON API that's on that project to test
and document on their own.

We've definitively blocked adding new XML tests in Tempest anyway, but
getting rid of the XML debt in the project will simplify it quite a bit,
make it easier for contributors to join in, and seems consistent with
the direction of OpenStack as a whole.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Organizational changes to support stable branches

2014-11-24 Thread Thierry Carrez
OK, since there was no disagreement I pushed the changes to:
https://wiki.openstack.org/wiki/StableBranch

We'll get started setting up project-specific stable-maint teams ASAP.
Cheers,

Thierry Carrez wrote:
> TL;DR:
> Every project should designate a Stable branch liaison.
> 
> Hi everyone,
> 
> Last week at the summit we discussed evolving the governance around
> stable branches, in order to maintain them more efficiently (and
> hopefully for a longer time) in the future.
> 
> The current situation is the following: there is a single
> stable-maint-core review team that reviews all backports for all
> projects, making sure the stable rules are followed. This does not scale
> that well, so we started adding project-specific people to the single
> group, but they (rightfully) only care about one project. Things had to
> change for Kilo. Here is what we came up with:
> 
> 1. We propose that integrated projects with stable branches designate a
> formal "Stable Branch Liaison" (by default, that would be the PTL, but I
> strongly encourage someone specifically interested in stable branches to
> step up). The Stable Branch Liaison is responsible for making sure
> backports are proposed for critical issues in their project, and making
> sure proposed backports are reviewed. They are also the contact point
> for stable branch release managers around point release times.
> 
> 2. We propose to set up project-specific review groups
> ($PROJECT-stable-core) which would be in charge of reviewing backports
> for a given project, following the stable rules. Originally that group
> should be the Stable Branch Liaison + stable-maint-core. The group is
> managed by stable-maint-core, so that we make sure any addition is well
> aware of the Stable Branch rules before they are added. The Stable
> Branch Liaison should suggest names for addition to the group as needed.
> 
> 3. The current stable-maint-core group would be reduced to stable branch
> release managers and other active cross-project stable branch rules
> custodians. We'll remove project-specific people and PTLs that were
> added in the past. The new group would be responsible for granting
> exceptions for all questionable backports raised by $PROJECT-stable-core
> groups, providing backport review help everywhere, maintaining the stable
> branch rules (and making sure they are respected), and educating proposed
> $PROJECT-stable-core members on the rules.
> 
> 4. Each stable branch (stable/icehouse, stable/juno...) that we
> concurrently support should have a champion. Stable Branch Champions are
> tasked with championing support for a specific stable branch, making sure
> the branch stays in good shape and remains usable at all times. They
> monitor periodic jobs failures and enlist the help of others in order to
> fix the branches in case of breakage. They should also raise flags if
> for some reason they are blocked and don't receive enough support, in
> which case early abandon of the branch will be considered. Adam
> Gandelman volunteered to be the stable/juno champion. Ihar Hrachyshka
> (was) volunteered to be the stable/icehouse champion.
> 
> 5. To set expectations right and evolve the meaning of "stable" over
> time to gradually mean more "not changing", we propose to introduce
> support phases for stable branches. During the first 6 months of life of
> a stable branch (Phase I) any significant bug may be backported. During
> the next 6 months of life of a stable branch (Phase II), only critical
> issues and security fixes may be backported. After that and until end of
> life (Phase III), only security fixes may be backported. That way, at
> any given time, there is only one stable branch in "Phase I" support.
> 
> 6. In order to raise awareness, all stable branch discussions will now
> happen on the -dev list (with prefix [stable]). The
> openstack-stable-maint list is now only used for periodic jobs reports,
> and is otherwise read-only.
> 
> Let us know if you have any comment, otherwise we'll proceed to set
> those new policies up.
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Evgeniy L
Hi Dmitry,

Our current validation implementation is based on jsonschema;
we will figure out how to hack/configure it to provide a more
human-readable message.

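For what it's worth, jsonschema already ships a helper for picking the most
relevant error out of a validation run; a minimal sketch of that kind of
hack (the schema and task data below are illustrative only):

from jsonschema import Draft4Validator
from jsonschema.exceptions import best_match

schema = {
    "type": "object",
    "required": ["timeout"],
    "properties": {"timeout": {"type": "integer"}},
}
task = {"puppet_manifest": "site.pp"}  # missing the mandatory timeout

# best_match() picks the most specific sub-error instead of the generic
# "is not valid under any of the given schemas" message.
error = best_match(Draft4Validator(schema).iter_errors(task))
if error is not None:
    print("%s: %s" % (list(error.absolute_path), error.message))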
Thanks,

On Mon, Nov 24, 2014 at 2:34 PM, Dmitry Ukov  wrote:

> That was my fault. I did not expect that the timeout parameter is
> mandatory for a task. Everything works perfectly fine.
> Thanks for the help.
>
> On Mon, Nov 24, 2014 at 3:05 PM, Tatyana Leontovich <
> tleontov...@mirantis.com> wrote:
>
>> Guys,
>> task like
>> - role: ['controller']
>>   stage: post_deployment
>>   type: puppet
>>   parameters:
>>     puppet_manifest: puppet/site.pp
>>     puppet_modules: puppet/modules/
>>     timeout: 360
>> works fine for me, so I believe your task should look like
>>
>> cat tasks.yaml
>> # This tasks will be applied on controller nodes,
>> # here you can also specify several roles, for example
>> # ['cinder', 'compute'] will be applied only on
>> # cinder and compute nodes
>> - role: ['controller']
>>   stage: post_deployment
>>   type: puppet
>>   parameters:
>> puppet_manifest: install_keystone_ldap.pp
>> puppet_modules: /etc/puppet/modules/
>>
>> And be sure that install_keystone_ldap.pp is the one that invokes the other manifests.
>>
>> Best,
>> Tatyana
>>
>> On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov  wrote:
>>
>>> Unfortunately this does not work
>>>
>>> cat tasks.yaml
>>> # This tasks will be applied on controller nodes,
>>> # here you can also specify several roles, for example
>>> # ['cinder', 'compute'] will be applied only on
>>> # cinder and compute nodes
>>> - role: ['controller']
>>>   stage: post_deployment
>>>   type: puppet
>>>   parameters:
>>> puppet_manifest: install_keystone_ldap.pp
>>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>>
>>>
>>> fpb --build .
>>> /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
>>> UserWarning: /home/dukov/.python-eggs is writable by group/others and
>>> vulnerable to attack when used with get_resource_filename. Consider a more
>>> secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
>>> environment variable).
>>>   warnings.warn(msg, UserWarning)
>>> 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format "0 ->
>>> parameters", for file "./tasks.yaml", {'puppet_modules':
>>> 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
>>> 'install_keystone_ldap.pp'} is not valid under any of the given schemas
>>> Traceback (most recent call last):
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>>> line 90, in main
>>> perform_action(args)
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>>> line 77, in perform_action
>>> actions.BuildPlugin(args.build).run()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>>> line 42, in run
>>> self.check()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>>> line 99, in check
>>> self._check_structure()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>>> line 111, in _check_structure
>>> ValidatorManager(self.plugin_path).get_validator().validate()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>>> line 39, in validate
>>> self.check_schemas()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>>> line 46, in check_schemas
>>> self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>>> line 47, in validate_file_by_schema
>>> self.validate_schema(data, schema, path)
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>>> line 43, in validate_schema
>>> value_path, path, exc.message))
>>> ValidationError: Wrong value format "0 -> parameters", for file
>>> "./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
>>> 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
>>> the given schemas
>>>
>>>
>>> On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi,

 according to [1] you should be able to use:

 puppet_modules: "puppet/:/etc/puppet/modules/"

 This is a valid YAML string parameter that should be parsed just fine.

 [1]
 https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62

 Regards
 --
 Alex


 On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov 
 wrote:

> Hello All,
> The current implementation of plugins in Fuel unpacks the plugin tarball
> into /var/www/nailgun/plugins/.
> If we implement deplo

[openstack-dev] [nova] Integration with Ceph

2014-11-24 Thread Sergey Nikitin
Hi,
As you know, we can use Ceph as ephemeral storage in nova, but we have some
problems with its integration. First of all, the total storage of compute
nodes is calculated incorrectly (more details here:
https://bugs.launchpad.net/nova/+bug/1387812). I want to fix this problem.
Currently the total storage size is just the sum of the storage of all
compute nodes, and this information is taken directly from the DB (
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L663-L691
).
To fix the problem we should check the type of storage in use. If the type
of storage is RBD, we should get the total storage information directly
from the Ceph cluster.
I proposed a patch (https://review.openstack.org/#/c/132084/) which should
fix this problem, but I got the fair comment that we shouldn't check the
type of storage on the API layer.

The other problem is that the reported size of each compute node is
incorrect too: currently it equals the size of the whole Ceph cluster.

On the one hand it is good not to check the type of storage on the API
layer; on the other hand there are some reasons to check it there:
1. It would be useful for live migration, because now a user has to send
information about storage with the API request.
2. It helps to fix the problem with total storage.
3. It helps to fix the problem with the size of compute nodes.

So I want to ask you: "Is it a good idea to get information about the type
of storage on the API layer? If not, are there any ideas on how to get
correct information about Ceph storage?"
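
For reference, a minimal sketch of reading cluster-wide capacity straight
from Ceph with python-rados; the conffile path is an assumption, and this
only illustrates the query, not the proposed nova patch itself:

import rados

# Query total/used capacity from the Ceph cluster itself instead of
# summing per-compute-node values from the nova DB.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    stats = cluster.get_cluster_stats()  # keys include kb, kb_used, kb_avail
    print("ceph cluster: %d GB total, %d GB used"
          % (stats["kb"] / 1024 / 1024, stats["kb_used"] / 1024 / 1024))
finally:
    cluster.shutdown()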
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel Plugins, First look; Whats Next?

2014-11-24 Thread Evgeniy L
Hi Andrew,

Comments inline.
Also, could you please provide a link to the OpenStack upgrade feature?
It's not clear why you need it as a plugin and how you are going
to deliver this feature.

On Sat, Nov 22, 2014 at 4:23 AM, Andrew Woodward  wrote:

> So as part of the pumphouse integration, I've started poking around
> the Plugin Arch implementation as an attempt to plug it into the fuel
> master.
>
> This would require that the plugin install a container, and some
> scripts into the master node.
>
> First look:
> I've looked over the fuel plugins spec [1] and see that the install
> script was removed from rev 15 -> 16 (line 134). This creates problems
> due to the need to install the container and scripts, so I've
> created a bug [2] for this so that we can allow for an install script
> to be executed prior to HCF for 6.0.
>

Yes, it was removed, but nothing stops you from creating the install
script and putting it in the tarball; you don't need any changes in the
current implementation.

For the reasons why it was done this way, see the separate mailing thread [1].

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-October/049073.html


>
> Looking into the implementation of the install routine [3] to
> implement [2], I see that the fuelclient is extracting the tar blindly
> (more on that at #3) on the executor system that fuelclient is being
> executed from. Problems with this include 1) the fuelclient may not
> be root privileged (like in Mirantis OpenStack Express) 2) the
> fuelclient may not be running on the same system as nailgun 3) we are
> just calling .extractall on the tarball; this means that we haven't
> done any validation on the files coming out of the tarball. We need to
> validate that 3.a) the tarball was actually encoded with the right
> base path 3.b) that the tasks.yaml file is validated and all the noted
> scripts are found. Really, the install of the plugin should be handled
> by the nailgun side to help with 1,2.
>

1. If you have a custom installation, you have to provide custom
permissions for the /var/www/nailgun/plugins directory.
2. You are absolutely right; see the thread above for why we decided to
add this feature even though it was a wrong decision from an architecture
point of view.
3. "Haven't done any validation" - not exactly. Validation is done at the
plugin building stage, and we also have simple validation at the plugin
installation stage on the Nailgun side (that the data are consistent from
Nailgun's point of view). There are several reasons why it was done mainly
on the fuel-plugin-builder side:
  a. the plugin is validated before it's installed (it dramatically
  simplifies development)
  b. you can also check that a plugin is valid without building it;
  use the 'fpb --check fuel_plugin_name' parameter
  c. faster delivery of fixes: if there is a bug in validation (we had
  several of them during the development of fuel-plugin-builder), we
  cannot just release a new version of Fuel, but we can do it with
  fuel-plugin-builder; we had 2 releases [1]. For more complicated
  structures you will have bugs in validation for sure.
  d. if we decide to support validation on both sides, we will end up
  with a lot of bugs related to desynchronization of the validators
  between Nailgun and fuel-plugin-builder

[1]
https://github.com/stackforge/fuel-plugins/blob/master/fuel_plugin_builder/CHANGELOG.md

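On point 3 above, the usual defensive pattern around a blind .extractall
is to validate every member path before extracting; a minimal sketch with
Python's tarfile, not the actual fuelclient code:

import os
import tarfile

def safe_extract(tar_path, dest):
    # Reject members that would escape the destination (e.g. "../" tricks),
    # then extract the whole archive.
    dest = os.path.realpath(dest)
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if target != dest and not target.startswith(dest + os.sep):
                raise ValueError("unsafe path in tarball: %s" % member.name)
        tar.extractall(dest)
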

>
> What's next?
> There are many parts of PA that need to be extended. I think that
> these are the ones that we must tackle next to cover the most cases:
> a) plugin packaging: it appears that none of the "core plugins" (those
> in fuel-plugins) are bundled into the iso.
> b) plugin signing: we can't have "core plugins" without some method of
> testing, certifying, and signing them so that we can know that they
> are trusted.
>
> with the help of granular roles:
> c) the ability to replace or add new granular roles
> d) the ability to add or modify real roles
>
> with the help of advanced networks:
> e) add new network roles
>
> At some point soon, we also need to discuss making it easier to find a
> catalog of modules and pull them from it, but this is less important
> than the above
>
> [1]
> https://review.openstack.org/#/c/125608/15..16/specs/6.0/cinder-neutron-plugins-in-fuel.rst
> [2] https://bugs.launchpad.net/fuel/+bug/1395228
> [3]
> https://github.com/stackforge/fuel-web/blob/master/fuelclient/fuelclient/objects/plugins.py#L49
>
> --
> Andrew
> Mirantis
> Ceph community
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Where should Schema files live?

2014-11-24 Thread Sandy Walsh
From: Eoghan Glynn [egl...@redhat.com] Friday, November 21, 2014 11:03 AM
>> >> Some problems / options:
>> >> a. Unlike Python, there is no simple pip install for text files. No
>> >> version control per se. Basically whatever we pull from the repo. The
>> >> problem with a git clone is we need to tweak config files to point to a
>> >> directory and that's a pain for gating tests and CD. Could we assume a
>> >> symlink to some well-known location?
>> >> a': I suppose we could make a python installer for them, but that's a
>> >> pain for other language consumers.
>>
>> >Would it be unfair to push that burden onto the writers of clients
>> >in other languages?
>> >
>> >i.e. OpenStack, being largely python-centric, would take responsibility
>> >for both:
>> >
>> >  1. Maintaining the text versions of the schema in-tree (e.g. as json)
>> >
>> >and:
>> >
>> >  2. Producing a python-specific installer based on #1
>> >
>> >whereas, the first Java-based consumer of these schema would take
>> >#1 and package it up in their native format, i.e. as a jar or
>> >OSGi bundle.

I think Doug's suggestion of keeping the schema files in-tree and pushing them 
to a well-known tarball maker in a build step is best so far. 

It's still a little clunky, but not as clunky as having to sync two repos. 

>[snip]
>> >> d. Should we make separate distro packages? Install to a well known
>> >> location all the time? This would work for local dev and integration
>> >> testing and we could fall back on B and C for production distribution. Of
>> >> course, this will likely require people to add a new distro repo. Is that
>> >> a concern?
>>
>> >Quick clarification ... when you say "distro packages", do you mean
>> >Linux-distro-specific package formats such as .rpm or .deb?
>>
>> Yep.

>So that would indeed work, but just to sound a small note of caution
>that keeping an oft-changing package (assumption #5) up-to-date for
>fedora20/21 & epel6/7, or precise/trusty, would involve some work.

>I don't know much about the Debian/Ubuntu packaging pipeline, in
>particular how it could be automated.

>But in my small experience of Fedora/EL packaging, the process is
>somewhat resistant to many fine-grained updates.

Ah, good to know. So, if we go with the tarball approach, we should be able to 
avoid this. And it allows the service to easily serve up the schema using
their existing REST API.

Should we proceed under the assumption we'll push to a tarball in a post-build 
step? It could change if we find it's too messy. 

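For what it's worth, the post-build step could be as small as the following
sketch; the directory layout and output name are assumptions, not an agreed
convention:

import tarfile

def bundle_schemas(schema_dir="schemas", version="2014.2"):
    # Bundle the in-tree schema files into a versioned tarball that a
    # build job could then push to a well-known download location.
    out = "schemas-%s.tar.gz" % version
    with tarfile.open(out, "w:gz") as tar:
        tar.add(schema_dir, arcname="schemas")
    return out
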
-S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Mike Scherbakov
> I did not expect that the timeout parameter is mandatory for a task

UX obviously has to be improved here. Can we emit a clear error if a
required parameter is missing, instead of throwing an unclear exception?

On Mon, Nov 24, 2014 at 2:34 PM, Dmitry Ukov  wrote:

> That was my fault. I did not expect that the timeout parameter is
> mandatory for a task. Everything works perfectly fine.
> Thanks for the help.
>
> On Mon, Nov 24, 2014 at 3:05 PM, Tatyana Leontovich <
> tleontov...@mirantis.com> wrote:
>
>> Guys,
>> task like
>> - role: ['controller']
>>   stage: post_deployment
>>   type: puppet
>>   parameters:
>>     puppet_manifest: puppet/site.pp
>>     puppet_modules: puppet/modules/
>>     timeout: 360
>> works fine for me, so I believe your task should look like
>>
>> cat tasks.yaml
>> # This tasks will be applied on controller nodes,
>> # here you can also specify several roles, for example
>> # ['cinder', 'compute'] will be applied only on
>> # cinder and compute nodes
>> - role: ['controller']
>>   stage: post_deployment
>>   type: puppet
>>   parameters:
>> puppet_manifest: install_keystone_ldap.pp
>> puppet_modules: /etc/puppet/modules/
>>
>> And be sure that install_keystone_ldap.pp is the one that invokes the other manifests.
>>
>> Best,
>> Tatyana
>>
>> On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov  wrote:
>>
>>> Unfortunately this does not work
>>>
>>> cat tasks.yaml
>>> # This tasks will be applied on controller nodes,
>>> # here you can also specify several roles, for example
>>> # ['cinder', 'compute'] will be applied only on
>>> # cinder and compute nodes
>>> - role: ['controller']
>>>   stage: post_deployment
>>>   type: puppet
>>>   parameters:
>>> puppet_manifest: install_keystone_ldap.pp
>>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>>
>>>
>>> fpb --build .
>>> /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
>>> UserWarning: /home/dukov/.python-eggs is writable by group/others and
>>> vulnerable to attack when used with get_resource_filename. Consider a more
>>> secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
>>> environment variable).
>>>   warnings.warn(msg, UserWarning)
>>> 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format "0 ->
>>> parameters", for file "./tasks.yaml", {'puppet_modules':
>>> 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
>>> 'install_keystone_ldap.pp'} is not valid under any of the given schemas
>>> Traceback (most recent call last):
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>>> line 90, in main
>>> perform_action(args)
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>>> line 77, in perform_action
>>> actions.BuildPlugin(args.build).run()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>>> line 42, in run
>>> self.check()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>>> line 99, in check
>>> self._check_structure()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>>> line 111, in _check_structure
>>> ValidatorManager(self.plugin_path).get_validator().validate()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>>> line 39, in validate
>>> self.check_schemas()
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>>> line 46, in check_schemas
>>> self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>>> line 47, in validate_file_by_schema
>>> self.validate_schema(data, schema, path)
>>>   File
>>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>>> line 43, in validate_schema
>>> value_path, path, exc.message))
>>> ValidationError: Wrong value format "0 -> parameters", for file
>>> "./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
>>> 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
>>> the given schemas
>>>
>>>
>>> On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi,

 according to [1] you should be able to use:

 puppet_modules: "puppet/:/etc/puppet/modules/"

 This is a valid YAML string parameter that should be parsed just fine.

 [1]
 https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62

 Regards
 --
 Alex


 On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov 
 wrote:

> Hello All,
> The current implementation of plugins in Fuel unpacks the plugin tarball
> into

Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-11-24 Thread Vladimir Kuklin
Guys, there is already a pxz utility in the Ubuntu repos. Let's test it.

On Mon, Nov 24, 2014 at 2:32 PM, Bartłomiej Piotrowski <
bpiotrow...@mirantis.com> wrote:

> On 24 Nov 2014, at 12:25, Matthew Mosesohn  wrote:
> > I did this exercise over many iterations during Docker container
> > packing and found that as long as the data is under 1gb, it's going to
> > compress really well with xz. Over 1gb, lrzip looks more attractive
> > (but only on high memory systems). In reality, we're looking at log
> > footprints from OpenStack environments on the order of 500mb to 2gb.
> >
> > xz is very slow on single-core systems with 1.5gb of memory, but it's
> > quite a bit faster if you run it on a more powerful system. I've found
> > level 4 compression to be the best compromise that works well enough
> > that it's still far better than gzip. If increasing compression time
> > by 3-5x is too much for you guys, why not just go to bzip? You'll
> > still improve compression but be able to cut back on time.
> >
> > Best Regards,
> > Matthew Mosesohn
>
> The alpha release of xz supports multithreading via the -T (or --threads)
> parameter. We could also use pbzip2 instead of regular bzip2 to cut some
> time on multi-core systems.
>
> Regards,
> Bartłomiej Piotrowski
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-11-24 Thread Vladimir Kuklin
Matthew, Dmitry,

I would suggest that we:
1) use multi-threaded utilities
2) use xz for small snapshots (<1 GB) and lrzip for bigger snapshots

On Mon, Nov 24, 2014 at 2:25 PM, Matthew Mosesohn 
wrote:

> I did this exercise over many iterations during Docker container
> packing and found that as long as the data is under 1gb, it's going to
> compress really well with xz. Over 1gb and lrzip looks more attractive
> (but only on high memory systems). In reality, we're looking at log
> footprints from OpenStack environments on the order of 500mb to 2gb.
>
> xz is very slow on single-core systems with 1.5gb of memory, but it's
> quite a bit faster if you run it on a more powerful system. I've found
> level 4 compression to be the best compromise that works well enough
> that it's still far better than gzip. If increasing compression time
> by 3-5x is too much for you guys, why not just go to bzip? You'll
> still improve compression but be able to cut back on time.
>
> Best Regards,
> Matthew Mosesohn
>
> On Mon, Nov 24, 2014 at 3:14 PM, Vladimir Kuklin 
> wrote:
> > IMO, we should get a bunch of snapshots and decide which compression to
> > use according to the results of an experiment. XZ compression takes much
> > longer, so you will need to use a parallel xz compression utility.
> >
> > On Fri, Nov 21, 2014 at 9:09 PM, Tomasz Napierala <
> tnapier...@mirantis.com>
> > wrote:
> >>
> >>
> >> > On 21 Nov 2014, at 16:55, Dmitry Pyzhov  wrote:
> >> >
> >> > We have a request to change compression from gz to xz. This simple
> >> > change halves our snapshots. Does anyone have any objections? Otherwise
> >> > we will include this change in the 6.0 release.
> >>
> >> I agree with the change, but it shouldn’t be high priority.
> >>
> >> Regards,
> >> --
> >> Tomasz 'Zen' Napierala
> >> Sr. OpenStack Engineer
> >> tnapier...@mirantis.com
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 45bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuk...@mirantis.com
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Dmitry Ukov
That was my fault. I did not expect that the timeout parameter is
mandatory for a task. Everything works perfectly fine.
Thanks for the help.

On Mon, Nov 24, 2014 at 3:05 PM, Tatyana Leontovich <
tleontov...@mirantis.com> wrote:

> Guys,
> task like
> - role: ['controller']
>   stage: post_deployment
>   type: puppet
>   parameters:
>     puppet_manifest: puppet/site.pp
>     puppet_modules: puppet/modules/
>     timeout: 360
> works fine for me, so I believe your task should look like
>
> cat tasks.yaml
> # This tasks will be applied on controller nodes,
> # here you can also specify several roles, for example
> # ['cinder', 'compute'] will be applied only on
> # cinder and compute nodes
> - role: ['controller']
>   stage: post_deployment
>   type: puppet
>   parameters:
> puppet_manifest: install_keystone_ldap.pp
> puppet_modules: /etc/puppet/modules/
>
> And be sure that install_keystone_ldap.pp is the one that invokes the other manifests.
>
> Best,
> Tatyana
>
> On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov  wrote:
>
>> Unfortunately this does not work
>>
>> cat tasks.yaml
>> # This tasks will be applied on controller nodes,
>> # here you can also specify several roles, for example
>> # ['cinder', 'compute'] will be applied only on
>> # cinder and compute nodes
>> - role: ['controller']
>>   stage: post_deployment
>>   type: puppet
>>   parameters:
>> puppet_manifest: install_keystone_ldap.pp
>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>
>>
>> fpb --build .
>> /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
>> UserWarning: /home/dukov/.python-eggs is writable by group/others and
>> vulnerable to attack when used with get_resource_filename. Consider a more
>> secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
>> environment variable).
>>   warnings.warn(msg, UserWarning)
>> 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format "0 ->
>> parameters", for file "./tasks.yaml", {'puppet_modules':
>> 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
>> 'install_keystone_ldap.pp'} is not valid under any of the given schemas
>> Traceback (most recent call last):
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>> line 90, in main
>> perform_action(args)
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>> line 77, in perform_action
>> actions.BuildPlugin(args.build).run()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>> line 42, in run
>> self.check()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>> line 99, in check
>> self._check_structure()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>> line 111, in _check_structure
>> ValidatorManager(self.plugin_path).get_validator().validate()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>> line 39, in validate
>> self.check_schemas()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>> line 46, in check_schemas
>> self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>> line 47, in validate_file_by_schema
>> self.validate_schema(data, schema, path)
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>> line 43, in validate_schema
>> value_path, path, exc.message))
>> ValidationError: Wrong value format "0 -> parameters", for file
>> "./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
>> 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
>> the given schemas
>>
>>
>> On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko > > wrote:
>>
>>> Hi,
>>>
>>> according to [1] you should be able to use:
>>>
>>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>>
>>> This is a valid YAML string parameter that should be parsed just fine.
>>>
>>> [1]
>>> https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62
>>>
>>> Regards
>>> --
>>> Alex
>>>
>>>
>>> On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov 
>>> wrote:
>>>
 Hello All,
 The current implementation of plugins in Fuel unpacks the plugin tarball
 into /var/www/nailgun/plugins/.
 If we implement the deployment part of a plugin using Puppet, there is
 a setting:
 puppet_modules:

 This setting specifies the path to the modules folder. Since the main
 deployment part of a plugin is implemented as a Puppet module, the
 module path setting should be:

 puppet_modules: puppet/

 There is a big probability that a plugin implementation will require some
 custo

Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-11-24 Thread Bartłomiej Piotrowski
On 24 Nov 2014, at 12:25, Matthew Mosesohn  wrote:
> I did this exercise over many iterations during Docker container
> packing and found that as long as the data is under 1gb, it's going to
> compress really well with xz. Over 1gb, lrzip looks more attractive
> (but only on high memory systems). In reality, we're looking at log
> footprints from OpenStack environments on the order of 500mb to 2gb.
> 
> xz is very slow on single-core systems with 1.5gb of memory, but it's
> quite a bit faster if you run it on a more powerful system. I've found
> level 4 compression to be the best compromise that works well enough
> that it's still far better than gzip. If increasing compression time
> by 3-5x is too much for you guys, why not just go to bzip? You'll
> still improve compression but be able to cut back on time.
> 
> Best Regards,
> Matthew Mosesohn

The alpha release of xz supports multithreading via the -T (or --threads)
parameter. We could also use pbzip2 instead of regular bzip2 to cut some
time on multi-core systems.

Regards,
Bartłomiej Piotrowski
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Dmitriy Shulyak
I tried to reproduce this behavior with tasks.yaml:

# Deployment is required for controllers
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
puppet_manifest: site.pp
puppet_modules: "puppet/:/etc/puppet/modules"
timeout: 360

And the plugin actually built successfully, so as Tatyana and Alex said,
the problem is not with the puppet_modules format.

I would suggest updating fuel-plugin-builder; if the issue reproduces,
you can put your plugin up on Gerrit review or a personal GitHub repo,
and we can try to build it.


On Mon, Nov 24, 2014 at 1:05 PM, Tatyana Leontovich <
tleontov...@mirantis.com> wrote:

> Guys,
> task like
> - role: ['controller']
>   stage: post_deployment
>   type: puppet
>   parameters:
>     puppet_manifest: puppet/site.pp
>     puppet_modules: puppet/modules/
>     timeout: 360
> works fine for me, so I believe your task should look like
>
> cat tasks.yaml
> # This tasks will be applied on controller nodes,
> # here you can also specify several roles, for example
> # ['cinder', 'compute'] will be applied only on
> # cinder and compute nodes
> - role: ['controller']
>   stage: post_deployment
>   type: puppet
>   parameters:
> puppet_manifest: install_keystone_ldap.pp
> puppet_modules: /etc/puppet/modules/
>
> And be sure that install_keystone_ldap.pp is the one that invokes the other manifests.
>
> Best,
> Tatyana
>
> On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov  wrote:
>
>> Unfortunately this does not work
>>
>> cat tasks.yaml
>> # This tasks will be applied on controller nodes,
>> # here you can also specify several roles, for example
>> # ['cinder', 'compute'] will be applied only on
>> # cinder and compute nodes
>> - role: ['controller']
>>   stage: post_deployment
>>   type: puppet
>>   parameters:
>> puppet_manifest: install_keystone_ldap.pp
>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>
>>
>> fpb --build .
>> /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
>> UserWarning: /home/dukov/.python-eggs is writable by group/others and
>> vulnerable to attack when used with get_resource_filename. Consider a more
>> secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
>> environment variable).
>>   warnings.warn(msg, UserWarning)
>> 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format "0 ->
>> parameters", for file "./tasks.yaml", {'puppet_modules':
>> 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
>> 'install_keystone_ldap.pp'} is not valid under any of the given schemas
>> Traceback (most recent call last):
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>> line 90, in main
>> perform_action(args)
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
>> line 77, in perform_action
>> actions.BuildPlugin(args.build).run()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>> line 42, in run
>> self.check()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>> line 99, in check
>> self._check_structure()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
>> line 111, in _check_structure
>> ValidatorManager(self.plugin_path).get_validator().validate()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>> line 39, in validate
>> self.check_schemas()
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
>> line 46, in check_schemas
>> self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>> line 47, in validate_file_by_schema
>> self.validate_schema(data, schema, path)
>>   File
>> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
>> line 43, in validate_schema
>> value_path, path, exc.message))
>> ValidationError: Wrong value format "0 -> parameters", for file
>> "./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
>> 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
>> the given schemas
>>
>>
>> On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko > > wrote:
>>
>>> Hi,
>>>
>>> according to [1] you should be able to use:
>>>
>>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>>
>>> This is a valid YAML string parameter that should be parsed just fine.
>>>
>>> [1]
>>> https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62
>>>
>>> Regards
>>> --
>>> Alex
>>>
>>>
>>> On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov 
>>> wrote:
>>>
 Hello All,
 The current implementation of plugins in Fuel unpacks the plugin tarball
 into /var/www/nailgun/plu

Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-11-24 Thread Matthew Mosesohn
I did this exercise over many iterations during Docker container
packing and found that as long as the data is under 1gb, it's going to
compress really well with xz. Over 1gb, lrzip looks more attractive
(but only on high memory systems). In reality, we're looking at log
footprints from OpenStack environments on the order of 500mb to 2gb.

xz is very slow on single-core systems with 1.5gb of memory, but it's
quite a bit faster if you run it on a more powerful system. I've found
level 4 compression to be the best compromise that works well enough
that it's still far better than gzip. If increasing compression time
by 3-5x is too much for you guys, why not just go to bzip? You'll
still improve compression but be able to cut back on time.

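To make the trade-off easy to measure, gzip and xz level 4 can be compared
from the standard library alone; a rough sketch (stdlib lzma needs Python 3
or the backports.lzma package, and is single-threaded, so parallel tools
like pxz or pbzip2 remain external):

import gzip
import lzma
import os

SRC = "snapshot.tar"  # an existing diagnostic snapshot, name assumed

with open(SRC, "rb") as f:
    data = f.read()
with gzip.open(SRC + ".gz", "wb") as f:
    f.write(data)
with lzma.open(SRC + ".xz", "wb", preset=4) as f:  # xz level 4
    f.write(data)

for path in (SRC, SRC + ".gz", SRC + ".xz"):
    print("%s: %d bytes" % (path, os.path.getsize(path)))
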
Best Regards,
Matthew Mosesohn

On Mon, Nov 24, 2014 at 3:14 PM, Vladimir Kuklin  wrote:
> IMO, we should get a bunch of snapshots and decide which compression to use
> according to the results of an experiment. XZ compression takes much longer,
> so you will need to use parallel xz compression utility.
>
> On Fri, Nov 21, 2014 at 9:09 PM, Tomasz Napierala 
> wrote:
>>
>>
>> > On 21 Nov 2014, at 16:55, Dmitry Pyzhov  wrote:
>> >
>> > We have a request to change compression from gz to xz. This simple
>> > change halves our snapshots. Does anyone have any objections? Otherwise
>> > we will include this change in the 6.0 release.
>>
>> I agree with the change, but it shouldn’t be high priority.
>>
>> Regards,
>> --
>> Tomasz 'Zen' Napierala
>> Sr. OpenStack Engineer
>> tnapier...@mirantis.com
>>
>>
>>
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-11-24 Thread Vladimir Kuklin
IMO, we should get a bunch of snapshots and decide which compression to use
according to the results of an experiment. XZ compression takes much
longer, so you will need to use a parallel xz compression utility.

On Fri, Nov 21, 2014 at 9:09 PM, Tomasz Napierala 
wrote:

>
> > On 21 Nov 2014, at 16:55, Dmitry Pyzhov  wrote:
> >
> > We have a request to change compression from gz to xz. This simple
> > change halves our snapshots. Does anyone have any objections? Otherwise
> > we will include this change in the 6.0 release.
>
> I agree with the change, but it shouldn’t be high priority.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Tatyana Leontovich
Guys,
task like
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: puppet/site.pp
    puppet_modules: puppet/modules/
    timeout: 360
works fine for me, so I believe your task should look like

cat tasks.yaml
# This tasks will be applied on controller nodes,
# here you can also specify several roles, for example
# ['cinder', 'compute'] will be applied only on
# cinder and compute nodes
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
puppet_manifest: install_keystone_ldap.pp
puppet_modules: /etc/puppet/modules/

And be sure that install_keystone_ldap.pp is the one that invokes the other manifests.

Best,
Tatyana

On Mon, Nov 24, 2014 at 12:49 PM, Dmitry Ukov  wrote:

> Unfortunately this does not work
>
> cat tasks.yaml
> # This tasks will be applied on controller nodes,
> # here you can also specify several roles, for example
> # ['cinder', 'compute'] will be applied only on
> # cinder and compute nodes
> - role: ['controller']
>   stage: post_deployment
>   type: puppet
>   parameters:
> puppet_manifest: install_keystone_ldap.pp
> puppet_modules: "puppet/:/etc/puppet/modules/"
>
>
> fpb --build .
> /home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
> UserWarning: /home/dukov/.python-eggs is writable by group/others and
> vulnerable to attack when used with get_resource_filename. Consider a more
> secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
> environment variable).
>   warnings.warn(msg, UserWarning)
> 2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format "0 ->
> parameters", for file "./tasks.yaml", {'puppet_modules':
> 'puppet/:/etc/puppet/modules/', 'puppet_manifest':
> 'install_keystone_ldap.pp'} is not valid under any of the given schemas
> Traceback (most recent call last):
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
> line 90, in main
> perform_action(args)
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
> line 77, in perform_action
> actions.BuildPlugin(args.build).run()
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
> line 42, in run
> self.check()
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
> line 99, in check
> self._check_structure()
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
> line 111, in _check_structure
> ValidatorManager(self.plugin_path).get_validator().validate()
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
> line 39, in validate
> self.check_schemas()
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
> line 46, in check_schemas
> self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
> line 47, in validate_file_by_schema
> self.validate_schema(data, schema, path)
>   File
> "/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
> line 43, in validate_schema
> value_path, path, exc.message))
> ValidationError: Wrong value format "0 -> parameters", for file
> "./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
> 'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
> the given schemas
>
>
> On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> according to [1] you should be able to use:
>>
>> puppet_modules: "puppet/:/etc/puppet/modules/"
>>
>> This is a valid YAML string parameter that should be parsed just fine.
>>
>> [1]
>> https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62
>>
>> Regards
>> --
>> Alex
>>
>>
>> On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov  wrote:
>>
>>> Hello All,
>>> The current implementation of plugins in Fuel unpacks the plugin tarball
>>> into /var/www/nailgun/plugins/.
>>> If we implement the deployment part of a plugin using Puppet, there is a
>>> setting:
>>> puppet_modules:
>>>
>>> This setting specifies the path to the modules folder. Since the main
>>> deployment part of a plugin is implemented as a Puppet module, the module
>>> path setting should be:
>>>
>>> puppet_modules: puppet/
>>>
>>> There is a big probability that a plugin implementation will require some
>>> custom resources and functions which are implemented in fuel-library
>>> (e.g. service config resources, stdlib functions, etc.). So in order to
>>> use them, a plugin developer has to copy them from fuel-library into the
>>> plugin (if I'm not missing something). This is not really convenient from
>>> my perspective.
>>>
>>> I'd like to suggest that we treat the puppet_modules parameter as an
>>> array and pass it to pup

Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Dmitry Ukov
Unfortunately this does not work

cat tasks.yaml
# This tasks will be applied on controller nodes,
# here you can also specify several roles, for example
# ['cinder', 'compute'] will be applied only on
# cinder and compute nodes
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
puppet_manifest: install_keystone_ldap.pp
puppet_modules: "puppet/:/etc/puppet/modules/"


fpb --build .
/home/dukov/dev/.plugins_ldap/local/lib/python2.7/site-packages/pkg_resources.py:1045:
UserWarning: /home/dukov/.python-eggs is writable by group/others and
vulnerable to attack when used with get_resource_filename. Consider a more
secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE
environment variable).
  warnings.warn(msg, UserWarning)
2014-11-24 13:48:32 ERROR 15026 (cli) Wrong value format "0 -> parameters",
for file "./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
the given schemas
Traceback (most recent call last):
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
line 90, in main
perform_action(args)
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/cli.py",
line 77, in perform_action
actions.BuildPlugin(args.build).run()
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
line 42, in run
self.check()
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
line 99, in check
self._check_structure()
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/actions/build.py",
line 111, in _check_structure
ValidatorManager(self.plugin_path).get_validator().validate()
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
line 39, in validate
self.check_schemas()
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/validator_v1.py",
line 46, in check_schemas
self.validate_file_by_schema(v1.TASKS_SCHEMA, self.tasks_path)
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
line 47, in validate_file_by_schema
self.validate_schema(data, schema, path)
  File
"/home/dukov/git/fuel/fuel-plugins/fuel_plugin_builder/fuel_plugin_builder/validators/base.py",
line 43, in validate_schema
value_path, path, exc.message))
ValidationError: Wrong value format "0 -> parameters", for file
"./tasks.yaml", {'puppet_modules': 'puppet/:/etc/puppet/modules/',
'puppet_manifest': 'install_keystone_ldap.pp'} is not valid under any of
the given schemas


On Mon, Nov 24, 2014 at 2:34 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> according to [1] you should be able to use:
>
> puppet_modules: "puppet/:/etc/puppet/modules/"
>
> This is a valid YAML string parameter that should be parsed just fine.
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62
>
> Regards
> --
> Alex
>
>
> On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov  wrote:
>
>> Hello All,
>> The current implementation of plugins in Fuel unpacks the plugin tarball
>> into /var/www/nailgun/plugins/.
>> If we implement the deployment part of a plugin using Puppet, there is a
>> setting:
>> puppet_modules:
>>
>> This setting specifies the path to the modules folder. Since the main
>> deployment part of a plugin is implemented as a Puppet module, the module
>> path setting should be:
>>
>> puppet_modules: puppet/
>>
>> There is a big probability that a plugin implementation will require some
>> custom resources and functions which are implemented in fuel-library
>> (e.g. service config resources, stdlib functions, etc.). So in order to
>> use them, a plugin developer has to copy them from fuel-library into the
>> plugin (if I'm not missing something). This is not really convenient from
>> my perspective.
>>
>> I'd like to suggest that we treat the puppet_modules parameter as an
>> array and pass it to the puppet binary as
>> # puppet apply --modulepath=<plugin_modules_path>:<fuel_modules_path>
>> This will allow us to add /etc/puppet/modules as a module path and use
>> resources and functions from fuel-library.
>>
>> P.S.:
>> puppet_modules: "puppet/:/etc/puppet/modules/: <- is not allowed by the
>> yaml parser (and the yaml format, I believe)
>>
>> Any suggestions here?
>>
>>
>> --
>> Kind regards
>> Dmitry Ukov
>> IT Engineer
>> Mirantis, Inc.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind regards
Dmitry Ukov
IT Engineer
Mirantis, Inc.
___
OpenStack-dev mailing list

Re: [openstack-dev] [Fuel] Plugins improvement

2014-11-24 Thread Aleksandr Didenko
Hi,

according to [1] you should be able to use:

puppet_modules: "puppet/:/etc/puppet/modules/"

This is a valid YAML string parameter that should be parsed just fine.

[1]
https://github.com/stackforge/fuel-web/blob/master/tasklib/tasklib/actions/puppet.py#L61-L62

Regards
--
Alex

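If the puppet_modules parameter ever becomes an array, as Dmitry proposes
below, joining it for puppet's --modulepath is a one-liner; a rough sketch,
not actual tasklib code:

import subprocess

def puppet_apply(manifest, puppet_modules):
    # Accept either a single path string or a list of paths and join the
    # list with ':' as puppet's --modulepath expects.
    if isinstance(puppet_modules, list):
        puppet_modules = ":".join(puppet_modules)
    cmd = ["puppet", "apply", "--modulepath=%s" % puppet_modules, manifest]
    subprocess.check_call(cmd)

# puppet_apply("install_keystone_ldap.pp", ["puppet/", "/etc/puppet/modules/"])
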

On Mon, Nov 24, 2014 at 12:07 PM, Dmitry Ukov  wrote:

> Hello All,
> The current implementation of plugins in Fuel unpacks the plugin tarball
> into /var/www/nailgun/plugins/.
> If we implement the deployment part of a plugin using Puppet, there is a
> setting:
> puppet_modules:
>
> This setting specifies the path to the modules folder. Since the main
> deployment part of a plugin is implemented as a Puppet module, the module
> path setting should be:
>
> puppet_modules: puppet/
>
> There is a big probability that a plugin implementation will require some
> custom resources and functions which are implemented in fuel-library
> (e.g. service config resources, stdlib functions, etc.). So in order to
> use them, a plugin developer has to copy them from fuel-library into the
> plugin (if I'm not missing something). This is not really convenient from
> my perspective.
>
> I'd like to suggest that we treat the puppet_modules parameter as an
> array and pass it to the puppet binary as
> # puppet apply --modulepath=<plugin_modules_path>:<fuel_modules_path>
> This will allow us to add /etc/puppet/modules as a module path and use
> resources and functions from fuel-library.
>
> P.S.:
> puppet_modules: "puppet/:/etc/puppet/modules/: <- is not allowed by the
> yaml parser (and the yaml format, I believe)
>
> Any suggestions here?
>
>
> --
> Kind regards
> Dmitry Ukov
> IT Engineer
> Mirantis, Inc.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

