Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-15 Thread Jiří Stránský

On 15.12.2015 17:46, Emilien Macchi wrote:

For information, Puppet OpenStack CI is consistent for unit & functional
tests; we use a single (versioned) Puppetfile:
https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile

TripleO folks might want to have a look at this to follow the
dependencies actually supported upstream, or, if you prefer surfing on
the edge, risk breaking CI every morning.

Let me know if you're interested to support that in TripleO Puppet
elements, I can help with that.


Syncing tripleo-puppet-elements with puppet-openstack-integration is a 
good idea, I think, to prevent breakages like the puppet-mysql one 
mentioned before.


One thing to keep in mind is that the module sets in t-p-e and p-o-i are 
not the same. E.g. recently we added the timezone module to t-p-e, and 
it's not in the p-o-i Puppetfile.


Also, sometimes we do have to go to non-openstack puppet modules to fix 
things for TripleO (I don't recall a particular example, but I think we 
did a couple of fixes in non-openstack modules to allow us to deploy HA 
with Pacemaker). In cases like this it would be helpful if we still had 
the possibility to pin to something different from what's in 
puppet-openstack-integration.



Considering the above, if we could figure out a way to have t-p-e behave 
like this:


* install the module set listed in t-p-e, not p-o-i.

* if there's a ref/branch specified directly in t-p-e, use that

* if t-p-e doesn't have a ref/branch specified, use ref/branch from p-o-i

* if t-p-e doesn't have a ref/branch specified, and the module is not 
present in p-o-i, use master


* still honor DIB_REPOREF_* variables to pin individual puppet modules 
to whatever wanted at time of building the image -- very useful for 
temporary workarounds done either manually or in tripleo.sh.


...then I think this would be very useful. I'm not sure at the moment 
what the best way to meet these points would be, though; these are just 
some immediate thoughts on the matter.



Jirka



On 12/14/2015 02:25 PM, Dan Prince wrote:

On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:

Hi all,

Today TripleO CI jobs failed because of a new commit introduced on
puppetlabs-mysql[1].
Mr. Jiri Stransky solved it as a temporary fix by pinning the puppet
module clone to a previous
commit in the tripleo-common project[2].

The source-repositories puppet element[3] allows you to pin the puppet
module clone as well, by
adding a reference commit in the source-repository-
file. In this case,
I am talking about source-repository-puppet-modules[4].
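As a hedged illustration (the exact field order should be verified against the element docs in [3]; the destination path and the `<pinned-ref>` placeholder below are made up), a pinned entry in such a file looks roughly like:

```
puppet-mysql git /etc/puppet/modules/mysql https://github.com/puppetlabs/puppetlabs-mysql.git <pinned-ref>
```

The trailing ref field is what pins the clone; omitting it tracks the default branch.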

I know you TripleO guys are brave people who live dangerously on the
cutting edge, but I think
the dependencies on puppet modules not managed by the OpenStack
community should be
pinned to the last repo tag for the sake of stability.

What do you think?


I've previously considered adding a stable puppet modules element for
just this case:

https://review.openstack.org/#/c/184844/

Using stable branches of things like MySQL, Rabbit, etc. might make
sense. However, I would want to consider following what the upstream
Puppet community does as well, specifically because we do want to
continue using upstream openstack/puppet-* modules. At least
for our upstream CI.

We also want to make sure our stable TripleO jobs use the stable
branches of openstack/puppet-* so we might need to be careful about
pinning those things too.

Dan



  I can take care of this.

[1]: https://github.com/puppetlabs/puppetlabs-mysql/commit/bdf4d0f52dfc244d10bbd5b67efb791a39520ed2
[2]: https://review.openstack.org/#/c/256572/
[3]: https://github.com/openstack/diskimage-builder/tree/master/elements/source-repositories
[4]: https://github.com/openstack/tripleo-puppet-elements/blob/master/elements/puppet-modules/source-repository-puppet-modules

--
Jaume Devesa
Software Engineer at Midokura
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [oslo] stable/liberty branch needed for oslo-incubator

2015-12-15 Thread Matt Riedemann



On 12/13/2015 10:33 PM, Robert Collins wrote:

On 14 December 2015 at 15:28, Matt Riedemann  wrote:



I don't have a pressing need to backport something right now, but as long as
there is code in oslo-incubator that *could* be synced to other projects
and isn't yet in libraries, that code could have bugs and require
backports to stable/liberty oslo-incubator for syncing to the projects that
use it.


I thought the thing to do was to backport the application of the change
from the project's master?

-Rob



Unless the rules changed, things from oslo-incubator were always 
backported to stable oslo-incubator and then synced to the stable 
branches of the affected projects. This is so we wouldn't lose the fix 
in stable oslo-incubator, which is shared across other projects, not just 
the target project consuming the fix from oslo-incubator.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-15 Thread Emilien Macchi
For information, Puppet OpenStack CI is consistent for unit & functional
tests; we use a single (versioned) Puppetfile:
https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile

TripleO folks might want to have a look at this to follow the
dependencies actually supported upstream, or, if you prefer surfing on
the edge, risk breaking CI every morning.

Let me know if you're interested to support that in TripleO Puppet
elements, I can help with that.

On 12/14/2015 02:25 PM, Dan Prince wrote:
> On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:
>> Hi all,
>>
>> Today TripleO CI jobs failed because a new commit introduced on
>> puppetlabs-mysql[1]. 
>> Mr. Jiri Stransky solved it as a temporary fix by pinning the puppet
>> module clone to a previous
>> commit in the tripleo-common project[2].
>>
>> source-repositories puppet element[3] allows you to pin the puppet
>> module clone as well by 
>> adding a reference commit in the source-repository-
>> file. In this case,
>> I am talking about the source-repository-puppet-modules[4].
>>
>> I know you TripleO guys are brave people that live dangerously in the
>> cutting edge, but I think
>> the dependencies to puppet modules not managed by the OpenStack
>> community should be
>> pinned to last repo tag for the sake of stability. 
>>
>> What do you think?
> 
> I've previously considered adding a stable puppet modules element for
> just this case:
> 
> https://review.openstack.org/#/c/184844/
> 
> Using stable branches of things like MySQL, Rabbit, etc might make
> sense. However I would want to consider following what the upstream
> Puppet community does as well specifically because we do want to
> continue using upstream openstack/puppet-* modules as well. At least
> for our upstream CI.
> 
> We also want to make sure our stable TripleO jobs use the stable
> branches of openstack/puppet-* so we might need to be careful about
> pinning those things too.
> 
> Dan
> 
> 
>>  I can take care of this.
>>
>> [1]: https://github.com/puppetlabs/puppetlabs-mysql/commit/bdf4d0f52dfc244d10bbd5b67efb791a39520ed2
>> [2]: https://review.openstack.org/#/c/256572/
>> [3]: https://github.com/openstack/diskimage-builder/tree/master/elements/source-repositories
>> [4]: https://github.com/openstack/tripleo-puppet-elements/blob/master/elements/puppet-modules/source-repository-puppet-modules
>>
>> --
>> Jaume Devesa
>> Software Engineer at Midokura
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Rolling upgrades

2015-12-15 Thread Sean McGinnis
On Tue, Dec 15, 2015 at 01:09:10PM +0100, Michał Dulko wrote:
> Hi,
> 
> At the meeting recently it was mentioned that our rolling upgrades
> efforts are pursuing an "elusive unicorn" that makes development a lot
> more complicated and restricted. I want to try to clarify this a bit,
> explain the strategy more and give an update on the status of the whole
> affair.

Thanks for the overview Michal. This is such a large and complicated
effort - this helps a lot.

I do think (hope) it is worth the extra effort. This will be a big win
for long term usability.

Thanks!
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Sean McGinnis
On Tue, Dec 15, 2015 at 04:46:02PM +0100, Michał Dulko wrote:
> On 12/15/2015 04:08 PM, Ryan Rossiter wrote:
> > Thanks for the review Michal! As for the bp/bug report, there's four 
> > options:
> >
> > 1. Tack the work on as part of bp cinder-objects
> > 2. Make a new blueprint (bp cinder-object-fields)
> > 3. Open a bug to handle all changes for enums/fields
> > 4. Open a bug for each changed enum/field
> >
> > Personally, I'm partial to #1, but #2 is better if you want to track this 
> > work separately from the other objects work. I don't think we should go 
> > with bug reports because #3 will be a lot of Partial-Bug and #4 will be 
> > kinda spammy. I don't know what the spec process is in Cinder compared to 
> > Nova, but this is nowhere near enough work to be spec-worthy.
> >
> > If this is something you or others think should be discussed in a meeting, 
> > I can tack it on to the agenda for tomorrow.
> 
The bp/cinder-objects topic is a little crowded with patches and it tracks
mostly rolling-upgrades-related stuff. This is more of a refactoring
than an ovo-essential change, so a simple specless bp/cinder-object-fields
is totally fine by me.

I agree. If you can file a new blueprint for this, I can approve it
right away. That will help track the effort.

Thanks for working on this!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-15 Thread Jaume Devesa
I suggest, then, pinning the dependencies from [1] downwards.

Wouldn't it be possible to just clone the openstack/puppet-* ones
and then use some tool to install the dependencies from them, some
kind of

  pip install -r requirements.txt

but adapted for Puppet? Does such a tool exist?

[1]:
https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile#L111
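(For reference, tools in this space do seem to exist: a Puppetfile like [1] is the input format consumed by r10k, and librarian-puppet reads much the same format — roughly the pip + requirements.txt analogue for Puppet modules. A hedged sketch, with an illustrative module name and ref:

```ruby
# Puppetfile -- each entry pins one module to a git ref or a Forge version
mod 'mysql',
  :git => 'https://github.com/puppetlabs/puppetlabs-mysql',
  :ref => '3.6.2'
```

With r10k this would typically be installed via `r10k puppetfile install`.)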

On 15 December 2015 at 17:46, Emilien Macchi  wrote:

> For information, Puppet OpenStack CI is consistent for unit & functional
> tests; we use a single (versioned) Puppetfile:
>
> https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile
>
> TripleO folks might want to have a look at this to follow the
> dependencies actually supported upstream, or, if you prefer surfing on
> the edge, risk breaking CI every morning.
>
> Let me know if you're interested to support that in TripleO Puppet
> elements, I can help with that.
>
> On 12/14/2015 02:25 PM, Dan Prince wrote:
> > On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:
> >> Hi all,
> >>
> >> Today TripleO CI jobs failed because a new commit introduced on
> >> puppetlabs-mysql[1].
> >> Mr. Jiri Stransky solved it as a temporary fix by pinning the puppet
> >> module clone to a previous
> >> commit in the tripleo-common project[2].
> >>
> >> source-repositories puppet element[3] allows you to pin the puppet
> >> module clone as well by
> >> adding a reference commit in the source-repository-
> >> file. In this case,
> >> I am talking about the source-repository-puppet-modules[4].
> >>
> >> I know you TripleO guys are brave people that live dangerously in the
> >> cutting edge, but I think
> >> the dependencies to puppet modules not managed by the OpenStack
> >> community should be
> >> pinned to the last repo tag for the sake of stability.
> >>
> >> What do you think?
> >
> > I've previously considered adding a stable puppet modules element for
> > just this case:
> >
> > https://review.openstack.org/#/c/184844/
> >
> > Using stable branches of things like MySQL, Rabbit, etc might make
> > sense. However I would want to consider following what the upstream
> > Puppet community does as well specifically because we do want to
> > continue using upstream openstack/puppet-* modules as well. At least
> > for our upstream CI.
> >
> > We also want to make sure our stable TripleO jobs use the stable
> > branches of openstack/puppet-* so we might need to be careful about
> > pinning those things too.
> >
> > Dan
> >
> >
> >>  I can take care of this.
> >>
> >> [1]: https://github.com/puppetlabs/puppetlabs-mysql/commit/bdf4d0f52dfc244d10bbd5b67efb791a39520ed2
> >> [2]: https://review.openstack.org/#/c/256572/
> >> [3]: https://github.com/openstack/diskimage-builder/tree/master/elements/source-repositories
> >> [4]: https://github.com/openstack/tripleo-puppet-elements/blob/master/elements/puppet-modules/source-repository-puppet-modules
> >>
> >> --
> >> Jaume Devesa
> >> Software Engineer at Midokura
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jaume Devesa
Software Engineer at Midokura
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2015-12-15 Thread Fox, Kevin M
My $0.02:

Heat as it is today requires all users to be devops, and to carefully craft 
the templates launched specific to the cloud and the particular app they are 
trying to write, making sharing code between heat users difficult. This means 
the potential user base of heat is restricted to developers knowledgeable in 
the heat template format, or those using openstack services that sit in front 
of heat (trove, sahara, etc). This mostly relegates heat to the role of 
"plumbing", whereas I see it as a first-class orchestration engine for the 
cloud: something that should be usable by all in its own right.

Just about every attempt I've seen so far has required something like jinja in 
front to generate the heat templates, since heat itself is not generic enough. 
This means it's not available from Horizon, and is then only usable by a small 
fraction of openstack users.

I've had some luck approximating conditionals using maps and nested stacks. 
It works, but it's really ugly to code. From an end user's perspective, 
though, it's very nice to use.
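One flavor of that maps trick, sketched from memory (resource names and values are illustrative; check the path-based get_param lookup against the HOT spec for your Heat release), selects a value per "branch" via a nested get_param into a json map parameter:

```yaml
heat_template_version: 2015-04-30

# Approximate a conditional by looking a key up in a map parameter
parameters:
  deploy_mode:
    type: string
    default: small
  flavor_map:
    type: json
    default: {small: m1.small, large: m1.xlarge}

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: {get_param: [flavor_map, {get_param: deploy_mode}]}
```

Conditionally creating whole resources still needs the nested-stack indirection, which is the ugly part.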

Since everyone's reinventing the templating wheel over and over, heat should 
itself gain a bit more templatability in its templates, so that everyone can 
stop having to rewrite template engines on top of heat, and heat users don't 
have to take so much time customizing templates before they can launch them.

I don't particularly care what the best solution for making conditionals 
available is. If you can guarantee jinja templates will always halt in a 
reasonable amount of time and are sandboxed appropriately, then sticking jinja 
in heat would be a good solution. If not, even some simple conditionals a la 
AWS would be extremely welcome. Either way, it should take heat parameters in 
and operate on them. The heat parameters section is a great contract today 
between heat users and heat template developers; it's one of the coolest things 
about Heat. It makes for a much better user experience in Horizon and the CLI. 
And when I say users, I mean "heat users" != "heat template developers". In the 
same way, a bash script user may not be able to even read a bash script, but 
they don't have to edit one to use it; they just call it with parameters.

Thanks,
Kevin

From: Rob Pothier (rpothier) [rpoth...@cisco.com]
Sent: Tuesday, December 15, 2015 7:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat 
templates


Hi Sergey,
I agree with your feeling, this is from the Heat Wiki page.
"Heat also endeavours to provide compatibility with the AWS CloudFormation 
template format, so that many existing CloudFormation templates can be launched 
on OpenStack."

Note also, there was another review that attempted to implement this, but stalled.
https://review.openstack.org/#/c/84468/

Rob

From: Sergey Kraynev
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, December 9, 2015 at 5:42 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Heat] Status of the Support Conditionals in Heat 
templates

Hi Heaters,

At the last IRC meeting we had a question about the Support Conditionals spec [1].
A previous attempt at this stuff is here [2].
The example of the first POC in Heat can be reviewed here [3].

As I understand it, we have not reached any final decision about this work.
So I'd like to clarify the community's feelings about it. This clarification may 
be done as answers to two simple questions:
 - Why do we want to implement it?
 - Why do we NOT want to implement it?

My personal feeling is:
- Why do we want to implement it?
* A lot of users want to have similar stuff.
* It's already present in AWS, so it will be good to have this feature in 
Heat too.
 - Why do we NOT want to implement it?
* It can be solved with Jinja [4]. However, I don't think that's a really 
important reason for blocking this work.

Please share your ideas about the two questions above.
That should allow us to eventually decide whether we implement it or not.

[1] https://review.openstack.org/#/c/245042/
[2] https://review.openstack.org/#/c/153771/
[3] https://review.openstack.org/#/c/221648/1
[4] http://jinja.pocoo.org/
--
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-12-15 Thread Rossella Sblendido

Hi Ihar,


wow, good job!!
Sorry for the very slow reply.
I really like your proposal...some comments inline.

On 12/03/2015 04:46 PM, Ihar Hrachyshka wrote:

Hi,

Small update on the RFE. It was approved for Mitaka, assuming we come up
with proper details upfront through the neutron-specs process.

In the meantime, we have found more use cases for flow management among
features in development: QoS DSCP, and also the new OF-based firewall
driver. Both authors of those new features independently realized that
the agent does not currently play nicely with flows set by external code,
due to its graceful-restart behaviour where rules with unknown cookies are
cleaned up. [The agent uses a random session uuid() to mark rules that
belong to its current run.]

Before I proceed, full disclosure: I know almost nothing about OpenFlow
capabilities, so some pieces below may make no sense. I tried to come up
with high level model first and then try to map it to available OF
features. Please don’t hesitate to comment, I like to learn new stuff! ;)


I am not an expert either so I encourage people to chime in here.



I am thinking lately on the use cases we collected so far. One common
need for all features that were seen to be interested in proper
integration with Open vSwitch agent is to be able to manage feature
specific flows on br-int and br-tun. There are other things that
projects may need, like patch ports, though I am still struggling with
the question of whether it may be postponed or avoided for phase 1.

There are several specific operation 'kinds' that we should cover for
the RFE:
- managing flows that modify frames in-place;
- managing flows that redirect frames.

There are some things that should be considered to make features
cooperate with the agent and other extensions:
- feature flows should have proper priorities based on their ‘kind’
(f.e. in-place modification probably go before redirections);
- feature flows should survive flow reset that may be triggered by the
agent;
- feature flows should survive flow reset without data plane disruption
(=they should support graceful restart:
https://review.openstack.org/#/c/182920).

With that in mind, I see the following high level design for the flow
tables:

- table 0 serves as a dispatcher for specific features;
- each feature gets one or more tables, one per flow ‘kind’ needed;
- for each feature table, a new flow entry is added to table 0 that
would redirect to feature specific table; the rule will be triggered
only if OF metadata is not updated inside the feature table (see the
next bullet); the rule will have priority that is defined for the ‘kind’
of the operation that is implemented by the table it redirects to;
-  each feature table will have default actions that will 1) mark OF
metadata for the frame as processed by the feature; 2) redirect back to
table 0;
- all feature specific flow rules (except dispatcher rules) belong to
feature tables;
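As a rough sketch of how those dispatcher and default rules could be rendered into flow specs (the table numbers, priorities and metadata bit below are invented for illustration, not a proposed allocation):

```python
def dispatcher_flow(feature_table, priority, feature_bit):
    # Table 0 entry: frames not yet marked by this feature get
    # redirected into the feature's own table.
    return ('table=0,priority=%d,metadata=0/%#x,actions=resubmit(,%d)'
            % (priority, feature_bit, feature_table))

def feature_default_flow(feature_table, feature_bit):
    # Lowest-priority default in the feature table: mark the frame as
    # processed by this feature, then bounce it back to table 0.
    return ('table=%d,priority=0,actions=write_metadata:%#x/%#x,resubmit(,0)'
            % (feature_table, feature_bit, feature_bit))
```

The agent would install both on extension initialize(), stamping them with its active cookie.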

Now, the workflow for extensions that are interested in setting flows
would be:
- on initialize() call, extension defines feature tables it will need;


Do you mean this in a dynamic way, or will every extension have tables 
assigned, basically hard-coded? I prefer the second way, so we have more 
control over the tables that are currently used.



it passes the name of the feature table and the ‘kind’ of the actions it
will execute; with that, the following is initialized by the agent: 1)


It would be nice to also pass a filter to match some packets. We 
probably don't want to send all the packets to the feature table; the 
extension can define that.



table 0 dispatcher entry to redirect frames into feature table; the
entry has the priority according to the ‘kind’ of the table; 2) the


I think we need to define the priority better. According to what you 
wrote, we assign priority based on "in-place modifications probably go 
before redirections"; not sure if that's enough. What happens if we have 
two features that both require in-place modifications? How do we 
prioritize them? Are we going to allow 2 extensions at the same time? Let 
me think more about this... It would be nice to have some real-world 
example...



actual feature table with two default rules (update metadata and push
back to table 0);
- whenever extension needs to add a new flow rule, it passes the
following into the agent: 1) table name; 2) flow specific parameters
(actions, priority, ...)

Since the agent will manage setting flows for extensions, it will be
able to use the active agent cookie for all feature flows; next time the
agent is restarted, it should be able to respin extension flows with no
data plane disruption. [Note: we should make sure that on agent restart,
we call to extensions *before* we clean up stale flow rules.]


I like this :)


That design will hopefully allow us to abstract interaction with flows
from extensions into management code inside the agent. It should
guarantee extensions cooperate properly assuming they properly define
their 

Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Ihar Hrachyshka

Kyle Mestery  wrote:


>>   o) Workaround:
>>
>>  After a vm is deployed on a (re)started compute node, restart taas
>>  agent before creating a tap-service or tap-flow.
>>  That is, create taas flows after cleanup has been done.
>>
>>  Note that cleanup will be done only once during an ovs-agent is
>>  running.
>>
>>
>>   o) An idea to fix:
>>
>>  1. Set "taas" stamp(*) to taas flows.
>>  2. Modify the cleanup logic in ovs-agent not to delete entries
>> stamped as "taas".
>>
>>  * Maybe a static string.
>>If we need to use a string which generated dynamically
>>(e.g. uuid), API to interact with ovs-agent is required.
>
>
> API proposal with some consideration for flow cleanup not dropping flows
> for external code is covered in the following email thread:
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html
>
> I believe you would need to adopt the extensions API once it’s in, moving
> from setup with a separate agent for your feature to an l2 agent extension
> for taas that will run inside the OVS agent.
>

This is really the right approach here as well. Anything modifying flow  
tables and expecting to work with the OVS L2 agent should in fact reside  
in the OVS L2 agent or use this extension API. Ihar, is this currently  
planned for Mitaka?


Optimistically, yes. But I currently lack comments on the approach before I
bake a spec for it. I would really like to hear from OVS agent folks
(Rossella, Ann, who else?) on whether the proposal is sane, at least at a
high level. I know some folks were promising quick replies to the thread.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Igor Kalnitsky
Hey Mike,

Thanks for your input.

> actually not.  if you replace your ARRAY columns with JSON entirely,

We still need to fix the code, i.e. change ARRAY-specific queries
to JSON ones throughout the code. ;)

> there's already a mostly finished PR for SQLAlchemy support in the queue.

Does it mean SQLAlchemy will have one unified interface to make JSON
queries? So we can use different backends if necessary?

Thanks,
- Igor

On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
>
>
> On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>> Hey Julien,
>>
>>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>>
>> I believe this blueprint is about DB for OpenStack cloud (we use
>> Galera now), while here we're talking about DB backend for Fuel
>> itself. Fuel has a separate node (so called Fuel Master) and we use
>> PostgreSQL now.
>>
>>> does that mean Fuel is only going to be able to run with PostgreSQL?
>>
>> Unfortunately we are already tied to PostgreSQL. For instance, we use
>> PostgreSQL's ARRAY column type. Introducing a JSON column is one more
>> way to tighten the knots harder.
>
> actually not.  if you replace your ARRAY columns with JSON entirely,
> MySQL has JSON as well now:
> https://dev.mysql.com/doc/refman/5.7/en/json.html
>
> there's already a mostly finished PR for SQLAlchemy support in the queue.
>
>
>
>>
>> - Igor
>>
>> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou  wrote:
>>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>>>
 The things I want to notice are:

 * Currently we aren't tied up to PostgreSQL 9.3.
 * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
 set of JSON operations.
>>>
>>> I'm curious and have just a small side question: does that mean Fuel is
>>> only going to be able to run with PostgreSQL?
>>>
>>> I also see
>>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>>> maybe it's related?
>>>
>>> Thanks!
>>>
>>> --
>>> Julien Danjou
>>> // Free Software hacker
>>> // https://julien.danjou.info
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Ubuntu bootstrap] WebUI notification

2015-12-15 Thread Vitaly Kramskikh
Hi,

I really don't like setting the error message as the default one in the DB
schema and consider that a last-resort solution. If possible, update the
message to the error one just before you start to build the image.

2015-12-15 18:48 GMT+03:00 Artur Svechnikov :

> Hi folks,
> Recently a special notification about an absent bootstrap image was
> introduced.
>
> Currently this notification is sent from fuel-bootstrap-cli. It means that
> the error message will not be sent when a failure occurs before the first
> build (like in [1]). I think it would be better to set the error message on
> the WebUI by default through fixtures and then remove it if the first build
> is successful.
>
> Please share your opinions about this issue.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1526351
>
> Best regards,
> Svechnikov Artur
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] neutron-lib subteam

2015-12-15 Thread Doug Wiegley
Hi all,

We are starting a formal neutron-lib subteam for work revolving around 
neutron-lib and general decoupling efforts of all neutron projects.

The first meeting is tomorrow at 1730 UTC in #openstack-meeting-4.

More info can be found here:

https://wiki.openstack.org/wiki/Network/Lib/Meetings
https://wiki.openstack.org/wiki/Neutron/Lib
https://review.openstack.org/#/q/status:open+project:openstack/neutron-lib,n,z
https://github.com/openstack/neutron-lib/blob/master/doc/source/review-guidelines.rst

Thanks,
doug





Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2015-12-15 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-12-15 09:07:02 -0800:
> My $0.02:
> 
> heat as it is today requires all users to be devops and to carefully craft 
> the templates they launch, specific to the cloud and the particular app they 
> are trying to write, which makes sharing code between heat users difficult. 
> This means the potential user base of heat is restricted to developers 
> knowledgeable in the heat template format, or those using openstack services 
> that wrap up in front of heat (trove, sahara, etc). This mostly relegates 
> heat to the role of "plumbing", whereas I see it as a first-class 
> orchestration engine for the cloud - something that should be usable by all 
> in its own right.
> 
> Just about every attempt I've seen so far has required something like jinja 
> in front to generate the heat templates, since heat itself is not generic 
> enough. This means it's not available from Horizon, and is then only usable 
> by a small fraction of openstack users.
> 
> I've had some luck with approximating conditionals using maps and nested 
> stacks. It works, but it's really ugly to code. From an end user's 
> perspective, though, it's very nice to use.
> 
> Since everyone's reinventing the templating wheel over and over, heat should 
> itself gain a bit more templatability in its templates so that everyone can 
> stop having to rewrite template engines on top of heat, and heat users don't 
> have to take so much time customizing templates so they can launch them.
> 
> I don't particularly care what the best solution for making conditionals 
> available is. If you can guarantee jinja templates will always halt in a 
> reasonable amount of time and are sandboxed appropriately, then sticking 
> jinja into heat would be a good solution. If not, even some simple 
> conditionals a la AWS would be extremely welcome. Either way, though, it 
> should take heat parameters in and operate on them. The heat parameters 
> section is a great contract today between heat users and heat template 
> developers. It's one of the coolest things about Heat. It makes for a much 
> better user experience in Horizon and the cli. And when I say users, I mean 
> "heat users" != "heat template developers". In the same way, a bash script 
> user may not be able to even read a bash script, but they don't have to edit 
> one to use it. They just call it with parameters.
> 


I agree with your sentiments Kevin. As somebody who struggled with Heat
before it had provider templates, and ended up writing a templating
solution to solve it, I always felt that Heat was holding me back from
writing reusable, composable templates. The CloudFormation way of doing
conditions seems worth copying.

Jinja2 in the engine, however, is not a good idea. Can it be contained?
Maybe. However, you already have JavaScript, which was built for this exact
purpose and is already optimized as such.
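The CloudFormation-style conditions discussed above reduce to a small tree rewrite: conditions are named booleans computed from template parameters, and an `Fn::If` node selects one of two branches. The sketch below is illustrative only (it is neither Heat nor CloudFormation source, and the function names are invented):

```python
# Illustrative sketch of CloudFormation-style conditions: named boolean
# conditions are evaluated from parameters, then Fn::If nodes in a
# template fragment are resolved to the chosen branch.

def resolve(fragment, conditions):
    """Recursively replace {"Fn::If": [name, if_true, if_false]} nodes."""
    if isinstance(fragment, dict):
        if set(fragment) == {"Fn::If"}:
            name, if_true, if_false = fragment["Fn::If"]
            chosen = if_true if conditions[name] else if_false
            return resolve(chosen, conditions)  # branches may nest further
        return {k: resolve(v, conditions) for k, v in fragment.items()}
    if isinstance(fragment, list):
        return [resolve(item, conditions) for item in fragment]
    return fragment  # scalar leaf

params = {"env": "prod"}
conditions = {"IsProd": params["env"] == "prod"}
template = {"volume_size": {"Fn::If": ["IsProd", 100, 10]}}
print(resolve(template, conditions))  # {'volume_size': 100}
```

The point of the exercise: the whole mechanism stays declarative and terminates trivially, which is exactly the property a general templating engine like Jinja2 cannot guarantee.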



Re: [openstack-dev] [Fuel] Different versions for different components

2015-12-15 Thread Roman Prykhodchenko
Aleksandra,

thank you for the clarification, it makes sense to me now.

In my opinion, our current approach is not flexible at all and is very 
outdated. After splitting fuel-web into smaller components, we realized that 
some of them may actually be used outside of a master node as standalone 
tools. In that case, some of them need to be distributable and upgradable via 
PyPI. Different components also need to be able to make minor releases to 
ship important bug fixes and improvements for users who run them outside 
their master nodes. For that, we should be able to modify the minor version 
of each component independently.

Do you think this is achievable in the foreseeable future?


- romcheg

> On 15 Dec 2015, at 12:21, Aleksandra Fedorova
> wrote:
> 
> Roman,
> 
> we use the 8.0 version everywhere in Fuel code _before_ the 8.0 release. We
> don't use a bump-at-release approach; rather, we bump the version, run a development
> and test cycle, then create the release and tag it.
> 
> In more details:
> 
> 1) there is a master branch, in which development for upcoming release
> (currently 8.0) happens. All hardcoded version parameters in master
> branch are set to 8.0.
> 
> 2) at Soft Code Freeze (which is one week from now) we create
> stable/8.0 branch from current master. Then we immediately bump
> versions in master branches of all Fuel projects to 9.0.
> Since SCF we have stable/8.0 branch with 8.0 version and master with
> 9.0, but there is still bugfixing in progress, so there might be
> changes in stable/8.0 code.
> 
> 3) On RTM day we finally create 8.0 tags on stable/8.0 branch, and
> this is the time when we should release packages to PyPI and other
> resources.
> 
> 
> 
> On Tue, Dec 15, 2015 at 2:03 PM, Roman Prykhodchenko  wrote:
>> Folks,
>> 
>> I can see that the version for the python-fuelclient package is already [1] set to
>> 8.0.0. However, there's still no corresponding tag, so the version was
>> not released to PyPI.
>> The question is: is it finally safe to tag different versions for different
>> components? As for Fuel client, we need to tag 8.0.0 to push a Debian package
>> for it.
>> 
>> 
>> 1. https://github.com/openstack/python-fuelclient/blob/master/setup.cfg#L3
>> 
>> 
>> - romcheg
>> 
>> 
> 
> 
> 
> --
> Aleksandra Fedorova
> Fuel CI Team Lead
> bookwar
> 





Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-12-15 Thread Igor Kalnitsky
Folks,

I want to bring this up again. There has been no progress since Oleg's
last mail, and we must decide. It's not good that we still have the
"2015.1.0-8.0" version while OpenStack uses the "Liberty" name for
versions.

Let's decide which name to use, file a bug and finally resolve it.

- Igor

On Thu, Oct 22, 2015 at 10:23 PM, Oleg Gelbukh  wrote:
> Igor, it is interesting that you mention backward compatibility in this
> context.
>
> I can see lots of code in Nailgun that checks for release version to
> enable/disable features that were added or removed more than 2 releases
> before [1] [2] [3] (there's a lot more).
>
> What should we do about that code? I believe we could 'safely' delete it. It
> will make our code base much more compact and supportable without even
> decoupling serializers, etc. Is my assumption correct, or am I just missing
> something?
>
> This will also help to switch to another scheme of versioning of releases,
> since there will be much less places where those version scheme is
> hardcoded.
>
> [1]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/release.py#L142-L145
> [2]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L554-L555
> [3]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/serializers/node.py#L124-L126
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Mon, Oct 19, 2015 at 6:34 PM, Igor Kalnitsky 
> wrote:
>>
>> Oleg,
>>
>> I think we can remove this function for new releases and keep them
>> only for backward compatibility with previous ones. Why not? If
>> there's a way to do things better let's do them better. :)
>>
>> On Sat, Oct 17, 2015 at 11:50 PM, Oleg Gelbukh 
>> wrote:
>> > In short, because of this:
>> >
>> > https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/db/sqlalchemy/models/release.py#L74-L99
>> >
>> > Unless we use dashed 2-component version where OpenStack version comes
>> > first, followed by version of Fuel, this will break creation of a
>> > cluster
>> > with given release.
>> >
>> > -Oleg
>> >
>> > On Sat, Oct 17, 2015 at 10:24 PM, Sergii Golovatiuk
>> >  wrote:
>> >>
>> >> Why can't we use 'liberty' without 8.0?
>> >>
>> >> On Sat, 17 Oct 2015 at 19:33, Oleg Gelbukh 
>> >> wrote:
>> >>>
>> >>> After closer look, the only viable option in closer term seems to be
>> >>> 'liberty-8.0' version. It does not to break comparisons that exist in
>> >>> the
>> >>> code and allows for smooth transition.
>> >>>
>> >>> --
>> >>> Best regards,
>> >>> Oleg Gelbukh
>> >>>
>> >>> On Fri, Oct 16, 2015 at 5:35 PM, Igor Kalnitsky
>> >>> 
>> >>> wrote:
>> 
>>  Oleg,
>> 
>>  Awesome! That's what I was looking for. :)
>> 
>>  - Igor
>> 
>>  On Fri, Oct 16, 2015 at 5:09 PM, Oleg Gelbukh 
>>  wrote:
>>  > Igor,
>>  >
>>  > Got your question now. Coordinated point (maintenance) releases are
>>  > dropped.
>>  > [1] [2]
>>  >
>>  > [1]
>>  >
>>  > http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
>>  > [2]
>>  >
>>  >
>>  > https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases
>>  >
>>  > --
>>  > Best regards,
>>  > Oleg Gelbukh
>>  >
>>  > On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky
>>  > 
>>  > wrote:
>>  >>
>>  >> Oleg,
>>  >>
>>  >> Yes, I know. Still you didn't answer my question - are they
>>  >> planning
>>  >> to release stable branches time-to-time? Like I said, Liberty is
>>  >> something similar 2015.2.0. How they will name release of
>>  >> something
>>  >> like 2015.2.1 (stable release, with bugfixes) ? Or they plan to
>>  >> drop
>>  >> it?
>>  >>
>>  >> Thanks,
>>  >> Igor
>>  >>
>>  >> On Fri, Oct 16, 2015 at 1:02 PM, Oleg Gelbukh
>>  >> 
>>  >> wrote:
>>  >> > Igor,
>>  >> >
>>  >> > The point is that there's no 2015.2.0 version anywhere in
>>  >> > OpenStack. So
>>  >> > every component will be versioned separately, for example, in
>>  >> > Libery,
>>  >> > Nova
>>  >> > has version 12.0.0, and minor release of it is going to have
>>  >> > version
>>  >> > 12.0.1,
>>  >> > while Keystone, for instance, will have version 11.0.0 and
>>  >> > 11.0.1
>>  >> > for
>>  >> > minor
>>  >> > release.
>>  >> >
>>  >> > The problem in Fuel is that coordinated release version is used
>>  >> > in
>>  >> > several
>>  >> > places, the most important being installation path of the
>>  >> > fuel-library.
>>  >> > We
>>  >> > won't be able to use it the same way since Liberty. I'd like to
>>  >> > understand
>> 

Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Ihar Hrachyshka

Assaf Muller  wrote:


SFC is going to hit this issue as well. Really, any out-of-tree
Neutron project that extends the OVS agent and expects things to work
:)


Yes. The SFC project is being considered for the l2 agent extensions mechanism, though
we need to deliver it in core first. AFAIK, for the time being they ship a fork of the
OVS agent (or plan to). Other projects that we are aware of being affected
are BGP-VPN and some new QoS rule types.


Ihar



Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Adrian Otto
Vilobh,

Thanks for advancing this important topic. I took a look at what Tim referenced
about how Nova is implementing nested quotas, and it seems to me that's something
we could fold into our design as well. Do you agree?

Adrian

On Dec 14, 2015, at 10:22 PM, Tim Bell wrote:

Can we have nested project quotas in from the beginning ? Nested projects are 
in Keystone V3 from Kilo onwards and retrofitting this is hard work.

For details, see the Nova functions at 
https://review.openstack.org/#/c/242626/. Cinder now also has similar functions.

Tim

From: Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com]
Sent: 15 December 2015 01:59
To: OpenStack Development Mailing List (not for usage questions) 
>; 
OpenStack Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Hi All,

Currently, it is possible to create an unlimited number of resources like 
bays/pods/services. In Magnum, there should be a limit on how many Magnum 
resources a user or project can create,
and the limit should be configurable [1].

I proposed following design :-

1. Introduce new table magnum.quotas
++--+--+-+-++
| Field  | Type | Null | Key | Default | Extra  |
++--+--+-+-++
| id | int(11)  | NO   | PRI | NULL| auto_increment |
| created_at | datetime | YES  | | NULL||
| updated_at | datetime | YES  | | NULL||
| deleted_at | datetime | YES  | | NULL||
| project_id | varchar(255) | YES  | MUL | NULL||
| resource   | varchar(255) | NO   | | NULL||
| hard_limit | int(11)  | YES  | | NULL||
| deleted| int(11)  | YES  | | NULL||
++--+--+-+-++
resource can be Bay, Pod, Containers, etc.

2. API controller for quota will be created to make sure basic CLI commands 
work.
quota-show, quota-delete, quota-create, quota-update
3. When the admin specifies a quota of X resources, the code should abide by 
it. For example, if the hard limit for Bay is 5 (i.e. a project can have a 
maximum of 5 Bays) and a user in that project tries to exceed the hard limit, 
the request won't be allowed. The same goes for other resources.
4. Please note that quota validation only works for resources created via 
Magnum. I could not think of a way for Magnum to know whether COE-specific 
utilities created a resource in the background. One option could be to compare 
what's stored in magnum.quotas with the actual resources created for a 
particular bay in k8s/the COE.
5. Introduce a config variable to set quota values.
If everyone agrees, I will start the changes by introducing quota restrictions 
on Bay creation.
Thoughts ??

-Vilobh
[1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
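To make the enforcement step in point 3 concrete, here is a rough sketch of how a hard-limit check could behave. All names here (`QuotaError`, `check_quota`, the in-memory stand-in for the `magnum.quotas` table) are hypothetical, not actual Magnum API:

```python
# Rough sketch of per-project hard-limit enforcement as described in
# point 3. QuotaError and check_quota are hypothetical names.

class QuotaError(Exception):
    pass

# In-memory stand-in for the proposed magnum.quotas table:
# {(project_id, resource): hard_limit}
quotas = {("proj-1", "Bay"): 5}

def check_quota(current_counts, project_id, resource, requested=1):
    """Raise QuotaError if creating `requested` more resources
    would exceed the project's hard limit for that resource."""
    limit = quotas.get((project_id, resource))
    if limit is None:  # no row: unlimited (or fall back to a config default)
        return
    used = current_counts.get((project_id, resource), 0)
    if used + requested > limit:
        raise QuotaError(
            "%s quota exceeded for project %s: %d used, limit %d"
            % (resource, project_id, used, limit))

check_quota({("proj-1", "Bay"): 4}, "proj-1", "Bay")      # 5th Bay: allowed
try:
    check_quota({("proj-1", "Bay"): 5}, "proj-1", "Bay")  # 6th Bay: rejected
except QuotaError as exc:
    print(exc)
```

Note that this check-then-create pattern is racy without a transaction or reservation around it, which is one reason the other projects' quota drivers are worth copying rather than reinventing.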


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Tim Bell
Thanks… from a user-experience standpoint, it is really important that we keep 
the nested quota implementations in sync so we don't end up with different 
semantics.

 

Tim
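As a rough illustration of the semantics at stake (a toy model, not the Nova implementation from the review linked above): in a nested scheme, a child project's usage is also charged against every ancestor's limit, so a create must pass the check at each level of the hierarchy.

```python
# Toy model of nested project quotas: usage in a child project also
# counts against each ancestor's limit. Hypothetical helpers, not the
# Nova implementation referenced in this thread.

parents = {"child-a": "root", "child-b": "root", "root": None}
limits = {"root": 10, "child-a": 4, "child-b": 8}
usage = {"root": 0, "child-a": 0, "child-b": 0}

def can_create(project, requested=1):
    """Check the limit of `project` and of every ancestor."""
    node = project
    while node is not None:
        if usage[node] + requested > limits[node]:
            return False
        node = parents[node]
    return True

def create(project, requested=1):
    if not can_create(project, requested):
        raise RuntimeError("quota exceeded for %s" % project)
    node = project
    while node is not None:  # charge usage all the way up the tree
        usage[node] += requested
        node = parents[node]

create("child-a", 4)          # child-a at 4/4, root at 4/10
create("child-b", 6)          # child-b at 6/8, root at 10/10
print(can_create("child-b"))  # False: root is exhausted
```

If Magnum's implementation diverged from this "charge every ancestor" rule while Nova's and Cinder's followed it, the same project tree would behave differently per service, which is the consistency problem Tim is flagging.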

 

From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: 15 December 2015 18:44
To: OpenStack Development Mailing List (not for usage questions) 

Cc: OpenStack Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

 



Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Andrew Maksimov
+1 to Igor's suggestion to downgrade Postgres to 9.2. Our users don't work
directly with Postgres, so no Fuel feature is being deprecated.
Maintaining our own custom Postgres package just because we want a "JSON
column" is not a rational decision. Come on, Fuel is not a billing system
with thousands of tables and special database requirements. At the least, we
should try to keep it simple and avoid unnecessary complication.

PS
BTW, some people suggest avoiding JSON columns; read [1], PostgreSQL
anti-patterns: unnecessary json/hstore dynamic columns.

[1] -
http://blog.2ndquadrant.com/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/

Regards,
Andrey Maximov
Fuel Project Manager
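For context on the trade-off debated downthread (PostgreSQL's ARRAY type vs. a generic JSON value): a list serialized as JSON into an ordinary text column works on any SQL backend, which is the crux of Mike's portability point. A tiny illustration, using sqlite purely as a stand-in backend:

```python
# Quick sketch of the portability argument from the thread: a Python
# list serialized as JSON into a plain text column works on any SQL
# backend (sqlite used here as a stand-in), whereas PostgreSQL's
# ARRAY type ties the schema to one backend.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE releases (id INTEGER PRIMARY KEY, roles TEXT)")

roles = ["controller", "compute", "cinder"]
conn.execute("INSERT INTO releases (roles) VALUES (?)",
             (json.dumps(roles),))  # store the list as a JSON string

row = conn.execute("SELECT roles FROM releases").fetchone()
print(json.loads(row[0]))  # ['controller', 'compute', 'cinder']
```

The downside, and Andrew's point, is that the database can no longer index or validate the inner structure without backend-specific JSON operators, which is exactly the 9.2-vs-9.3 feature gap under discussion.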


On Tue, Dec 15, 2015 at 9:34 PM, Vladimir Kuklin 
wrote:

> Folks
>
> Let me add my 2c here.
>
> I am for using Postgres 9.3. Here is an additional argument to the ones
> provided by Artem, Aleksandra and others.
>
> Fuel is being sometimes highly customized by our users for their specific
> needs. It has been Postgres 9.3 for a while and they might have as well
> gotten used to it and assumed by default that this would not change. So
> some of their respective features they are developing for their own sake
> may depend on Postgres 9.3 and we will never be able to tell the fraction
> of such use cases. Moreover, downgrading DBMS version of Fuel should be
> inevitably considered as a 'deprecation' of some features our software
> suite is providing to our users. This actually means that we MUST provide
> our users with a warning and deprecation period to allow them to adjust to
> these changes. Obviously, accidental change of Postgres version does not
> follow such a policy in any way. So I see no other ways except for getting
> back to Postgres 9.3.
>
>
> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky 
> wrote:
>
>> Hey Mike,
>>
>> Thanks for your input.
>>
>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>
>> It still needs to fix the code, i.e. change ARRAY-specific queries
>> with JSON ones around the code. ;)
>>
>> > there's already a mostly finished PR for SQLAlchemy support in the
>> queue.
>>
>> Does it mean SQLAlchemy will have one unified interface to make JSON
>> queries? So we can use different backends if necessary?
>>
>> Thanks,
>> - Igor
>>
>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
>> >
>> >
>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>> >> Hey Julien,
>> >>
>> >>>
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>> >>
>> >> I believe this blueprint is about DB for OpenStack cloud (we use
>> >> Galera now), while here we're talking about DB backend for Fuel
>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
>> >> PostgreSQL now.
>> >>
>> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
>> >>
>> >> Unfortunately we already tied up to PostgreSQL. For instance, we use
>> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
>> >> way to tighten knots harder.
>> >
>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>> > MySQL has JSON as well now:
>> > https://dev.mysql.com/doc/refman/5.7/en/json.html
>> >
>> > there's already a mostly finished PR for SQLAlchemy support in the
>> queue.
>> >
>> >
>> >
>> >>
>> >> - Igor
>> >>
>> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
>> wrote:
>> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>> >>>
>>  The things I want to notice are:
>> 
>>  * Currently we aren't tied up to PostgreSQL 9.3.
>>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>>  set of JSON operations.
>> >>>
>> >>> I'm curious and have just a small side question: does that mean Fuel
>> is
>> >>> only going to be able to run with PostgreSQL?
>> >>>
>> >>> I also see
>> >>>
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>> >>> maybe it's related?
>> >>>
>> >>> Thanks!
>> >>>
>> >>> --
>> >>> Julien Danjou
>> >>> // Free Software hacker
>> >>> // https://julien.danjou.info
>> >>

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Vitaly Kramskikh
+1 to Vova and Sasha,

I voted for 9.2 at the beginning of the thread due to potential packaging
and infrastructure issues, but since Artem and Sasha insist on 9.3, I see
no reason to keep 9.2.

2015-12-15 22:19 GMT+03:00 Aleksandra Fedorova :

> Igor,
>
> that's an anonymous vote on a question stated in the wrong way. Sorry,
> but it doesn't really look like valuable input for the discussion.
>
> On Tue, Dec 15, 2015 at 9:47 PM, Igor Kalnitsky 
> wrote:
> > FYI: so far (according to poll [1]) we have
> >
> > * 11 votes for keeping 9.2
> > * 4 votes for restoring 9.3
> >
> > [1]
> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
> >
> > On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin 
> wrote:
> >> Folks
> >>
> >> Let me add my 2c here.
> >>
> >> I am for using Postgres 9.3. Here is an additional argument to the ones
> >> provided by Artem, Aleksandra and others.
> >>
> >> Fuel is being sometimes highly customized by our users for their
> specific
> >> needs. It has been Postgres 9.3 for a while and they might have as well
> >> gotten used to it and assumed by default that this would not change. So
> some
> >> of their respective features they are developing for their own sake may
> >> depend on Postgres 9.3 and we will never be able to tell the fraction of
> >> such use cases. Moreover, downgrading DBMS version of Fuel should be
> >> inevitably considered as a 'deprecation' of some features our software
> suite
> >> is providing to our users. This actually means that we MUST provide our
> >> users with a warning and deprecation period to allow them to adjust to
> these
> >> changes. Obviously, accidental change of Postgres version does not
> follow
> >> such a policy in any way. So I see no other ways except for getting
> back to
> >> Postgres 9.3.
> >>
> >>
> >> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> >> wrote:
> >>>
> >>> Hey Mike,
> >>>
> >>> Thanks for your input.
> >>>
> >>> > actually not.  if you replace your ARRAY columns with JSON entirely,
> >>>
> >>> It still needs to fix the code, i.e. change ARRAY-specific queries
> >>> with JSON ones around the code. ;)
> >>>
> >>> > there's already a mostly finished PR for SQLAlchemy support in the
> >>> > queue.
> >>>
> >>> Does it mean SQLAlchemy will have one unified interface to make JSON
> >>> queries? So we can use different backends if necessary?
> >>>
> >>> Thanks,
> >>> - Igor
> >>>
> >>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
> >>> >
> >>> >
> >>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> >>> >> Hey Julien,
> >>> >>
> >>> >>>
> >>> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >>> >>
> >>> >> I believe this blueprint is about DB for OpenStack cloud (we use
> >>> >> Galera now), while here we're talking about DB backend for Fuel
> >>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
> >>> >> PostgreSQL now.
> >>> >>
> >>> >>> does that mean Fuel is only going to be able to run with
> PostgreSQL?
> >>> >>
> >>> >> Unfortunately we already tied up to PostgreSQL. For instance, we use
> >>> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
> >>> >> way to tighten knots harder.
> >>> >
> >>> > actually not.  if you replace your ARRAY columns with JSON entirely,
> >>> > MySQL has JSON as well now:
> >>> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> >>> >
> >>> > there's already a mostly finished PR for SQLAlchemy support in the
> >>> > queue.
> >>> >
> >>> >
> >>> >
> >>> >>
> >>> >> - Igor
> >>> >>
> >>> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou  >
> >>> >> wrote:
> >>> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
> >>> >>>
> >>>  The things I want to notice are:
> >>> 
> >>>  * Currently we aren't tied up to PostgreSQL 9.3.
> >>>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by
> using a
> >>>  set of JSON operations.
> >>> >>>
> >>> >>> I'm curious and have just a small side question: does that mean
> Fuel
> >>> >>> is
> >>> >>> only going to be able to run with PostgreSQL?
> >>> >>>
> >>> >>> I also see
> >>> >>>
> >>> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> >>> >>> maybe it's related?
> >>> >>>
> >>> >>> Thanks!
> >>> >>>
> >>> >>> --
> >>> >>> Julien Danjou
> >>> >>> // Free Software hacker
> >>> >>> // https://julien.danjou.info
> >>> >>
> >>> >>
> >>> >>

Re: [openstack-dev] [oslo][oslo.log]

2015-12-15 Thread Joshua Harlow
IMHO, go for it. The YAML should probably follow the format that the 
following uses, so that it's easily known what the format actually is:


https://docs.python.org/2/library/logging.config.html#logging.config.dictConfig

So convert YAML -> that dict format -> profit!

-Josh
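The conversion Josh describes is small: parse the YAML into a dict and hand it to `logging.config.dictConfig`. A sketch of the result (the load step would be `yaml.safe_load(open("logging.yaml"))` and assumes PyYAML; the dict below is inlined as the parsed output it would produce):

```python
# Sketch of the yaml -> dictConfig route. In practice `config` would
# come from yaml.safe_load() of a YAML file (PyYAML assumed for that
# step); here the parsed result is inlined to keep the example
# self-contained.
import logging
import logging.config

config = {
    "version": 1,
    "formatters": {
        "simple": {"format": "%(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(config)
logging.getLogger("demo").info("configured from a dict")
```

Since the dictConfig schema is already documented and stable, a YAML front end adds readability without inventing a new config format.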

Vladislav Kuzmin wrote:

Hi,

I want to specify all my options in a YAML file, because it is much more
readable. But I must use an ini file, because oslo.log uses
logging.config.fileConfig for reading the config file
(https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L216).
Why can't we use a YAML file? Can I propose a solution for that?

Thanks.





[openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-15 Thread Vladimir Kuklin
Folks

This email is a proposal to push the removal of the Docker containers from the
master node to a date beyond 8.0 HCF.

Here is why I propose to do so.

Removal of Docker is a rather invasive change and may introduce a lot of
regressions. It may well affect how bugs are fixed - we might have 2 ways of
fixing them, and during the SCF of 8.0 this may affect the velocity of bug
fixing, since you need to fix bugs in master prior to fixing them in stable
branches. This may significantly slow down our bugfixing pace and put the 8.0
GA release at risk.



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Vladimir Kuklin
Folks

Let me add my 2c here.

I am for using Postgres 9.3. Here is an additional argument to the ones
provided by Artem, Aleksandra and others.

Fuel is sometimes highly customized by our users for their specific needs.
It has been on Postgres 9.3 for a while, and they may well have gotten used
to it and assumed by default that this would not change. So some of the
features they are developing for their own purposes may depend on Postgres
9.3, and we will never be able to tell the fraction of such use cases.
Moreover, downgrading the DBMS version of Fuel should be considered a
'deprecation' of some features our software suite provides to our users.
This means that we MUST give our users a warning and a deprecation period to
allow them to adjust to these changes. Obviously, an accidental change of the
Postgres version does not follow such a policy in any way. So I see no other
way except going back to Postgres 9.3.


On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky 
wrote:

> Hey Mike,
>
> Thanks for your input.
>
> > actually not.  if you replace your ARRAY columns with JSON entirely,
>
> It still needs to fix the code, i.e. change ARRAY-specific queries
> with JSON ones around the code. ;)
>
> > there's already a mostly finished PR for SQLAlchemy support in the queue.
>
> Does it mean SQLAlchemy will have one unified interface to make JSON
> queries? So we can use different backends if necessary?
>
> Thanks,
> - Igor
>
> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
> >
> >
> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> >> Hey Julien,
> >>
> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >>
> >> I believe this blueprint is about DB for OpenStack cloud (we use
> >> Galera now), while here we're talking about DB backend for Fuel
> >> itself. Fuel has a separate node (so called Fuel Master) and we use
> >> PostgreSQL now.
> >>
> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
> >>
> >> Unfortunately we already tied up to PostgreSQL. For instance, we use
> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
> >> way to tighten knots harder.
> >
> > actually not.  if you replace your ARRAY columns with JSON entirely,
> > MySQL has JSON as well now:
> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> >
> > there's already a mostly finished PR for SQLAlchemy support in the queue.
> >
> >
> >
> >>
> >> - Igor
> >>
> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
> wrote:
> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
> >>>
>  The things I want to notice are:
> 
>  * Currently we aren't tied up to PostgreSQL 9.3.
>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>  set of JSON operations.
> >>>
> >>> I'm curious and have just a small side question: does that mean Fuel is
> >>> only going to be able to run with PostgreSQL?
> >>>
> >>> I also see
> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> >>> maybe it's related?
> >>>
> >>> Thanks!
> >>>
> >>> --
> >>> Julien Danjou
> >>> // Free Software hacker
> >>> // https://julien.danjou.info
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
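As a side note on the SQLAlchemy/JSON portability point quoted above: once data is stored as JSON rather than in an ARRAY column, the same style of query is available on several backends. Below is a minimal illustration using SQLite's JSON1 functions via the standard-library sqlite3 module -- purely a sketch of the idea, not Fuel's actual schema (PostgreSQL and MySQL 5.7+ offer equivalent JSON functions and operators):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE releases (id INTEGER PRIMARY KEY, attrs TEXT)")
conn.execute(
    "INSERT INTO releases (attrs) VALUES (?)",
    (json.dumps({"roles": ["controller", "compute"]}),),
)

# json_extract with a JSON path replaces what an ARRAY-specific
# operator would do on PostgreSQL; the query shape stays portable.
row = conn.execute(
    "SELECT id FROM releases "
    "WHERE json_extract(attrs, '$.roles[0]') = 'controller'"
).fetchone()
```

The point is only that the query no longer depends on a PostgreSQL-specific column type.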



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Vilobh Meshram
IMHO, for Magnum and Nested Quota we need more discussion
before proceeding, because:

1. The main intent of hierarchical multi-tenancy is creating a hierarchy of
projects (so that it's easier for the cloud provider to manage different
projects), with the nested quota driver being able to validate and impose
those restrictions.
2. The tenancy boundary in Magnum is the bay. Bays offer both management
and security isolation between multiple tenants.
3. In Magnum there is no intent to share a single bay between multiple
tenants.

So I would like to have a discussion on whether the Nested Quota approach
fits Magnum's design and how the resources would be distributed in the
hierarchy. I will include it in our Magnum weekly meeting agenda.

I have in fact drafted a blueprint for it some time back [1].

I am a huge supporter of hierarchical projects and nested quota approaches
(as they, if done correctly, IMHO minimize the admin pain of managing
quotas); I just wanted to see a cleaner way we can get this done for Magnum.

JFYI, I am the primary author of the Cinder Nested Quota driver [2] and
co-author of the Nova Nested Quota driver [3], so I am familiar with the
approach taken in both.

Thoughts?

-Vilobh

[1]  Magnum Nested Quota :
https://blueprints.launchpad.net/magnum/+spec/nested-quota-magnum
[2] Cinder Nested Quota Driver : https://review.openstack.org/#/c/205369/
[3] Nova Nested Quota Driver : https://review.openstack.org/#/c/242626/
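To make the validation question concrete, a hard-limit check that accounts for a hierarchy of projects could look roughly like the sketch below. All names and data structures here are hypothetical illustrations, not code from any of the drivers referenced above (real drivers reserve and commit through the database):

```python
def check_nested_quota(project, resource, requested, quotas, usage, children):
    """Return True if `project` may allocate `requested` more units of
    `resource`, counting usage by the project and all of its descendants.

    quotas / usage map project -> {resource: count}; children maps
    project -> list of sub-projects.  (Illustrative structures only.)
    """
    def subtree_usage(p):
        # Usage of p itself plus, recursively, all of its children.
        total = usage.get(p, {}).get(resource, 0)
        for child in children.get(p, ()):
            total += subtree_usage(child)
        return total

    hard_limit = quotas.get(project, {}).get(resource)
    if hard_limit is None:
        return True  # no quota row for this resource: treat as unlimited
    return subtree_usage(project) + requested <= hard_limit
```

E.g. with a hard limit of 5 bays on a project whose subtree already uses 4, a request for 1 more bay passes while a request for 2 is rejected.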

On Tue, Dec 15, 2015 at 10:10 AM, Tim Bell  wrote:

> Thanks… it is really important from the user experience that we keep the
> nested quota implementations in sync so we don’t have different semantics.
>
>
>
> Tim
>
>
>
> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* 15 December 2015 18:44
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Cc:* OpenStack Mailing List (not for usage questions) <
> openst...@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
>
>
> Vilobh,
>
>
>
> Thanks for advancing this important topic. I took a look at what Tim
> referenced how Nova is implementing nested quotas, and it seems to me
> that’s something we could fold in as well to our design. Do you agree?
>
>
>
> Adrian
>
>
>
> On Dec 14, 2015, at 10:22 PM, Tim Bell  wrote:
>
>
>
> Can we have nested project quotas in from the beginning ? Nested projects
> are in Keystone V3 from Kilo onwards and retrofitting this is hard work.
>
>
>
> For details, see the Nova functions at
> https://review.openstack.org/#/c/242626/. Cinder now also has similar
> functions.
>
>
>
> Tim
>
>
>
> *From:* Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com
> ]
> *Sent:* 15 December 2015 01:59
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>; OpenStack Mailing List (not for usage
> questions) 
> *Subject:* [openstack-dev] [openstack][magnum] Quota for Magnum Resources
>
>
>
> Hi All,
>
>
>
> Currently, it is possible to create an unlimited number of resources like
> bays/pods/services. In Magnum, there should be a limit on how many Magnum
> resources a user or project can create,
> and the limit should be configurable [1].
>
>
>
> I proposed following design :-
>
>
>
> 1. Introduce new table magnum.quotas
>
> ++--+--+-+-++
>
> | Field  | Type | Null | Key | Default | Extra  |
>
> ++--+--+-+-++
>
> | id | int(11)  | NO   | PRI | NULL| auto_increment |
>
> | created_at | datetime | YES  | | NULL||
>
> | updated_at | datetime | YES  | | NULL||
>
> | deleted_at | datetime | YES  | | NULL||
>
> | project_id | varchar(255) | YES  | MUL | NULL||
>
> | resource   | varchar(255) | NO   | | NULL||
>
> | hard_limit | int(11)  | YES  | | NULL||
>
> | deleted| int(11)  | YES  | | NULL||
>
> ++--+--+-+-++
>
> resource can be Bay, Pod, Containers, etc.
>
>
>
> 2. API controller for quota will be created to make sure basic CLI
> commands work.
>
> quota-show, quota-delete, quota-create, quota-update
>
> 3. When the admin specifies a quota of X resources, the code should abide
> by it. For example, if the hard limit for Bays is 5 (i.e. a project can
> have a maximum of 5 Bays) and a user in that project tries to exceed the
> hard limit, it won't be allowed. The same goes for other resources.
>
> 4. Please note the quota validation only works for resources created via
> Magnum. Could not think of a way that Magnum to know if a COE specific
> utilities created a resource in 
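For concreteness, the magnum.quotas column listing in point 1 of the quoted proposal translates into DDL roughly like the following (MySQL syntax inferred from the listing; only the PRI key on id and the MUL key on project_id are given there, so the index name is my own guess):

```sql
-- Hypothetical DDL reconstructed from the proposed column listing.
CREATE TABLE quotas (
    id         INT(11)      NOT NULL AUTO_INCREMENT,
    created_at DATETIME     NULL,
    updated_at DATETIME     NULL,
    deleted_at DATETIME     NULL,
    project_id VARCHAR(255) NULL,
    resource   VARCHAR(255) NOT NULL,  -- Bay, Pod, Container, ...
    hard_limit INT(11)      NULL,
    deleted    INT(11)      NULL,
    PRIMARY KEY (id),
    KEY ix_quotas_project_id (project_id)
);
```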

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Egor Guz
Vilobh/Tim, could you elaborate on your use cases around Magnum quotas?

My concern is that users will easily get lost in quotas ;). E.g. we already have 
nova/cinder/neutron and Kub/Mesos (framework) quotas.

There are two use cases:
- a tenant has its own list of bays/clusters (nova/cinder/neutron quotas will 
apply)
- an operator provisions a shared cluster and relies on Kub/Mesos (framework) 
quota management

Also, users have full access to the native tools (Kub/Marathon/Swarm); how will 
quota be applied in this case?

—
Egor

From: Vilobh Meshram 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, December 15, 2015 at 11:11
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "OpenStack Mailing List (not for usage questions)" 
>, Belmiro 
Moreira >
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

IMHO for Magnum and Nested Quota we need more discussion before proceeding 
ahead because :-

1. The main intent of hierarchical multi tenancy is creating a hierarchy of 
projects (so that its easier for the cloud provider to manage different 
projects) and nested quota driver being able to validate and impose those 
restrictions.
2. The tenancy boundary in Magnum is the bay. Bays offer both a management and 
security isolation between multiple tenants.
3. In Magnum there is no intent to share a single bay between multiple tenants.

So I would like to have a discussion on whether Nested Quota approach fits in 
our/Magnum's design and how will the resources be distributed in the hierarchy. 
I will include it in our Magnum weekly meeting agenda.

I have in-fact drafted a blueprint for it sometime back [1].

I am a huge supporter of hierarchical projects and nested quota approaches (as 
they if done correctly IMHO minimize admin pain of managing quotas) , just 
wanted to see a cleaner way we can get this done for Magnum.

JFYI, I am the primary author of Cinder Nested Quota [2]  and co-author of Nova 
Nested Quota[3] so I am familiar with the approach taken in both.

Thoughts ?

-Vilobh

[1]  Magnum Nested Quota : 
https://blueprints.launchpad.net/magnum/+spec/nested-quota-magnum
[2] Cinder Nested Quota Driver : https://review.openstack.org/#/c/205369/
[3] Nova Nested Quota Driver : https://review.openstack.org/#/c/242626/

On Tue, Dec 15, 2015 at 10:10 AM, Tim Bell 
> wrote:
Thanks… it is really important from the user experience that we keep the nested 
quota implementations in sync so we don’t have different semantics.

Tim

From: Adrian Otto 
[mailto:adrian.o...@rackspace.com]
Sent: 15 December 2015 18:44
To: OpenStack Development Mailing List (not for usage questions) 
>
Cc: OpenStack Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Vilobh,

Thanks for advancing this important topic. I took a look at what Tim referenced 
how Nova is implementing nested quotas, and it seems to me that’s something we 
could fold in as well to our design. Do you agree?

Adrian

On Dec 14, 2015, at 10:22 PM, Tim Bell 
> wrote:

Can we have nested project quotas in from the beginning ? Nested projects are 
in Keystone V3 from Kilo onwards and retrofitting this is hard work.

For details, see the Nova functions at 
https://review.openstack.org/#/c/242626/. Cinder now also has similar functions.

Tim

From: Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com]
Sent: 15 December 2015 01:59
To: OpenStack Development Mailing List (not for usage questions) 
>; 
OpenStack Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Hi All,

Currently, it is possible to create an unlimited number of resources like 
bays/pods/services. In Magnum, there should be a limit on how many Magnum 
resources a user or project can create,
and the limit should be configurable [1].

I proposed following design :-

1. Introduce new table magnum.quotas
++--+--+-+-++
| Field  | Type | Null | Key | Default | Extra  |
++--+--+-+-++
| id | int(11)  | NO   | PRI | NULL| 

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Aleksandra Fedorova
Igor,

that's an anonymous vote on a question stated the wrong way. Sorry,
but it doesn't really look like valuable input for the discussion.

On Tue, Dec 15, 2015 at 9:47 PM, Igor Kalnitsky  wrote:
> FYI: so far (according to poll [1]) we have
>
> * 11 votes for keeping 9.2
> * 4 votes for restoring 9.3
>
> [1] 
> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
>
> On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin  wrote:
>> Folks
>>
>> Let me add my 2c here.
>>
>> I am for using Postgres 9.3. Here is an additional argument to the ones
>> provided by Artem, Aleksandra and others.
>>
>> Fuel is being sometimes highly customized by our users for their specific
>> needs. It has been Postgres 9.3 for a while and they might have as well
>> gotten used to it and assumed by default that this would not change. So some
>> of their respective features they are developing for their own sake may
>> depend on Postgres 9.3 and we will never be able to tell the fraction of
>> such use cases. Moreover, downgrading DBMS version of Fuel should be
>> inevitably considered as a 'deprecation' of some features our software suite
>> is providing to our users. This actually means that we MUST provide our
>> users with a warning and deprecation period to allow them to adjust to these
>> changes. Obviously, accidental change of Postgres version does not follow
>> such a policy in any way. So I see no other ways except for getting back to
>> Postgres 9.3.
>>
>>
>> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky 
>> wrote:
>>>
>>> Hey Mike,
>>>
>>> Thanks for your input.
>>>
>>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>>
>>> It still needs to fix the code, i.e. change ARRAY-specific queries
>>> with JSON ones around the code. ;)
>>>
>>> > there's already a mostly finished PR for SQLAlchemy support in the
>>> > queue.
>>>
>>> Does it mean SQLAlchemy will have one unified interface to make JSON
>>> queries? So we can use different backends if necessary?
>>>
>>> Thanks,
>>> - Igor
>>>
>>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
>>> >
>>> >
>>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>>> >> Hey Julien,
>>> >>
>>> >>>
>>> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>>> >>
>>> >> I believe this blueprint is about DB for OpenStack cloud (we use
>>> >> Galera now), while here we're talking about DB backend for Fuel
>>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
>>> >> PostgreSQL now.
>>> >>
>>> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
>>> >>
>>> >> Unfortunately we already tied up to PostgreSQL. For instance, we use
>>> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
>>> >> way to tighten knots harder.
>>> >
>>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>> > MySQL has JSON as well now:
>>> > https://dev.mysql.com/doc/refman/5.7/en/json.html
>>> >
>>> > there's already a mostly finished PR for SQLAlchemy support in the
>>> > queue.
>>> >
>>> >
>>> >
>>> >>
>>> >> - Igor
>>> >>
>>> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
>>> >> wrote:
>>> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>>> >>>
>>>  The things I want to notice are:
>>> 
>>>  * Currently we aren't tied up to PostgreSQL 9.3.
>>>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>>>  set of JSON operations.
>>> >>>
>>> >>> I'm curious and have just a small side question: does that mean Fuel
>>> >>> is
>>> >>> only going to be able to run with PostgreSQL?
>>> >>>
>>> >>> I also see
>>> >>>
>>> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>>> >>> maybe it's related?
>>> >>>
>>> >>> Thanks!
>>> >>>
>>> >>> --
>>> >>> Julien Danjou
>>> >>> // Free Software hacker
>>> >>> // https://julien.danjou.info
>>> >>
>>> >>
>>> >> __
>>> >> OpenStack Development Mailing List (not for usage questions)
>>> >> Unsubscribe:
>>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >
>>> >
>>> > __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Vladimir Kuklin
Igor

Sorry, this vote is irrelevant, as it does not cover all the concerns raised
by Artem, Aleksandra and me. It is about JSON vs non-JSON Postgres, which is
not exactly the question at hand.

On Tue, Dec 15, 2015 at 9:47 PM, Igor Kalnitsky 
wrote:

> FYI: so far (according to poll [1]) we have
>
> * 11 votes for keeping 9.2
> * 4 votes for restoring 9.3
>
> [1]
> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
>
> On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin 
> wrote:
> > Folks
> >
> > Let me add my 2c here.
> >
> > I am for using Postgres 9.3. Here is an additional argument to the ones
> > provided by Artem, Aleksandra and others.
> >
> > Fuel is being sometimes highly customized by our users for their specific
> > needs. It has been Postgres 9.3 for a while and they might have as well
> > gotten used to it and assumed by default that this would not change. So
> some
> > of their respective features they are developing for their own sake may
> > depend on Postgres 9.3 and we will never be able to tell the fraction of
> > such use cases. Moreover, downgrading DBMS version of Fuel should be
> > inevitably considered as a 'deprecation' of some features our software
> suite
> > is providing to our users. This actually means that we MUST provide our
> > users with a warning and deprecation period to allow them to adjust to
> these
> > changes. Obviously, accidental change of Postgres version does not follow
> > such a policy in any way. So I see no other ways except for getting back
> to
> > Postgres 9.3.
> >
> >
> > On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky  >
> > wrote:
> >>
> >> Hey Mike,
> >>
> >> Thanks for your input.
> >>
> >> > actually not.  if you replace your ARRAY columns with JSON entirely,
> >>
> >> It still needs to fix the code, i.e. change ARRAY-specific queries
> >> with JSON ones around the code. ;)
> >>
> >> > there's already a mostly finished PR for SQLAlchemy support in the
> >> > queue.
> >>
> >> Does it mean SQLAlchemy will have one unified interface to make JSON
> >> queries? So we can use different backends if necessary?
> >>
> >> Thanks,
> >> - Igor
> >>
> >> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
> >> >
> >> >
> >> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> >> >> Hey Julien,
> >> >>
> >> >>>
> >> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >> >>
> >> >> I believe this blueprint is about DB for OpenStack cloud (we use
> >> >> Galera now), while here we're talking about DB backend for Fuel
> >> >> itself. Fuel has a separate node (so called Fuel Master) and we use
> >> >> PostgreSQL now.
> >> >>
> >> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
> >> >>
> >> >> Unfortunately we already tied up to PostgreSQL. For instance, we use
> >> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
> >> >> way to tighten knots harder.
> >> >
> >> > actually not.  if you replace your ARRAY columns with JSON entirely,
> >> > MySQL has JSON as well now:
> >> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> >> >
> >> > there's already a mostly finished PR for SQLAlchemy support in the
> >> > queue.
> >> >
> >> >
> >> >
> >> >>
> >> >> - Igor
> >> >>
> >> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
> >> >> wrote:
> >> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
> >> >>>
> >>  The things I want to notice are:
> >> 
> >>  * Currently we aren't tied up to PostgreSQL 9.3.
> >>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by
> using a
> >>  set of JSON operations.
> >> >>>
> >> >>> I'm curious and have just a small side question: does that mean Fuel
> >> >>> is
> >> >>> only going to be able to run with PostgreSQL?
> >> >>>
> >> >>> I also see
> >> >>>
> >> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> >> >>> maybe it's related?
> >> >>>
> >> >>> Thanks!
> >> >>>
> >> >>> --
> >> >>> Julien Danjou
> >> >>> // Free Software hacker
> >> >>> // https://julien.danjou.info
> >> >>
> >> >>
> >> >>
> __
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe:
> >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for 

Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-15 Thread Emilien Macchi


On 12/15/2015 12:23 PM, Jiří Stránský wrote:
> On 15.12.2015 17:46, Emilien Macchi wrote:
>> For information, Puppet OpenStack CI is consistent for unit & functional
>> tests, we use a single (versionned) Puppetfile:
>> https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile
>>
>>
>> TripleO folks might want to have a look at this to follow the
>> dependencies actually supported by upstream OR if you prefer surfing on
>> the edge and risk to break CI every morning.
>>
>> Let me know if you're interested to support that in TripleO Puppet
>> elements, I can help with that.
> 
> Syncing tripleo-puppet-elements with puppet-openstack-integration is a
> good idea i think, to prevent breakages like the puppet-mysql one
> mentioned before.
> 
> One thing to keep in mind is that the module sets in t-p-e and p-o-i are
> not the same. E.g. recently we added the timezone module to t-p-e, and
> it's not in the p-o-i Puppetfile.
> 
> Also, sometimes we do have to go to non-openstack puppet modules to fix
> things for TripleO (i don't recall a particular example but i think we
> did a couple of fixes in non-openstack modules to allow us to deploy HA
> with Pacemaker). In cases like this it would be helpful if we still had
> the possibility to pin to something different than what's in
> puppet-openstack-integration perhaps.
> 
> 
> Considering the above, if we could figure out a way to have t-p-e behave
> like this:
> 
> * install the module set listed in t-p-e, not p-o-i.
> 
> * if there's a ref/branch specified directly in t-p-e, use that
> 
> * if t-p-e doesn't have a ref/branch specified, use ref/branch from p-o-i
> 
> * if t-p-e doesn't have a ref/branch specified, and the module is not
> present in p-o-i, use master
> 
> * still honor DIB_REPOREF_* variables to pin individual puppet modules
> to whatever wanted at time of building the image -- very useful for
> temporary workarounds done either manually or in tripleo.sh.
> 
> ...then i think this would be very useful. Not sure at the moment what
> would be the best way to meet these points though, these are just some
> immediate thoughts on the matter.
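The four fallback rules listed above could be sketched as a small ref resolver. This is purely illustrative -- the function and the dicts below are hypothetical, not actual t-p-e code:

```python
import os

def resolve_ref(module, tpe_refs, poi_refs, environ=None):
    """Pick the git ref to build a puppet module from, following the
    precedence rules quoted above (illustrative sketch only)."""
    environ = os.environ if environ is None else environ
    # 1. DIB_REPOREF_* always wins (manual / tripleo.sh workarounds).
    override = environ.get("DIB_REPOREF_" + module.upper().replace("-", "_"))
    if override:
        return override
    # 2. A ref pinned directly in tripleo-puppet-elements.
    if tpe_refs.get(module):
        return tpe_refs[module]
    # 3. Otherwise the ref from puppet-openstack-integration's Puppetfile.
    if poi_refs.get(module):
        return poi_refs[module]
    # 4. Module unknown to p-o-i: track master.
    return "master"
```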

I think we should not use puppet-openstack-integration per se; it was
just an example.

Though we can take this project as a reference to build a tool that
prepares Puppet modules in TripleO CI.

If you look at puppet-openstack-integration, we have some scripts that
optionally use zuul-cloner with r10k. That's nice because it allows us to:
* use Depends-On for puppet patches
* fall back to git clone if the end-user does not have zuul; in the
TripleO case, I think if DIB_REPOREF_* is set, let's use it
* otherwise git clone master.

I would also suggest that TripleO CI have a Puppetfile that would be
gated (maybe in the tripleo-ci repo?).

What do you think?
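For readers less familiar with r10k, a gated Puppetfile of the kind suggested would contain entries along these lines (the module names, URLs and refs here are only examples, not an actual pin list):

```ruby
# Pin a non-OpenStack dependency to a known-good ref (example values).
mod 'mysql',
  :git => 'https://github.com/puppetlabs/puppetlabs-mysql.git',
  :ref => '3.6.2'

# Track an OpenStack module's master (or a stable branch in stable CI jobs).
mod 'keystone',
  :git => 'https://github.com/openstack/puppet-keystone.git',
  :ref => 'master'
```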

> 
> Jirka
> 
>>
>> On 12/14/2015 02:25 PM, Dan Prince wrote:
>>> On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:
 Hi all,

 Today TripleO CI jobs failed because a new commit introduced on
 puppetlabs-mysql[1].
 Mr. Jiri Stransky solved it as a temporally fix by pinning the puppet
 module clone to a previous
 commit in the tripleo-common project[2].

 source-repositories puppet element[3] allows you to pin the puppet
 module clone as well by
 adding a reference commit in the source-repository-
 file. In this case,
 I am talking about the source-repository-puppet-modules[4].

 I know you TripleO guys are brave people that live dangerously in the
 cutting edge, but I think
 the dependencies to puppet modules not managed by the OpenStack
 community should be
 pinned to last repo tag for the sake of stability.

 What do you think?
>>>
>>> I've previously considered added a stable puppet modules element for
>>> just this case:
>>>
>>> https://review.openstack.org/#/c/184844/
>>>
>>> Using stable branches of things like MySQL, Rabbit, etc might make
>>> sense. However I would want to consider following what the upstream
>>> Puppet community does as well specifically because we do want to
>>> continue using upstream openstack/puppet-* modules as well. At least
>>> for our upstream CI.
>>>
>>> We also want to make sure our stable TripleO jobs use the stable
>>> branches of openstack/puppet-* so we might need to be careful about
>>> pinning those things too.
>>>
>>> Dan
>>>
>>>
   I can take care of this.

 [1]: https://github.com/puppetlabs/puppetlabs-mysql/commit/bdf4d0f52d
 fc244d10bbd5b67efb791a39520ed2
 [2]: https://review.openstack.org/#/c/256572/
 [3]: https://github.com/openstack/diskimage-builder/tree/master/eleme
 nts/source-repositories
 [4]: https://github.com/openstack/tripleo-puppet-elements/blob/master
 /elements/puppet-modules/source-repository-puppet-modules

 -- 
 Jaume Devesa
 Software Engineer at Midokura
 _
 _
 

Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-15 Thread Andrew Maksimov
+1

Regards,
Andrey Maximov
Fuel Project Manager

On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin 
wrote:

> Folks
>
> This email is a proposal to push Docker containers removal from the master
> node to the date beyond 8.0 HCF.
>
> Here is why I propose to do so.
>
> Removal of Docker is a rather invasive change and may introduce a lot of
> regressions. It may well affect how bugs are fixed - we might have two
> ways of fixing them, and during SCF of 8.0 this may affect the velocity of
> bug fixing, as you need to fix bugs in master prior to fixing them in stable
> branches. This may actually slow our bugfixing pace significantly and
> put the 8.0 GA release at risk.
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-12-15 Thread Dmitry Klenov
Hi folks,

I would propose keeping the current versioning schema until the Fuel
release schedule is fully aligned with OpenStack releases. AFAIK this is
expected to happen starting with 9.0. After that we can switch to
OpenStack version names.

BR,
Dmitry.

On Tue, Dec 15, 2015 at 8:41 PM, Igor Kalnitsky 
wrote:

> Folks,
>
> I want to bring this up again. There were no progress since last
> Oleg's mail, and we must decide. It's good that we still have
> "2015.1.0-8.0" version while OpenStack uses "Liberty" name for
> versions.
>
> Let's decide which name to use, file a bug and finally resolve it.
>
> - Igor
>
> On Thu, Oct 22, 2015 at 10:23 PM, Oleg Gelbukh 
> wrote:
> > Igor, it is interesting that you mention backward compatibility in this
> > context.
> >
> > I can see lots of code in Nailgun that checks for release version to
> > enable/disable features that were added or removed more than 2 releases
> > before [1] [2] [3] (there's a lot more).
> >
> > What should we do about that code? I believe we could 'safely' delete
> it. It
> > will make our code base much more compact and supportable without even
> > decoupling serializers, etc. Is my assumption correct, or I just missing
> > something?
> >
> > This will also help to switch to another scheme of versioning of
> releases,
> > since there will be much less places where those version scheme is
> > hardcoded.
> >
> > [1]
> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/release.py#L142-L145
> > [2]
> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L554-L555
> > [3]
> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/serializers/node.py#L124-L126
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> >
> > On Mon, Oct 19, 2015 at 6:34 PM, Igor Kalnitsky  >
> > wrote:
> >>
> >> Oleg,
> >>
> >> I think we can remove this function for new releases and keep them
> >> only for backward compatibility with previous ones. Why not? If
> >> there's a way to do things better let's do them better. :)
> >>
> >> On Sat, Oct 17, 2015 at 11:50 PM, Oleg Gelbukh 
> >> wrote:
> >> > In short, because of this:
> >> >
> >> >
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/db/sqlalchemy/models/release.py#L74-L99
> >> >
> >> > Unless we use dashed 2-component version where OpenStack version comes
> >> > first, followed by version of Fuel, this will break creation of a
> >> > cluster
> >> > with given release.
> >> >
> >> > -Oleg
> >> >
> >> > On Sat, Oct 17, 2015 at 10:24 PM, Sergii Golovatiuk
> >> >  wrote:
> >> >>
> >> >> Why can't we use 'liberty' without 8.0?
> >> >>
> >> >> On Sat, 17 Oct 2015 at 19:33, Oleg Gelbukh 
> >> >> wrote:
> >> >>>
> >> >>> After closer look, the only viable option in closer term seems to be
> >> >>> 'liberty-8.0' version. It does not to break comparisons that exist
> in
> >> >>> the
> >> >>> code and allows for smooth transition.
> >> >>>
> >> >>> --
> >> >>> Best regards,
> >> >>> Oleg Gelbukh
> >> >>>
> >> >>> On Fri, Oct 16, 2015 at 5:35 PM, Igor Kalnitsky
> >> >>> 
> >> >>> wrote:
> >> 
> >>  Oleg,
> >> 
> >>  Awesome! That's what I was looking for. :)
> >> 
> >>  - Igor
> >> 
> >>  On Fri, Oct 16, 2015 at 5:09 PM, Oleg Gelbukh <
> ogelb...@mirantis.com>
> >>  wrote:
> >>  > Igor,
> >>  >
> >>  > Got your question now. Coordinated point (maintenance) releases
> are
> >>  > dropped.
> >>  > [1] [2]
> >>  >
> >>  > [1]
> >>  >
> >>  >
> http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
> >>  > [2]
> >>  >
> >>  >
> >>  >
> https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases
> >>  >
> >>  > --
> >>  > Best regards,
> >>  > Oleg Gelbukh
> >>  >
> >>  > On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky
> >>  > 
> >>  > wrote:
> >>  >>
> >>  >> Oleg,
> >>  >>
> >>  >> Yes, I know. Still you didn't answer my question - are they planning
> >>  >> to release stable branches from time to time? Like I said, Liberty is
> >>  >> something similar to 2015.2.0. How will they name a release of
> >>  >> something like 2015.2.1 (a stable release, with bugfixes)? Or do they
> >>  >> plan to drop it?
> >>  >>
> >>  >> Thanks,
> >>  >> Igor
> >>  >>
> >>  >> On Fri, Oct 16, 2015 at 1:02 PM, Oleg Gelbukh
> >>  >> 
> >>  >> wrote:
> >>  >> > Igor,
> >>  >> >
> >>  >> > The point is that there's no 2015.2.0 version anywhere in
> >>  >> > OpenStack. So
> >>  >> > every component will be versioned separately, for example, in
> >>  >> > Liberty,
> >>  >> > 

Re: [openstack-dev] [Sender Auth Failure] Re: [neutron] How could an L2 agent extension access agent methods ?

2015-12-15 Thread Frances, Margaret
Hello Ihar,

I have some comments and questions about your proposal.  My apologies if
any of what I say here results from misunderstandings on my part.

1. I believe there are two sorts of redirection at play here.  The first
involves inter-table traversal while the second allows a frame to exit the
OF pipeline either by being sent to a different port or by being dropped.
Some of what I say next makes use of this distinction.

2. OpenFlow's Goto instruction directs a frame from one table to the next.
 A redirection in this sense must be to a higher-numbered table, which is
to say that OF pipeline processing can only go forward (see p.18, para.2
of the 1.4.1 spec).  However, OvS (at
least v2.0.2) implements a resubmit action, which re-searches another
table (higher-, lower-, or even same-numbered) and executes any actions
found there in addition to any subsequent actions in the current flow
entry.  It is by using resubmit that the proposed design could work, as
shown in the ovs-ofctl command posted here.  (Maybe there are other
ways, too.)  The resubmit action is a Nicira
vendor extension that at least at one point, and maybe still, was known to
be implemented only by OvS.  I mention this because I wonder if the
proposed design (and my sample command) calls for flow traversal in a
manner not explicitly supported by OpenFlow and so may not work in future
versions of OvS.
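To make the goto/resubmit distinction concrete, here is a small sketch of how such a flow could be built for the ovs-ofctl command line (the helper name, table numbers, and match below are made up for illustration; only the `resubmit(,N)` action syntax is OvS's own):

```python
def resubmit_flow(table, priority, match, next_table):
    """Build an ovs-ofctl add-flow spec that re-enters the pipeline at
    next_table via the Nicira resubmit action. Unlike OpenFlow's
    forward-only goto_table, any table number is allowed."""
    return ("table=%d,priority=%d,%s,"
            "actions=resubmit(,%d)" % (table, priority, match, next_table))

# A frame matched in table 10 can be re-searched against table 5 --
# something a goto_table instruction would reject:
flow = resubmit_flow(10, 100, "ip,nw_dst=10.0.0.0/24", 5)
# Applied with e.g.: ovs-ofctl add-flow br-int "<flow>"
```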

3. Regarding the idea of sorting feature flows by kind: I believe that
what is meant by a 'redirection flow table' is a table that could possibly
remove the frame from OF pipeline processing (i.e., by forwarding or
dropping it).  Can you correct/confirm?

4. Even though the design promotes playing nice by means of feature flow
kinds, I think that features might nevertheless still step on each other's
toes due to assumptions made about field content.  I'm thinking, for
instance, of two features whose in-place frame modifications should be
done in a particular order.  Because of this, I'm not sure that the
granularity of the proposed design can guarantee feature cooperation.
Maybe it would help to prioritize feature flows as ingress-processing
(that is, the flow should be exercised as early as possible in the
pipeline) versus egress-processing (the opposite) in addition to kind, or
maybe that is just what the notion of feature flow kind calls for, at
least in part.  Tied (tangential?) to this is the distinction that
OpenFlow makes between an action list and an action set: the former is a
series of actions that is applied to the frame immediately and in the
order specified in the flow entry; the latter is a proper set of actions
that is applied to the frame only upon its exit from the OF pipeline and
in an order specified by protocol.  (Action set content is modified as the
frame traverses the OF pipeline.)  Should action sets be disallowed?

5. Is it a correct rephrasing of the third bullet of the high-level design
to say: each feature-specific flow entry in table 0 would be triggered
only if the frame's relevant OF metadata has not already been updated as a
result of the frame's previous traversal of the feature table.  I
apologize if I'm suggesting something here that you didn't mean.

Hope this is helpful.
Margaret
--
Margaret Frances
Eng 4, Prodt Dev Engineering



On 12/3/15, 11:29 AM, "Johnston, Nate" 
wrote:

>Ihar,
>
>This is brilliant.  The complexity of doing graceful CRUD on the OVS flow
>table, especially when other features are active, is so great that
>abstracting its management into functionality optimized for that task is
>an incredibly good idea, especially for those of us like me who are not
>experts in OVS and thus have a hard time seeing the edge cases.
>
>Thanks very much for this.
>
>-N.
>
>> On Dec 3, 2015, at 10:46 AM, Ihar Hrachyshka 
>>wrote:
>> 
>> Hi,
>> 
>> Small update on the RFE. It was approved for Mitaka, assuming we come
>>up with proper details upfront thru neutron-specs process.
>> 
>> In the meantime, we have found more use cases for flow management among
>>features in development: QoS DSCP, also the new OF based firewall
>>driver. Both authors for those new features independently realized that
>>agent does not currently play nice with flows set by external code due
>>to its graceful restart behaviour when rules with unknown cookies are
>>cleaned up. [The agent uses a random session uuid() to mark rules that
>>belong to its current run.]
>> 
>> Before I proceed, full disclosure: I know almost nothing about OpenFlow
>>capabilities, so some pieces below may make no sense. I tried to come up
>>with high level model first and then try to map it to available OF
>>features. Please don't hesitate to comment, I like to learn new stuff! ;)
>> 
>> I am 

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Igor Kalnitsky
FYI: so far (according to the poll [1]) we have

* 11 votes for keeping 9.2
* 4 votes for restoring 9.3

[1] 
https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing

On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin  wrote:
> Folks
>
> Let me add my 2c here.
>
> I am for using Postgres 9.3. Here is an additional argument to the ones
> provided by Artem, Aleksandra and others.
>
> Fuel is sometimes highly customized by our users for their specific
> needs. It has shipped Postgres 9.3 for a while, and they may well have
> gotten used to it and assumed by default that this would not change. So some
> of their respective features they are developing for their own sake may
> depend on Postgres 9.3 and we will never be able to tell the fraction of
> such use cases. Moreover, downgrading Fuel's DBMS version should
> inevitably be considered a 'deprecation' of some features our software suite
> is providing to our users. This actually means that we MUST provide our
> users with a warning and deprecation period to allow them to adjust to these
> changes. Obviously, accidental change of Postgres version does not follow
> such a policy in any way. So I see no other ways except for getting back to
> Postgres 9.3.
>
>
> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky 
> wrote:
>>
>> Hey Mike,
>>
>> Thanks for your input.
>>
>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>>
>> We still need to fix the code, i.e. change ARRAY-specific queries
>> to JSON ones throughout the code. ;)
>>
>> > there's already a mostly finished PR for SQLAlchemy support in the
>> > queue.
>>
>> Does it mean SQLAlchemy will have one unified interface to make JSON
>> queries? So we can use different backends if necessary?
>>
>> Thanks,
>> - Igor
>>
>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer  wrote:
>> >
>> >
>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>> >> Hey Julien,
>> >>
>> >>>
>> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>> >>
>> >> I believe this blueprint is about DB for OpenStack cloud (we use
>> >> Galera now), while here we're talking about DB backend for Fuel
>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
>> >> PostgreSQL now.
>> >>
>> >>> does that mean Fuel is only going to be able to run with PostgreSQL?
>> >>
>> >> Unfortunately, we are already tied to PostgreSQL. For instance, we use
>> >> PostgreSQL's ARRAY column type. Introducing a JSON column is one more
>> >> way to tighten the knot.
>> >
>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>> > MySQL has JSON as well now:
>> > https://dev.mysql.com/doc/refman/5.7/en/json.html
>> >
>> > there's already a mostly finished PR for SQLAlchemy support in the
>> > queue.
>> >
>> >
>> >
>> >>
>> >> - Igor
>> >>
>> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
>> >> wrote:
>> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>> >>>
>>  The things I want to notice are:
>> 
>>  * Currently we aren't tied up to PostgreSQL 9.3.
>>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>>  set of JSON operations.
>> >>>
>> >>> I'm curious and have just a small side question: does that mean Fuel
>> >>> is
>> >>> only going to be able to run with PostgreSQL?
>> >>>
>> >>> I also see
>> >>>
>> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>> >>> maybe it's related?
>> >>>
>> >>> Thanks!
>> >>>
>> >>> --
>> >>> Julien Danjou
>> >>> // Free Software hacker
>> >>> // https://julien.danjou.info
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing 

Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-15 Thread Jaume Devesa
No. I'm saying that I prefer python-os-midonetclient to be a project of its
own instead of being merged into the neutron plugin repo.

On 14 December 2015 at 18:43, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Mon, Dec 14, 2015 at 6:07 PM, Jaume Devesa  wrote:
>
>> +1
>>
>> I think it is good compromise. Thanks Ryu!
>>
>> I understand the CLI will belong to the external part. I much prefer to
>> have it in a separate project rather than in the plugin, even if the
>> code is tiny.
>>
>
> Let me summarize it:
>
> python-midonetclient:    Low level API that lives and breathes in
>                          midonet/midonet. Has the current cli.
> python-os-midonetclient: High level API that is in
>                          openstack/python-midonetclient (can be
>                          packaged with a different name).
>
> Are you asking for python-os-midonetclient not to include the cli tool?
>
> I would prefer to keep with the OpenStack practice [1] of having it
> together. I don't think developing a python cli client for the new
> python-os-midonetclient that is on par with the neutron cli tool would
> be that big of a task, and I think it would make operations nicer. It
> could even find the midonet-api from the zookeeper registry like the
> other tools do.
>
> [1]
> https://github.com/openstack/python-neutronclient/blob/master/setup.cfg
>
>>
>> If you just want to make midonet calls for debugging or to check the
>> MidoNet virtual infrastructure, it will be cleaner to install it without
>> dependencies than to drag in the whole neutron project
>> (networking-midonet depends on neutron).
>>
>> Regards,
>>
>> On 14 December 2015 at 17:32, Ryu Ishimoto  wrote:
>>
>>> On Tue, Dec 15, 2015 at 1:00 AM, Sandro Mathys 
>>> wrote:
>>> > On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto 
>>> wrote:
>>> >
>>> > So if I understand you correctly, you suggest:
>>> > 1) the (midonet/internal) low level API stays where it is and will
>>> > still be called python-midonetclient.
>>> > 2) the (neutron/external) high level API is moved into it's own
>>> > project and will be called something like python-os-midonetclient.
>>> >
>>> > Sounds like a good compromise which addresses the most important
>>> > points, thanks Ryu! I wasn't aware that these parts of the
>>> > python-midonetclient are so clearly distinguishable/separable but if
>>> > so, this makes perfect sense. Not perfectly happy with the naming, but
>>> > I figure it's the way to go.
>>>
>>> Thanks for the endorsement.  Yes, it is trivial to separate them (less
>>> than a day of work) because they are pretty much already separated.
>>>
>>> As for the naming, I think it's better to take a non-disruptive
>>> approach so that it's transparent to those currently developing the
>>> low level midonet client.  To your question, however, I have another
>>> suggestion, which is that for the high level client code, it may also
>>> make sense to just include that as part of the plugin.  It's such a
>>> small amount of code that it might not make sense to separate it, and
>>> it is also likely to be used only by the plugin in the future.  Which
>>> basically means
>>> that the plugin need not depend on any python client library at all.
>>> I think this will simplify even further.  It should also be ok to be
>>> tied to the plugin release cycles as well assuming that's the only
>>> place the client is needed.
>>>
>>> Cheers,
>>> Ryu
>>>
>>>
>>>
>>> >
>>> > -- Sandro
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Jaume Devesa
>> Software Engineer at Midokura
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jaume Devesa
Software Engineer at Midokura
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Plugins] Ways to improve plugin links handling in 9.0

2015-12-15 Thread Vitaly Kramskikh
Hi,

As you may know, in Fuel 8.0 we've implemented blueprint
external-dashboard-links-in-fuel-dashboard.
It will allow plugins to add links to their dashboards to the Fuel UI after
deployment. As link construction logic could be rather complex (what IP
should be used - public_vip or a separate public IP, should HTTPS be used,
etc), we decided to create a new API handler with auth exemption for POST
requests (/api/clusters/:id/plugin_links), which should be used from
post-deployment tasks of plugins. Relative links (without a protocol and a
hostname) are treated relative to public_vip (or SSL hostname in case of
enabled SSL for Horizon). Here are the examples of such post-deployment
tasks: one for an absolute url and one for a relative url.
This approach was designed during the 7.0 development cycle and looks
fine to me and some other python developers.

But by the end of the development cycle we figured out that we also
need to cover the case of plugins which install their dashboard on the
master node. We decided to go with the same approach and add the same API
handler for plugins (/api/plugins/:id/plugin_links), but without the auth
exemption. It should be used from the post_install.sh script to create
links. But the logic of that script turned out to be pretty complex:

   - It needs to fork (as post_install is run before the plugin
   registration process)
   - It needs to extract the login/password from /etc/fuel/astute.yaml to
   access the API (if they are outdated, this approach won't work; it also
   won't be possible to request actual credentials from the user, as it's a
   fork)
   - It needs to obtain a new Keystone token
   - Using this token, it should poll /api/plugins and look for the plugin
   with the needed name until it appears (after the registration process)
   - After the plugin is registered, the script should construct a url
   using the found id and send a POST request to add a link.

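To illustrate how much machinery this forces into a post_install fork, here is a rough sketch of its two central steps, polling for the plugin id and building the link-creation request (the function names, payload fields, and exact API shape here are assumptions for illustration, not the real Fuel code):

```python
import json

def find_plugin_id(plugins, name):
    """Return the id of the registered plugin with the given name, or
    None if registration has not completed yet (the caller keeps polling
    /api/plugins until this returns an id)."""
    for plugin in plugins:
        if plugin.get("name") == name:
            return plugin["id"]
    return None

def link_request(plugin_id, title, url, description=""):
    """Build the (method, path, body) triple for creating a plugin-level
    link via the handler discussed above."""
    path = "/api/plugins/%d/plugin_links" % plugin_id
    body = json.dumps({"title": title, "url": url,
                       "description": description})
    return ("POST", path, body)
```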
Registering a plugin-level link shouldn't be that complex, so we need to
think of a better approach. Do you have any ideas?

I have one: unlike cluster-level links, plugin-level links don't need
custom construction logic, as they are always relative to the master node
IP and use the same protocol, so they can be specified in plugin
metadata. We can also use metadata to describe cluster-level links in the
2 most frequent cases: relative to public_vip (the Horizon plugins case)
and for plugins which provide only one role with public_ip_required=true
and limits.max=1 (the monitoring solutions case). For more complex cases,
plugins will still use the API to create the links manually.

-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race conditions)?

2015-12-15 Thread Nikola Đipanov
On 12/15/2015 03:33 AM, Cheng, Yingxin wrote:
> 
>> -Original Message-
>> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
>> Sent: Monday, December 14, 2015 11:11 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race
>> conditions)?
>>
>> On 12/14/2015 08:20 AM, Cheng, Yingxin wrote:
>>> Hi All,
>>>
>>>
>>>
>>> When I was looking at bugs related to race conditions of scheduler
>>> [1-3], it feels like nova scheduler lacks sanity checks of schedule
>>> decisions according to different situations. We cannot even make sure
>>> that some fixes successfully mitigate race conditions to an acceptable
>>> scale. For example, there is no easy way to test whether server-group
>>> race conditions still exists after a fix for bug[1], or to make sure
>>> that after scheduling there will be no violations of allocation ratios
>>> reported by bug[2], or to test that the retry rate is acceptable in
>>> various corner cases proposed by bug[3]. And there will be much more
>>> in this list.
>>>
>>>
>>>
>>> So I'm asking whether there is a plan to add those tests in the
>>> future, or is there a design exist to simplify writing and executing
>>> those kinds of tests? I'm thinking of using fake databases and fake
>>> interfaces to isolate the entire scheduler service, so that we can
>>> easily build up a disposable environment with all kinds of fake
>>> resources and fake compute nodes to test scheduler behaviors. It is
>>> even a good way to test whether scheduler is capable to scale to 10k
>>> nodes without setting up 10k real compute nodes.
>>>
>>
>> This would be a useful effort - however do not assume that this is going
>> to be an easy task. Even in the paragraph above, you fail to take into
>> account that in order to test the scheduling you also need to run all
>> compute services, since claims work like a kind of 2-phase commit where a
>> scheduling decision gets checked on the destination compute host (through
>> Claims logic), which involves locking in each compute process.
>>
> 
> Yes, the final goal is to test the entire scheduling process including 2PC.
> As the scheduler is still in the process of being decoupled, some parts such
> as the RT and the retry mechanism are highly coupled with nova, so IMO it is
> not a good idea to include them at this stage. I'll try to isolate the
> filter scheduler as a first step, and hope to be supported by the community.
> 
> 
>>>
>>>
>>> I'm also interested in the bp[4] to reduce scheduler race conditions
>>> in green-thread level. I think it is a good start point in solving the
>>> huge racing problem of nova scheduler, and I really wish I could help on 
>>> that.
>>>
>>
>> I proposed said blueprint but am very unlikely to have any time to work on 
>> it this
>> cycle, so feel free to take a stab at it. I'd be more than happy to 
>> prioritize any
>> reviews related to the above BP.
>>
>> Thanks for your interest in this
>>
>> N.
>>
> 
> Many thanks nikola! I'm still looking at the claim logic and try to find a 
> way to merge
> it with scheduler host state, will upload patches as soon as I figure it out. 
> 

Great!

Note that that step is not necessary - and indeed it may not be the best
place to start. We already have code duplication between the claims and
(what has only recently been renamed) consume_from_request, so removing
it is a nice-to-have but really not directly related to fixing the races.

Also, after Sylvain's work here https://review.openstack.org/#/c/191251/
it will be trickier to do, as the scheduler side now uses the RequestSpec
object instead of Instance, which is not sent over to compute nodes.

I'd personally leave that for last.
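As an aside, the lost-update race that the host-state-level-locking blueprint targets is easy to reproduce in isolation. The sketch below forces two threads to read the same stale host state before either writes it back (the HostState class is a made-up stand-in, not nova's):

```python
import threading

class HostState(object):
    def __init__(self, free_slots):
        self.free_slots = free_slots

def racy_claim(host, barrier, results):
    snapshot = host.free_slots          # read...
    barrier.wait()                      # ...both threads now hold the same stale value
    if snapshot > 0:
        host.free_slots = snapshot - 1  # lost update: both writes "succeed"
        results.append(True)
    else:
        results.append(False)

def locked_claim(host, lock, results):
    with lock:                          # read-modify-write is now atomic
        if host.free_slots > 0:
            host.free_slots -= 1
            results.append(True)
        else:
            results.append(False)

racy_host, racy_results = HostState(1), []
barrier = threading.Barrier(2)
threads = [threading.Thread(target=racy_claim,
                            args=(racy_host, barrier, racy_results))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert racy_results.count(True) == 2    # one slot, two claims: over-commit

host, results, lock = HostState(1), [], threading.Lock()
threads = [threading.Thread(target=locked_claim, args=(host, lock, results))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert results.count(True) == 1         # the lock serializes the claims
```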

N.

> 
>>>
>>>
>>>
>>>
>>> [1] https://bugs.launchpad.net/nova/+bug/1423648
>>>
>>> [2] https://bugs.launchpad.net/nova/+bug/1370207
>>>
>>> [3] https://bugs.launchpad.net/nova/+bug/1341420
>>>
>>> [4]
>>> https://blueprints.launchpad.net/nova/+spec/host-state-level-locking
>>>
>>>
>>>
>>>
>>>
>>> Regards,
>>>
>>> -Yingxin
>>>
> 
> 
> 
> Regards,
> -Yingxin
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Bulat Gaifulin for fuel-web & fuel-mirror cores

2015-12-15 Thread Anastasia Urlapova
+1

On Mon, Dec 14, 2015 at 3:20 PM, Roman Vyalov  wrote:

> +1
>
> On Mon, Dec 14, 2015 at 3:05 PM, Aleksey Kasatkin 
> wrote:
>
>> +1.
>>
>>
>> Aleksey Kasatkin
>>
>>
>> On Mon, Dec 14, 2015 at 12:49 PM, Vladimir Sharshov <
>> vshars...@mirantis.com> wrote:
>>
>>> Hi,
>>>
>>> +1 from me to Bulat.
>>>
>>> On Mon, Dec 14, 2015 at 1:03 PM, Igor Kalnitsky >> > wrote:
>>>
 Hi Fuelers,

 I'd like to nominate Bulat Gaifulin [1] for

 * fuel-web-core [2]
 * fuel-mirror-core [3]

 Bulat's doing a really good review with detailed feedback and he's a
 regular participant in IRC. He's co-author of packetary and
 fuel-mirror projects, and he made valuable contribution to fuel-web
 (e.g. task-based deployment engine).

 Fuel Cores, please reply back with +1/-1.

 - Igor

 [1] http://stackalytics.com/?module=fuel-web_id=bgaifullin
 [2] http://stackalytics.com/report/contribution/fuel-web/90
 [3] http://stackalytics.com/report/contribution/fuel-mirror/90


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Bulat Gaifulin for fuel-web & fuel-mirror cores

2015-12-15 Thread Evgeniy L
+1

On Tue, Dec 15, 2015 at 12:33 PM, Anastasia Urlapova  wrote:

> +1
>
> On Mon, Dec 14, 2015 at 3:20 PM, Roman Vyalov 
> wrote:
>
>> +1
>>
>> On Mon, Dec 14, 2015 at 3:05 PM, Aleksey Kasatkin > > wrote:
>>
>>> +1.
>>>
>>>
>>> Aleksey Kasatkin
>>>
>>>
>>> On Mon, Dec 14, 2015 at 12:49 PM, Vladimir Sharshov <
>>> vshars...@mirantis.com> wrote:
>>>
 Hi,

 +1 from me to Bulat.

 On Mon, Dec 14, 2015 at 1:03 PM, Igor Kalnitsky <
 ikalnit...@mirantis.com> wrote:

> Hi Fuelers,
>
> I'd like to nominate Bulat Gaifulin [1] for
>
> * fuel-web-core [2]
> * fuel-mirror-core [3]
>
> Bulat's doing a really good review with detailed feedback and he's a
> regular participant in IRC. He's co-author of packetary and
> fuel-mirror projects, and he made valuable contribution to fuel-web
> (e.g. task-based deployment engine).
>
> Fuel Cores, please reply back with +1/-1.
>
> - Igor
>
> [1] http://stackalytics.com/?module=fuel-web_id=bgaifullin
> [2] http://stackalytics.com/report/contribution/fuel-web/90
> [3] http://stackalytics.com/report/contribution/fuel-mirror/90
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Team meeting this Tuesday at 1400UTC

2015-12-15 Thread Andreas Scheuring
I want to quickly share the status on the modular l2 agent (common
agent) refactoring blueprint [1] - I'm not able to attend this
afternoon...


- Before Christmas, I would like to see the extension manager patchset
merged [2]
- The goal for Mitaka-2 is to make the split between the common part and
the lb-specific part of the lb agent [3]. This is an optimistic goal, as
there's not much time left for review in the new year due to the holiday
season, but let's see!
- The goal for Mitaka-3 is to move that common agent into a new file and
implement a driver for macvtap [4]

In parallel to this effort I'm actively reviewing sriovnicagent patches
to make sure new implementations will be compliant with the new common
agent (or modular l2 agent).


Thanks!


[1] https://blueprints.launchpad.net/neutron/+spec/modular-l2-agent
[2] https://review.openstack.org/250542
[3] https://review.openstack.org/246318
[4] https://bugs.launchpad.net/neutron/+bug/1480979



-- 
Andreas
(IRC: scheuran)



On Mo, 2015-12-14 at 12:33 -0800, Armando M. wrote:
> Hi neutrinos,
> 
> 
> A kind reminder for this week's meeting.
> 
> 
> For this week we'll continue with the post-milestone format: we'll
> continue talking about blueprints/specs and RFEs. We'll be brief on
> announcements and bugs, and skip the other sections, docs and open
> agenda. More details on [1].
> 
> 
> Also, please join me in wishing good riddance to the 'Apologies for
> absence' section. We don't call the roll during the meeting, as
> attendance is not required, even though strongly encouraged.
> 
> 
> Cheers,
> Armando
> 
> [1] https://wiki.openstack.org/wiki/Network/Meetings
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Michał Dulko
On 12/14/2015 03:59 PM, Ryan Rossiter wrote:
> Hi everyone,
>
> I have a change submitted that lays the groundwork for using custom enums and 
> fields that are used by versioned objects [1]. These custom fields allow for 
> verification on a set of valid values, which prevents the field from being 
> mistakenly set to something invalid. These custom fields are best suited for 
> StringFields that are only assigned certain exact strings (such as a status, 
> format, or type). Some examples for Nova: PciDevice.status, 
> ImageMetaProps.hw_scsi_model, and BlockDeviceMapping.source_type.
>
> These new enums (that are consumed by the fields) are also great for 
> centralizing constants for hard-coded strings throughout the code. For 
> example (using [1]):
>
> Instead of
> if backup.status == ‘creating’:
> 
>
> We now have
> if backup.status == fields.BackupStatus.CREATING:
> 
>
> Granted, this causes a lot of brainless line changes that make for a lot of 
> +/-, but it centralizes a lot. In changes like this, I hope I found all of 
> the occurrences of the different backup statuses, but GitHub search and grep 
> can only do so much. If it turns out this gets in and I missed a string or 
> two, it’s not the end of the world, just push up a follow-up patch to fix up 
> the missed strings. That part of the review is not affected in any way by the 
> RPC/object versioning.
>
> Speaking of object versioning, notice in cinder/objects/backup.py the version 
> was updated to accommodate the new field type. The underlying data passed 
> over RPC has not changed, but this is done for compatibility with older 
> versions that may not have obeyed the set of valid values.
>
> [1] https://review.openstack.org/#/c/256737/
>
>
> -
> Thanks,
>
> Ryan Rossiter (rlrossit)

Thanks for starting this work with formalizing the statuses, I've
commented on the review with a few remarks.

I think we should start a blueprint or bug report to be able to track these
efforts.
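For readers who don't want to open the review, the pattern Ryan describes can be reduced to a few lines. This sketch mimics an oslo.versionedobjects-style enum/field pair without depending on the library (the names follow Ryan's example; the class shapes are simplified assumptions, not the real o.vo API):

```python
class BackupStatus(object):
    """Centralizes the hard-coded status strings in one place."""
    CREATING = 'creating'
    AVAILABLE = 'available'
    DELETING = 'deleting'
    ERROR = 'error'
    ALL = (CREATING, AVAILABLE, DELETING, ERROR)

class BackupStatusField(object):
    """Rejects values outside the valid set, the way a versioned-object
    enum field would, instead of silently storing a typo."""
    def coerce(self, value):
        if value not in BackupStatus.ALL:
            raise ValueError("%r is not a valid backup status" % value)
        return value

field = BackupStatusField()
assert field.coerce(BackupStatus.CREATING) == 'creating'
try:
    field.coerce('craeting')   # a typo that a bare string compare would miss
except ValueError:
    pass
```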


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Julien Danjou
On Mon, Dec 14 2015, Igor Kalnitsky wrote:

> The things I want to notice are:
>
> * Currently we aren't tied up to PostgreSQL 9.3.
> * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
> set of JSON operations.

I'm curious and have just a small side question: does that mean Fuel is
only going to be able to run with PostgreSQL?

I also see
https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
maybe it's related?

Thanks!

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Ways to improve plugin links handling in 9.0

2015-12-15 Thread Vitaly Kramskikh
Igor,

2015-12-15 13:14 GMT+03:00 Igor Kalnitsky :

> Hey Vitaly,
>
> I agree that having a lot of logic (receiving an auth token, creating
> a payload and doing a POST request) in the RPM post_install section is a huge
> overhead, and it's definitely not the way to go. We have to find a better
> solution, and I think it should be done declaratively (via some YAML).
>
> Moreover, I'd like to see the same approach for cluster's dashboard. I
> see no reason why YAML + custom formatting wouldn't be enough.
>

Cluster-level link-building logic is more complex in the case of absolute URLs:
the dashboards can be located on a separate IP or a VIP in the case of multiple
nodes, they may or may not use HTTPS, and this may depend on the plugin's settings
and/or the number of nodes, etc. If we could cover all the cases with a YAML
description, that would be perfect, but I don't think that's possible.
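For illustration, a hedged sketch of what a post-deployment task might construct when calling the cluster-level handler discussed in this thread. The payload field names (`title`, `url`, `description`) are assumptions based on the description here, not the actual Fuel API schema:

```python
# Hypothetical helper for the cluster-level plugin_links handler.
# Field names are assumptions, not the real Fuel API schema.

def build_plugin_link_request(cluster_id, title, url, description=None):
    endpoint = '/api/clusters/%d/plugin_links' % cluster_id
    payload = {'title': title, 'url': url}
    if description is not None:
        payload['description'] = description
    # A relative url (no scheme/host) would be resolved against public_vip
    # (or the SSL hostname when SSL is enabled for Horizon).
    return endpoint, payload
```

The point of the thread stands: the hard part is not building this request, but deciding the absolute URL (IP vs. VIP, HTTPS or not), which is why a purely declarative YAML description may not cover every case.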


>
> - Igor
>
> On Tue, Dec 15, 2015 at 11:53 AM, Vitaly Kramskikh
>  wrote:
> > Hi,
> >
> > As you may know, in Fuel 8.0 we've implemented blueprint
> > external-dashboard-links-in-fuel-dashboard. It will allow plugins to add
> > links to their dashboards to the Fuel UI after deployment. As link
> > construction logic could be rather complex (what IP should be used -
> > public_vip or a separate public IP, should HTTPS be used, etc), we
> decided
> > to create a new API handler with auth exemption for POST requests
> > (/api/clusters/:id/plugin_links), which should be used from
> post-deployment
> > tasks of plugins. Relative links (without a protocol and a hostname) are
> > treated relative to public_vip (or SSL hostname in case of enabled SSL
> for
> > Horizon). Here are the examples of such post-deployment tasks: for
> absolute
> > url and for relative url. This approach was designed during the 7.0
> > development cycle and looks fine to me and some other Python developers.
> >
> > But by the end of the development cycle we figured out that we also
> need
> > to cover the case for plugins which install their dashboard on the master
> > node. We decided to go with the same approach and add the same API handler
> for
> > plugins (/api/plugins/:id/plugin_links), but without auth exemption. It
> > should be used from post_install.sh script to create links. But the
> logic of
> > the script appeared to be pretty complex:
> >
> > It needs to fork (as post_install is run before the plugin registration
> > process)
> > It needs to extract login/password from /etc/fuel/astute.yaml to access
> API
> > (so in case they are outdated this approach won't work; it won't also be
> > possible to request actual credentials from the user as it's a fork)
> > It needs to obtain a new Keystone token
> > Using this token, it should poll /api/plugins and look for the plugin
> with
> > the needed name until it appears (after registration process)
> > After the plugin is registered, script should construct a url using the
> > found id and send a POST request to add a link.
> >
> > Registering a plugin-level link shouldn't be that complex and we need to
> > think for a better approach. Do you have any ideas?
> >
> > I have one: unlike cluster-level links, plugin-level links don't need
> custom
> > construction logic as they are always relative to the master node IP and
> use
> > the same protocol, so that they can be specified in plugin metadata. We
> also
> > can use metadata to describe cluster-level links in the 2 most frequent cases:
> > relative to public_vips (Horizon plugins case) and for plugins which
> provide
> > only one role with public_ip_required=true and limits.max=1 (monitoring
> > solutions case). For more complex cases plugins will still use the API to
> > create the links manually.
> >
> >
> > --
> > Vitaly Kramskikh,
> > Fuel UI Tech Lead,
> > Mirantis, Inc.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Igor Kalnitsky
Hey Julien,

> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql

I believe this blueprint is about the DB for the OpenStack cloud (we use
Galera now), while here we're talking about the DB backend for Fuel
itself. Fuel has a separate node (the so-called Fuel Master) and we use
PostgreSQL now.

> does that mean Fuel is only going to be able to run with PostgreSQL?

Unfortunately, we are already tied to PostgreSQL. For instance, we use
PostgreSQL's ARRAY column type. Introducing a JSON column is one more
way to tighten that knot.
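As a side illustration of the lock-in point (an editor's sketch, not Fuel code): PostgreSQL's ARRAY type has no direct equivalent on other backends, whereas serializing the list into a plain text column stays portable, at the cost of losing ARRAY operators on the SQL side:

```python
import json

# Backend-agnostic alternative to PostgreSQL's ARRAY column: store the
# list as JSON text. Portable across PostgreSQL/MySQL/SQLite, but SQL-side
# ARRAY operators (ANY, array_agg, etc.) are no longer available, so any
# filtering on the list contents moves into application code.

def roles_to_column(roles):
    return json.dumps(roles)

def roles_from_column(raw):
    return json.loads(raw)
```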

- Igor

On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou  wrote:
> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>
>> The things I want to notice are:
>>
>> * Currently we aren't tied up to PostgreSQL 9.3.
>> * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>> set of JSON operations.
>
> I'm curious and have just a small side question: does that mean Fuel is
> only going to be able to run with PostgreSQL?
>
> I also see
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> maybe it's related?
>
> Thanks!
>
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Ways to improve plugin links handling in 9.0

2015-12-15 Thread Igor Kalnitsky
Hey Vitaly,

I agree that having a lot of logic (receiving an auth token, creating
a payload and doing a POST request) in the RPM post_install section is a huge
overhead, and it's definitely not the way to go. We have to find a better
solution, and I think it should be done declaratively (via some YAML).

Moreover, I'd like to see the same approach for cluster's dashboard. I
see no reason why YAML + custom formatting wouldn't be enough.

- Igor

On Tue, Dec 15, 2015 at 11:53 AM, Vitaly Kramskikh
 wrote:
> Hi,
>
> As you may know, in Fuel 8.0 we've implemented blueprint
> external-dashboard-links-in-fuel-dashboard. It will allow plugins to add
> links to their dashboards to the Fuel UI after deployment. As link
> construction logic could be rather complex (what IP should be used -
> public_vip or a separate public IP, should HTTPS be used, etc), we decided
> to create a new API handler with auth exemption for POST requests
> (/api/clusters/:id/plugin_links), which should be used from post-deployment
> tasks of plugins. Relative links (without a protocol and a hostname) are
> treated relative to public_vip (or SSL hostname in case of enabled SSL for
> Horizon). Here are the examples of such post-deployment tasks: for absolute
> url and for relative url. This approach was designed during the 7.0
> development cycle and looks fine to me and some other Python developers.
>
> But by the end of the development cycle we figured out that we also need
> to cover the case for plugins which install their dashboard on the master
> node. We decided to go with the same approach and add the same API handler for
> plugins (/api/plugins/:id/plugin_links), but without auth exemption. It
> should be used from post_install.sh script to create links. But the logic of
> the script appeared to be pretty complex:
>
> It needs to fork (as post_install is run before the plugin registration
> process)
> It needs to extract login/password from /etc/fuel/astute.yaml to access API
> (so in case they are outdated this approach won't work; it won't also be
> possible to request actual credentials from the user as it's a fork)
> It needs to obtain a new Keystone token
> Using this token, it should poll /api/plugins and look for the plugin with
> the needed name until it appears (after registration process)
> After the plugin is registered, script should construct a url using the
> found id and send a POST request to add a link.
>
> Registering a plugin-level link shouldn't be that complex and we need to
> think for a better approach. Do you have any ideas?
>
> I have one: unlike cluster-level links, plugin-level links don't need custom
> construction logic as they are always relative to the master node IP and use
> the same protocol, so that they can be specified in plugin metadata. We also
> can use metadata to describe cluster-level links in the 2 most frequent cases:
> relative to public_vips (Horizon plugins case) and for plugins which provide
> only one role with public_ip_required=true and limits.max=1 (monitoring
> solutions case). For more complex cases plugins will still use the API to
> create the links manually.
>
>
> --
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Different versions for different components

2015-12-15 Thread Roman Prykhodchenko
Folks,

I can see that the version for the python-fuelclient package is already [1] set to 
8.0.0. However, there’s still no corresponding tag, and so the version was not 
released to PyPI.
The question is: is it finally safe to tag different versions for different 
components? As for Fuel client, we need to tag 8.0.0 to push a Debian package 
for it.


1. https://github.com/openstack/python-fuelclient/blob/master/setup.cfg#L3 



- romcheg



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Team meeting this Tuesday at 1400UTC

2015-12-15 Thread Neil Jerram
On 14/12/15 20:37, Armando M. wrote:
> Hi neutrinos,
>
> A kind reminder for this week's meeting.
>
> For this week we'll continue with the post-milestone format: we'll
> continue talking about blueprints/specs and RFEs. We'll be brief on
> announcements and bugs, and skip the other sections, docs and open
> agenda. More details on [1].
>
> Also, please join me in wishing good riddance to the 'Apologies for
> absence' section. We don't call the roll during the meeting, as
> attendance is not required, even though strongly encouraged.
>
> Cheers,
> Armando
>
> [1] https://wiki.openstack.org/wiki/Network/Meetings
> 

I'm afraid I can't make the meeting today, but will check the logs
afterwards.

One thing that (I think) is not on the agenda is next steps from the
recent stadium discussions.  Perhaps that should be added for next
week's meeting?

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Rolling upgrades

2015-12-15 Thread Michał Dulko
Hi,

At the meeting recently it was mentioned that our rolling upgrades
efforts are pursuing an "elusive unicorn" that makes development a lot
more complicated and restricted. I want to try to clarify this a bit,
explain the strategy more and give an update on the status of the whole
affair.

So first of all - it's definitely achievable, as Nova has supported rolling
upgrades since Kilo. It makes developers' lives harder, but the feature is
useful: e.g. CERN was able to upgrade their compute nodes after the control
plane services in their enormously big environment during their Juno->Kilo
upgrade [1].

Rolling upgrades are all about interoperability of services running in
different versions. We want to give operators the ability to upgrade service
instances one-by-one, starting from c-api, through c-sch, to c-vol and
c-bak. Moreover, we want to be sure that old and new versions of a single
service can coexist. This means we need to be backward compatible with
at least one previous release. There are 3 planes on which
incompatibilities may happen:
* API of RPC methods
* Structure of composite data sent over RPC
* DB schemas

API of RPC methods
--
Here we're strictly following Nova's solution described in [2]. We need
to support RPC version pinning, so each RPC API addition needs to be
versioned, and we need to be able to downgrade the request to the required
version in the rpcapi.py modules. On the other side, manager.py should be
able to process the request even when it doesn't receive a newly added
parameter. There are already some examples of this approach in tree
([3], [4]). Until the upgrade is completed, the RPC API version is pinned,
so everything stays compatible with the older release. Once only new
services are running, the pin may be released.
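A standalone sketch of that pinning pattern (method names and version numbers are invented; the version-check idea mirrors oslo.messaging's RPCClient, but this is not Cinder's actual rpcapi.py/manager.py):

```python
# Sender side (rpcapi.py analogue): downgrade the call when the pin
# predates the argument that was added in the newer RPC API version.
class VolumeRPCAPI:
    RPC_API_VERSION = '1.9'  # newest version this code understands

    def __init__(self, pinned_version):
        # During a rolling upgrade the operator pins to the old release.
        self.pinned = pinned_version

    def _can_send(self, version):
        as_tuple = lambda v: tuple(int(p) for p in v.split('.'))
        return as_tuple(version) <= as_tuple(self.pinned)

    def create_volume(self, volume, new_param=None):
        if self._can_send('1.9'):
            return '1.9', {'volume': volume, 'new_param': new_param}
        # Old peers don't know new_param; drop it from the request.
        return '1.8', {'volume': volume}

# Receiver side (manager.py analogue): tolerate the missing parameter
# so requests from old senders still succeed.
def create_volume_manager(volume, new_param=None):
    if new_param is None:
        new_param = 'fallback'
    return volume, new_param
```

Once every service runs the new release, releasing the pin lets `create_volume` send version 1.9 unconditionally.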

Structure of composite data sent over RPC
-
Again, RPC version pinning is utilized, with the addition of versioned
objects. Before sending an object we translate it down to the lower
version - according to the version pin. This makes sure that the object
can be understood by older services. Note that newer services can
translate the object back to the new version when receiving an old one.
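A minimal sketch of that translation step, loosely modeled on the `obj_make_compatible` idea from oslo.versionedobjects (the field name and versions here are invented):

```python
# Map each field to the object version that introduced it; downgrading a
# serialized object ("primitive") to an older target strips fields that
# version never knew about, so old services can still deserialize it.
FIELDS_ADDED_IN = {'temp_snapshot_id': (1, 1)}

def obj_make_compatible(primitive, target_version):
    target = tuple(int(p) for p in target_version.split('.'))
    for field, appeared_in in FIELDS_ADDED_IN.items():
        if target < appeared_in:
            primitive.pop(field, None)
    return primitive
```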

DB schemas
--
This is a hard one. We've needed to adapt the approach described in [5] to
our needs, as we're calling the DB from all of our services and not only
from nova-conductor as Nova does. This means that in the case of a
non-backward-compatible migration we need to stretch the process over
3 releases. The good news is that we haven't needed such a migration since
Juno (in M we have a few candidates… :(). The process for Cinder is
described in [6]. In general we want to ban migrations that are
non-backward compatible or exclusively lock a table for an extended
period of time ([7] is a good source of truth for MySQL) and allow them
only if they follow the 3-release migration period (so that the N+2 release
has no notion of a column or table and we can drop it).
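To make the 3-release split concrete, here is a hedged sketch of renaming a column without a contracting migration, using an in-memory sqlite database for brevity. The real thing would be alembic/sqlalchemy migrations, and the table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE backups (id INTEGER, display_name TEXT)')

# Release N (expand): add the new column; old services still write the
# old one, so both columns exist side by side.
conn.execute('ALTER TABLE backups ADD COLUMN name TEXT')

# Release N+1: code reads/writes only the new column; an online data
# migration backfills rows written by release N services.
conn.execute("INSERT INTO backups (id, display_name) VALUES (1, 'b1')")
conn.execute('UPDATE backups SET name = display_name WHERE name IS NULL')

# Release N+2 (contract): nothing references display_name any more, so
# dropping it is finally safe (ALTER TABLE ... DROP COLUMN, omitted here).
row = conn.execute('SELECT name FROM backups WHERE id = 1').fetchone()
```

Every individual step is backward compatible and avoids an extended exclusive table lock; only the full 3-release sequence completes the rename.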

Right now we're finishing the oslo.versionedobjects adoption -
outstanding patches can be found in [8] (there are still a few to come -
look at the table at the bottom of [9]). For DB schema upgrades
we've merged the spec, and a test banning contracting migrations
is in review [10]. For RPC API compatibility, I'm actively
reviewing the patches to make sure every change there is done properly.

Apart from that in the backlog is documenting all this in devref and
implementing partial upgrade Grenade tests that will gate on version
interoperability.

I hope this clarifies a bit how we're progressing to be able to upgrade
Cinder with minimal or no downtime.

[1]
http://openstack-in-production.blogspot.de/2015/11/our-cloud-in-kilo.html
[2] http://www.danplanet.com/blog/2015/10/05/upgrades-in-nova-rpc-apis/
[3]
https://github.com/openstack/cinder/blob/12e4d9236/cinder/scheduler/rpcapi.py#L89-L93
[4]
https://github.com/openstack/cinder/blob/12e4d9236/cinder/scheduler/manager.py#L124-L128
[5]
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
[6]
https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html
[7]
https://dev.mysql.com/doc/refman/5.7/en/innodb-create-index-overview.html
[8]
https://review.openstack.org/#/q/branch:master+topic:bp/cinder-objects,n,z
[9] https://etherpad.openstack.org/p/cinder-rolling-upgrade
[10]
https://review.openstack.org/#/q/branch:master+topic:bp/online-schema-upgrades,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Artem Silenkov
Hello!

I got another few points against downgrading.

1. PostgreSQL 9.2 will reach end-of-life in September 2017 according to [0].
With high probability that means we will still have the 9.2 version in CentOS
repos when Fuel 9.0 arrives.
It means that we will have to repackage it anyway, just a little bit later.

2. 9.2 is slightly incompatible with 9.3, according to [1].
Downgrading is not an easy task: pg_dump and pg_restore from different package
versions can't work together.

3. Shared memory usage is different between 9.2 and 9.3, which could
bring some troubles and would require reworking the config file.


[0]: http://www.postgresql.org/support/versioning/
[1]: http://www.postgresql.org/docs/9.3/static/release-9-3.html

Off-topic, sorry for this ->
If we want to reduce the number of packages we maintain, we should start with
Ruby, e.g.
the gems we use were deprecated like 5 years ago, and repackaging
unsupported software brings a lot of effort to the table.

Regards,

Artem Silenkov
---
MOS-Packaging

On Tue, Dec 15, 2015 at 1:28 PM, Julien Danjou  wrote:

> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>
> > The things I want to notice are:
> >
> > * Currently we aren't tied up to PostgreSQL 9.3.
> > * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
> > set of JSON operations.
>
> I'm curious and have just a small side question: does that mean Fuel is
> only going to be able to run with PostgreSQL?
>
> I also see
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> maybe it's related?
>
> Thanks!
>
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-15 Thread Dan Mihai Dumitriu
Just leave it as is. This whole thread is a waste of time.
On Dec 15, 2015 18:52, "Jaume Devesa"  wrote:

> No. I'm saying that I prefer python-os-midonetclient to be a project by
> its own
> instead of being merged inside the neutron plugin repo.
>
> On 14 December 2015 at 18:43, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Mon, Dec 14, 2015 at 6:07 PM, Jaume Devesa  wrote:
>>
>>> +1
>>>
> >>> I think it is a good compromise. Thanks Ryu!
>>>
>>> I understand the CLI will belong to the external part. I much prefer to
>>> have
>>> it in a separate project rather than into the plugin. Even if the code
>>> is tiny.
>>>
>>
>> Let me summarize it:
>>
>> python-midonetclient: Low level API that lives and breathes in
>> midonet/midonet.
>> Has the current cli.
>> python-os-midonetclient: High level API that is in
>> openstack/python-midonetclient
>>  (can be packaged with a
>> different name).
>>
>> Are you asking for python-os-midonetclient not to include the cli tool?
>>
>> I would prefer to keep with the OpenStack practice [1] of having it
>> together. I don't
>> think developing a python cli client for the new python-os-midonetclient
>> that is
>> on par with the neutron cli tool would be that big of a task and I think
>> it would
>> make operation nicer. It could even find the midonet-api from the
>> zookeeper
>> registry like the other tools do.
>>
>> [1]
>> https://github.com/openstack/python-neutronclient/blob/master/setup.cfg
>>
>>>
>>> If you will want to just do midonet calls for debugging or check the
>>> MidoNet
>>> virtual infrastructure, it will be cleaner to install it without
>>> dependencies than
>>> dragging the whole neutron project (networking-midonet depends on
>>> neutron).
>>>
>>> Regards,
>>>
>>> On 14 December 2015 at 17:32, Ryu Ishimoto  wrote:
>>>
 On Tue, Dec 15, 2015 at 1:00 AM, Sandro Mathys 
 wrote:
 > On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto 
 wrote:
 >
 > So if I understand you correctly, you suggest:
 > 1) the (midonet/internal) low level API stays where it is and will
 > still be called python-midonetclient.
 > 2) the (neutron/external) high level API is moved into it's own
 > project and will be called something like python-os-midonetclient.
 >
 > Sounds like a good compromise which addresses the most important
 > points, thanks Ryu! I wasn't aware that these parts of the
 > python-midonetclient are so clearly distinguishable/separable but if
 > so, this makes perfect sense. Not perfectly happy with the naming, but
 > I figure it's the way to go.

 Thanks for the endorsement.  Yes, it is trivial to separate them (less
 than a day of work) because they are pretty much already separated.

 As for the naming, I think it's better to take a non-disruptive
 approach so that it's transparent to those currently developing the
 low level midonet client.  To your question, however, I have another
 suggestion, which is that for the high level client code, it may also
 make sense to just include that as part of the plugin.  It's such
 small code that it might not make sense to separate, and also likely
 to be used only by the plugin in the future.  Which basically means
 that the plugin need not depend on any python client library at all.
 I think this will simplify even further.  It should also be ok to be
 tied to the plugin release cycles as well assuming that's the only
 place the client is needed.

 Cheers,
 Ryu



 >
 > -- Sandro


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Jaume Devesa
>>> Software Engineer at Midokura
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [Fuel] Different versions for different components

2015-12-15 Thread Aleksandra Fedorova
Roman,

we use the 8.0 version everywhere in the Fuel code _before_ the 8.0 release. We
don't use a bump-after-release approach; rather we bump the version, run a
development and test cycle, then create the release and tag it.

In more details:

1) there is a master branch, in which development for upcoming release
(currently 8.0) happens. All hardcoded version parameters in master
branch are set to 8.0.

2) at Soft Code Freeze (which is one week from now) we create the
stable/8.0 branch from the current master. Then we immediately bump the
versions in the master branches of all Fuel projects to 9.0.
After SCF we have the stable/8.0 branch with the 8.0 version and master with
9.0, but bugfixing is still in progress, so there may still be
changes in the stable/8.0 code.

3) On RTM day we finally create 8.0 tags on the stable/8.0 branch, and
this is the time when we should release packages to PyPI and other
resources.



On Tue, Dec 15, 2015 at 2:03 PM, Roman Prykhodchenko  wrote:
> Folks,
>
> I can see that version for python-fuelclient package is already [1] set to
> 8.0.0. However, there’s still no corresponding tag and so the version was
> not released to PyPI.
> The question is: is it finally safe to tag different versions for different
> components? As for Fuel client, we need to tag 8.0.0 to push a Debian package
> for it.
>
>
> 1. https://github.com/openstack/python-fuelclient/blob/master/setup.cfg#L3
>
>
> - romcheg
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Aleksandra Fedorova
Fuel CI Team Lead
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-15 Thread Egor Guz
Clark,


What about ephemeral storage on the OVH VMs? I see many storage-related errors
(see full output below) these days.
Basically it means Docker cannot create a storage device on the local drive.

-- Logs begin at Mon 2015-12-14 06:40:09 UTC, end at Mon 2015-12-14 07:00:38 
UTC. --
Dec 14 06:45:50 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Stopped 
Docker Application Container Engine.
Dec 14 06:47:54 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Starting 
Docker Application Container Engine...
Dec 14 06:48:00 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: Warning: 
'-d' is deprecated, it will be removed soon. See usage.
Dec 14 06:48:00 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: 
time="2015-12-14T06:48:00Z" level=warning msg="please use 'docker daemon' 
instead."
Dec 14 06:48:03 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: 
time="2015-12-14T06:48:03.447936206Z" level=info msg="Listening for HTTP on 
unix (/var/run/docker.sock)"
Dec 14 06:48:06 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: 
time="2015-12-14T06:48:06.280086735Z" level=fatal msg="Error starting daemon: 
error initializing graphdriver: Non existing device docker-docker--pool"
Dec 14 06:48:06 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: 
docker.service: main process exited, code=exited, status=1/FAILURE
Dec 14 06:48:06 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Failed to 
start Docker Application Container Engine.
Dec 14 06:48:06 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Unit 
docker.service entered failed state.
Dec 14 06:48:06 

 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: 
docker.service failed.


http://logs.openstack.org/58/251158/3/check/gate-functional-dsvm-magnum-k8s/5ed0e01/logs/bay-nodes/worker-test_replication_controller_apis-172.24.5.11/docker.txt.gz


—
Egor




On 12/13/15, 10:51, "Clark Boylan"  wrote:

>On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
>> Hi,
>> 
>> As Kai Qiang mentioned, magnum gate recently had a bunch of random
>> failures, which occurred on creating a nova instance with 2G of RAM.
>> According to the error message, it seems that the hypervisor tried to
>> allocate memory to the nova instance but couldn’t find enough free memory
>> in the host. However, by adding a few “nova hypervisor-show XX” before,
>> during, and right after the test, it showed that the host has 6G of free
>> RAM, which is far more than 2G. Here is a snapshot of the output [1]. You
>> can find the full log here [2].
>If you look at the dstat log
>http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/screen-dstat.txt.gz
>the host has nowhere near 6GB free memory and less than 2GB. I think you
>actually are just running out of memory.
>> 
>> Another observation is that most of the failure happened on a node with
>> name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
>> at http://logstash.openstack.org/ ). It seems that the 

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Dmitry Teselkin
Hello,

I made an attempt to gather all the valuable points 'for' and 'against'
9.2.x in one document [1]. Please take a look at it; I also put some
comments there to keep everything in one place. I believe this can help
us make a deliberate decision.

Please add more pros / cons there, as I don't claim to have painted the
full picture on the first attempt.

Just in case, I'd prefer to 'downgrade' to 9.2 :)

[1] https://etherpad.mirantis.net/p/7ZUruwlwJM

On Tue, 15 Dec 2015 20:47:41 +0200
Igor Kalnitsky  wrote:

> FYI: so far (according to poll [1]) we have
> 
> * 11 votes for keeping 9.2
> * 4 votes for restoring 9.3
> 
> [1]
> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
> 
> On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin
>  wrote:
> > Folks
> >
> > Let me add my 2c here.
> >
> > I am for using Postgres 9.3. Here is an additional argument to the
> > ones provided by Artem, Aleksandra and others.
> >
> > Fuel is sometimes highly customized by our users for their
> > specific needs. It has been Postgres 9.3 for a while, and they might
> > just as well have gotten used to it and assumed by default that this
> > would not change. So some of the respective features they are
> > developing for their own sake may depend on Postgres 9.3, and we
> > will never be able to tell the fraction of such use cases.
> > Moreover, downgrading the DBMS version of Fuel should inevitably be
> > considered a 'deprecation' of some features our software suite
> > is providing to our users. This actually means that we MUST provide
> > our users with a warning and a deprecation period to allow them to
> > adjust to these changes. Obviously, an accidental change of the Postgres
> > version does not follow such a policy in any way. So I see no other
> > way except to get back to Postgres 9.3.
> >
> >
> > On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky
> >  wrote:
> >>
> >> Hey Mike,
> >>
> >> Thanks for your input.
> >>
> >> > actually not.  if you replace your ARRAY columns with JSON
> >> > entirely,
> >>
> >> It still means we have to fix the code, i.e. replace ARRAY-specific queries
> >> with JSON ones around the code. ;)
> >>
> >> > there's already a mostly finished PR for SQLAlchemy support in
> >> > the queue.
> >>
> >> Does it mean SQLAlchemy will have one unified interface to make
> >> JSON queries? So we can use different backends if necessary?
> >>
> >> Thanks,
> >> - Igor
> >>
> >> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer 
> >> wrote:
> >> >
> >> >
> >> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> >> >> Hey Julien,
> >> >>
> >> >>>
> >> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >> >>
> >> >> I believe this blueprint is about the DB for the OpenStack cloud (we
> >> >> use Galera now), while here we're talking about the DB backend for
> >> >> Fuel itself. Fuel has a separate node (the so-called Fuel Master) and
> >> >> we use PostgreSQL now.
> >> >>
> >> >>> does that mean Fuel is only going to be able to run with
> >> >>> PostgreSQL?
> >> >>
> >> >> Unfortunately, we are already tied to PostgreSQL. For instance,
> >> >> we use PostgreSQL's ARRAY column type. Introducing a JSON column
> >> >> is one more way to tighten that knot.
> >> >
> >> > actually not.  if you replace your ARRAY columns with JSON
> >> > entirely, MySQL has JSON as well now:
> >> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> >> >
> >> > there's already a mostly finished PR for SQLAlchemy support in
> >> > the queue.
> >> >
> >> >
> >> >
> >> >>
> >> >> - Igor
> >> >>
> >> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou
> >> >>  wrote:
> >> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
> >> >>>
> >>  The things I want to notice are:
> >> 
> >>  * Currently we aren't tied up to PostgreSQL 9.3.
> >>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by
> >>  using a set of JSON operations.
> >> >>>
> >> >>> I'm curious and have just a small side question: does that
> >> >>> mean Fuel is
> >> >>> only going to be able to run with PostgreSQL?
> >> >>>
> >> >>> I also see
> >> >>>
> >> >>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> >> >>> maybe it's related?
> >> >>>
> >> >>> Thanks!
> >> >>>
> >> >>> --
> >> >>> Julien Danjou
> >> >>> // Free Software hacker
> >> >>> // https://julien.danjou.info
> >> >>
> >> >>
> >> >> __
> >> >> OpenStack Development Mailing List (not for usage questions)
> >> >> Unsubscribe:
> >> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>
> >> >
> >> >
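To make the point about ARRAY-specific queries concrete: switching a column from ARRAY to JSON changes not just the column type but also the predicates written against it. A rough sketch with an invented table and column names; the exact JSON operators available depend on the PostgreSQL version (e.g. `jsonb` containment only arrives in 9.4), so treat the SQL as hypothetical:

```python
# Membership test against a PostgreSQL ARRAY column...
array_query = "SELECT id FROM nodes WHERE 'compute' = ANY (roles)"

# ...has no drop-in equivalent once `roles` becomes a JSON document;
# one option on 9.3 is expanding the array server-side (hypothetical SQL):
json_query = (
    "SELECT id FROM nodes, json_array_elements(roles) AS role "
    "WHERE role::text = '\"compute\"'"
)

print(array_query)
print(json_query)
```

This is the kind of per-query rework Igor is pointing at: every ARRAY-specific predicate has to be found and rewritten, not just the schema migrated.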

Re: [openstack-dev] [astara][requirements] astara-appliance has requirements not in global-requirements

2015-12-15 Thread Adam Gandelman
Thanks for the heads up, Andreas. I've opened
https://bugs.launchpad.net/astara/+bug/1526527 and hope to resolve it in
the coming days.

Cheers
Adam


On Sun, Dec 13, 2015 at 3:26 AM, Andreas Jaeger  wrote:

> Astara team,
>
> The requirements proposal job complains about astara-appliance with:
> 'gunicorn' is not in global-requirements.txt
>
> Please get this requirement into global-requirements or remove it.
>
> Details:
>
> https://jenkins.openstack.org/job/propose-requirements-updates/602/consoleFull
> http://docs.openstack.org/developer/requirements/
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-15 Thread Clark Boylan
There is no ephemeral drive; you will have to use the root disk. Grabbing
a random VM in OVH and running a quick df on it, you should have about
60GB of free space for the job to use there.
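For reference, the same check can be scripted from inside a job without shelling out to `df`; a small sketch using only the standard library:

```python
import shutil

# Disk usage of the root filesystem, comparable to a quick `df -h /`.
usage = shutil.disk_usage("/")
free_gb = usage.free / 2**30
print("free space on /: %.1f GiB" % free_gb)
```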

Clark

On Tue, Dec 15, 2015, at 02:00 PM, Egor Guz wrote:
> Clark,
> 
> 
> What about ephemeral storage on OVH VMs? I see many storage-related errors
> (see full output below) these days.
> Basically it means Docker cannot create a storage device on the local drive
> 
> -- Logs begin at Mon 2015-12-14 06:40:09 UTC, end at Mon 2015-12-14
> 07:00:38 UTC. --
> Dec 14 06:45:50
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Stopped
> Docker Application Container Engine.
> Dec 14 06:47:54
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]:
> Starting Docker Application Container Engine...
> Dec 14 06:48:00
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]:
> Warning: '-d' is deprecated, it will be removed soon. See usage.
> Dec 14 06:48:00
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]:
> time="2015-12-14T06:48:00Z" level=warning msg="please use 'docker daemon'
> instead."
> Dec 14 06:48:03
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]:
> time="2015-12-14T06:48:03.447936206Z" level=info msg="Listening for HTTP
> on unix (/var/run/docker.sock)"
> Dec 14 06:48:06
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]:
> time="2015-12-14T06:48:06.280086735Z" level=fatal msg="Error starting
> daemon: error initializing graphdriver: Non existing device
> docker-docker--pool"
> Dec 14 06:48:06
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]:
> docker.service: main process exited, code=exited, status=1/FAILURE
> Dec 14 06:48:06
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Failed
> to start Docker Application Container Engine.
> Dec 14 06:48:06
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Unit
> docker.service entered failed state.
> Dec 14 06:48:06
> 
> te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]:
> docker.service failed.
> 
> 
> http://logs.openstack.org/58/251158/3/check/gate-functional-dsvm-magnum-k8s/5ed0e01/logs/bay-nodes/worker-test_replication_controller_apis-172.24.5.11/docker.txt.gz
> 
> 
> —
> Egor
> 
> 
> 
> 
> On 12/13/15, 10:51, "Clark Boylan"  wrote:
> 
> >On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> >> Hi,
> >> 
> >> As Kai Qiang mentioned, magnum gate recently had a bunch of random
> >> failures, which occurred on creating a nova instance with 2G of RAM.
> >> According to the error message, it seems that the hypervisor tried to
> >> allocate memory to the nova instance but couldn’t find enough free memory
> >> in the host. However, by adding a few “nova hypervisor-show XX” before,
> >> during, and right after the test, it showed that the host has 6G of free
> >> RAM, which is far more than 2G. Here is a snapshot of the output [1]. You
> >> can find the full log here [2].
> >If you look at the dstat log
> 

Re: [openstack-dev] [Neutron] Team meeting this Tuesday at 1400UTC

2015-12-15 Thread Armando M.
On 15 December 2015 at 04:22, Neil Jerram 
wrote:

> On 14/12/15 20:37, Armando M. wrote:
> > Hi neutrinos,
> >
> > A kind reminder for this week's meeting.
> >
> > For this week we'll continue with the post-milestone format: we'll
> > continue talking about blueprints/specs and RFEs. We'll be brief on
> > announcements and bugs, and skip the other sections, docs and open
> > agenda. More details on [1].
> >
> > Also, please join me in wishing good riddance to the 'Apologies for
> > absence' section. We don't call the roll during the meeting, as
> > attendance is not required, even though strongly encouraged.
> >
> > Cheers,
> > Armando
> >
> > [1] https://wiki.openstack.org/wiki/Network/Meetings
> > 
>
> I'm afraid I can't make the meeting today, but will check the logs
> afterwards.
>
> One thing that (I think) is not on the agenda is next steps from the
> recent stadium discussions.  Perhaps that should be added for next
> week's meeting?
>
>
We'll resume the usual agenda from next week. In the meantime I encourage
blueprint approvers and assignees to stay in sync regularly, update the
blueprint whiteboards, and make sure more progress is made between now and
the next checkpoint (happening post M-2, in the second half of January).

Happy hacking!

Cheers,
Armando


> Regards,
> Neil
>
>


[openstack-dev] [OSSN 0061] Glance image signature uses an insecure hash algorithm (MD5)

2015-12-15 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Glance image signature uses an insecure hash algorithm (MD5)
- ---

### Summary ###
During the Liberty release the Glance project added a feature that
supports verifying images by their signature. There is a flaw in the
implementation that degrades verification by using the weak MD5
algorithm.

### Affected Services / Software ###
Glance, Liberty

### Discussion ###
A signature algorithm is typically created by hashing data and then
encrypting that hash in some way. In the case of the new Glance feature
the signature algorithm does not hash the image to be verified. It
rehashes the existing MD5 checksum that is used to locally verify the
integrity of image data stored in Glance.

The Glance image signature algorithm uses configurable hash algorithms.
No matter which algorithm is used, the overall security of the scheme
is degraded to that of MD5 because, instead of being applied to the
image data, the hash is applied only to the MD5 checksum that already
exists in Glance.

The image signature algorithm is a relatively new feature, introduced in
the Liberty release.
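To illustrate the flaw in miniature (a toy sketch, not the actual Glance code): no matter how strong the outer hash is, hashing only the MD5 checksum means any pair of images that collide under MD5 verifies identically.

```python
import hashlib

image_data = b"example image contents"

# Glance already stores an MD5 checksum of the image data.
md5_checksum = hashlib.md5(image_data).hexdigest()

# Flawed construction: the configurable "strong" hash is applied to the
# MD5 checksum string rather than to the image data itself.
flawed_digest = hashlib.sha256(md5_checksum.encode("ascii")).hexdigest()

# Sound construction: hash the image data directly.
sound_digest = hashlib.sha256(image_data).hexdigest()

# Two images colliding under MD5 would share md5_checksum, and therefore
# share flawed_digest, even though their sound digests would differ.
```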

### Recommended Actions ###
Users concerned with image security should be aware that the current
Glance signature algorithm is not secure by today's cryptographic
standards.

A specification for a fix has been proposed by the Glance development
team and is targeted for the Mitaka release.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0061
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1516031
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Glance Spec for fix : https://review.openstack.org/#/c/252462/
CVE : CVE-2015-8234
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEbBAEBCAAGBQJWcJc8AAoJEJa+6E7Ri+EVemwH9iG8NkbuIz6e6E3RH7mnIdKm
skmUfxhsHOTN2n+2lQlVtmNoNHYhTDqMmKiQSLuzq1AcMvF/EVuZU36GK/8VBHU5
q3YXqmvXZEM5YqnXNl3xLHlKCUWwgD5SpzGhR9lFrEmFlnT1ZLHwB+FG3JKzsMdm
jOukVVNjiFB6/NhmQQ1FN2pjd3Vkt7lzE1ydvTLpFk+aqx/SDGeW5lnzGxFTOVzr
peTwDdtwGa/fgxsboViT0OprkItmsSuCrXBarKPxgnqTFfhD2bcZ9y5j/7s9II+y
o84A+w/YAJwe8jJgvGChFCyp/7LeV2US8GoxnDsM5OyummMN05DBA06n4FB+9A==
=kl5s
-END PGP SIGNATURE-



Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-15 Thread Clark Boylan
On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > Hi,
> > 
> > As Kai Qiang mentioned, magnum gate recently had a bunch of random
> > failures, which occurred on creating a nova instance with 2G of RAM.
> > According to the error message, it seems that the hypervisor tried to
> > allocate memory to the nova instance but couldn’t find enough free memory
> > in the host. However, by adding a few “nova hypervisor-show XX” before,
> > during, and right after the test, it showed that the host has 6G of free
> > RAM, which is far more than 2G. Here is a snapshot of the output [1]. You
> > can find the full log here [2].
> If you look at the dstat log
> http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/screen-dstat.txt.gz
> the host has nowhere near 6GB free memory and less than 2GB. I think you
> actually are just running out of memory.
> > 
> > Another observation is that most of the failure happened on a node with
> > name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
> > at http://logstash.openstack.org/ ). It seems that the jobs will be fine
> > if they are allocated to a node other than “ovh”.
> I have just done a quick spot check of the total memory on
> devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free -m`
> and the results are 7480, 7732, and 6976 megabytes respectively. Despite
> using 8GB flavors in each case there is variation and OVH comes in on
> the low end for some reason. I am guessing that you fail here more often
> because the other hosts give you just enough extra memory to boot these
> VMs.
To follow up on this we seem to have tracked this down to how the linux
kernel restricts memory at boot when you don't have a contiguous chunk
of system memory. We have worked around this by increasing the memory
restriction to 9023M at boot, which gets OVH in line with Rackspace and
slightly increases available memory on HPCloud (because it actually has
more of it).

You should see this fix in action after image builds complete tomorrow
(they start at 1400UTC ish).
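The per-provider spot checks above (`free -m`) can also be reproduced from Python; a quick sketch using POSIX sysconf values (assumes a POSIX platform that exposes these names):

```python
import os

# Total physical memory visible to the OS, roughly what `free -m`
# reports in its "total" column.
page_size = os.sysconf("SC_PAGE_SIZE")
phys_pages = os.sysconf("SC_PHYS_PAGES")
total_mb = page_size * phys_pages // (1024 * 1024)
print("total memory: %d MB" % total_mb)
```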
> 
> We will have to look into why OVH has less memory despite using flavors
> that should be roughly equivalent.
> > 
> > Any hints to debug this issue further? Suggestions are greatly
> > appreciated.
> > 
> > [1] http://paste.openstack.org/show/481746/
> > [2]
> > http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-magnum-swarm/56d79c3/console.html
> > [3] https://review.openstack.org/#/c/254370/2/queries/1521237.yaml




[openstack-dev] [searchlight] Weekly IRC meeting cancelled December 17th & 24th

2015-12-15 Thread Tripp, Travis S
We will not be holding our weekly IRC meeting this week due to the
busy holiday season with many people out.  Our regular meeting will
resume Thursday, December 31st.

As always, you can find the meeting schedule and agenda here:
http://eavesdrop.openstack.org/#Searchlight_Team_Meeting


Thanks,
Travis


Re: [openstack-dev] [Neutron][Tricircle]The process for adding networking-tricircle

2015-12-15 Thread joehuang
Hi, Ihar,

Is there any sub-project under the Neutron stadium dealing with cross-Neutron
L2/L3 networking? If there is none, why not introduce a new one?

More information about what the plugin wants to do: 
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit?usp=sharing

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
Sent: Tuesday, December 15, 2015 10:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][Tricircle]The process for adding 
networking-tricircle

Zhipeng Huang  wrote:

> Hi Neutrinos,
>
> We the Tricircle team want to have a neutron-ovn like agent in Neutron 
> for our networking management.

Before we go into discussing tech details on how new subprojects are 
introduced, let me ask one question: have you actually considered integrating 
your needs into existing projects instead of introducing another one? Why isn’t 
it enough?

I feel the stadium has unintentionally started to encourage forks instead of
collaboration on a common code base.

Ihar



Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-15 Thread Kai Qiang Wu
Thanks Clark and the infra guys for working around that.
We will keep track of it and see whether the issue disappears.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Clark Boylan 
To: openstack-dev@lists.openstack.org
Date:   16/12/2015 06:42 am
Subject:Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite
often for "Cannot set up guest memory 'pc.ram': Cannot allocate
memory"



On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > Hi,
> >
> > As Kai Qiang mentioned, magnum gate recently had a bunch of random
> > failures, which occurred on creating a nova instance with 2G of RAM.
> > According to the error message, it seems that the hypervisor tried to
> > allocate memory to the nova instance but couldn’t find enough free
memory
> > in the host. However, by adding a few “nova hypervisor-show XX” before,
> > during, and right after the test, it showed that the host has 6G of
free
> > RAM, which is far more than 2G. Here is a snapshot of the output [1].
You
> > can find the full log here [2].
> If you look at the dstat log
>
http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/screen-dstat.txt.gz

> the host has nowhere near 6GB free memory and less than 2GB. I think you
> actually are just running out of memory.
> >
> > Another observation is that most of the failure happened on a node with
> > name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
> > at http://logstash.openstack.org/ ). It seems that the jobs will be
fine
> > if they are allocated to a node other than “ovh”.
> I have just done a quick spot check of the total memory on
> devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free -m`
> and the results are 7480, 7732, and 6976 megabytes respectively. Despite
> using 8GB flavors in each case there is variation and OVH comes in on
> the low end for some reason. I am guessing that you fail here more often
> because the other hosts give you just enough extra memory to boot these
> VMs.
To follow up on this we seem to have tracked this down to how the linux
kernel restricts memory at boot when you don't have a contiguous chunk
of system memory. We have worked around this by increasing the memory
restriction to 9023M at boot, which gets OVH in line with Rackspace and
slightly increases available memory on HPCloud (because it actually has
more of it).

You should see this fix in action after image builds complete tomorrow
(they start at 1400UTC ish).
>
> We will have to look into why OVH has less memory despite using flavors
> that should be roughly equivalent.
> >
> > Any hints to debug this issue further? Suggestions are greatly
> > appreciated.
> >
> > [1] http://paste.openstack.org/show/481746/
> > [2]
> >
http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-magnum-swarm/56d79c3/console.html

> > [3] https://review.openstack.org/#/c/254370/2/queries/1521237.yaml




[openstack-dev] [Fuel][Solar] SolarDB/ConfigDB place in Fuel

2015-12-15 Thread Dmitriy Shulyak
Hello folks,

This topic is about the configuration storage that will connect data sources
(nailgun/bareon/others) and orchestration. Right now we are developing
two projects that will overlap a bit.

I understand there is not enough context to dive into this thread right
away, but I would appreciate it if the people who participated in the design
added their opinions/clarifications on this matter.

Main disagreements
---
1. configdb should be passive; writing to configdb is someone else's
responsibility
+ simpler implementation, easier to use
- we will need another component that will do writing, or split this
responsibility somehow

2. can be used without other solar components
+ clear interface between solar components and the storage layer
- additional work required to design/refactor communication layer between
modules in solar
- some data will be duplicated between solar orchestrator layer and configdb

3. templates for output
technical detail, can be added on top of solardb if required

Similar functionality
--
1. Hierarchical storage
2. Versioning of changes
3. Possibility to overwrite config values
4. Schema for inputs
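Item 3 in the list above (overriding config values) is essentially layered lookup; a minimal illustration with the standard library (invented layer names and keys, not actual configdb/solar code):

```python
from collections import ChainMap

# Hypothetical layers, most specific first: node overrides role,
# role overrides cluster-wide defaults.
cluster_defaults = {"debug": False, "ntp_server": "pool.ntp.org"}
role_overrides = {"debug": True}
node_overrides = {}

# ChainMap searches the maps left to right, so the first layer that
# defines a key wins.
effective = ChainMap(node_overrides, role_overrides, cluster_defaults)

print(effective["debug"])       # True, taken from the role layer
print(effective["ntp_server"])  # pool.ntp.org, from cluster defaults
```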

Overall it seems that we share the same goals for both services;
the difference lies in organizational and technical implementation details.

Possible solutions

1. develop configdb and solar with duplicated functionality
- at least 2 additional components will be added to the picture,
one is configdb, another one will need to sync data between configdb and
solar
- in some cases data in solar and configdb will be 100% duplicated
- different teams will work on same functionality
- integration of additional component for fuel will require integration with
configdb and with solar
+ configdb will be independent from solar orchestration/other components

2. make a service out of solardb, aligned with configdb use cases
+ solardb will be independent from solar orchestration/other solar
components
+ integration of fuel component will be easier than in 1st version
+ clarity about components responsibility and new architecture
- redesign/refactoring communication between components in solar

3. do not use configdb/no extraction of solardb
- inproc communication, which can lead to coupled components (not the case
currently)
+ faster implementation (no major changes required for integration with
fuel)
+ clarity about components responsibility and new architecture

Summary
-
For solar it makes no difference where the data comes from (configdb or
data sources), but in the overall fuel architecture it will lead to a
significant complexity increase.
It would be best to follow the 2nd path, because in the long term we don't want
tightly coupled components, but in the nearest future we need to concentrate
on:
- integration with fuel
- implementing policy engine
- polishing solar components
This is why I am not sure that we can spend time on the 2nd path right now,
or even before 9.0.


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Alexey Shtokolov
Dmitry,

Thank you for this document!
Please move it to https://etherpad.openstack.org to make it accessible.

Best regards,
Alexey Shtokolov

2015-12-16 1:38 GMT+03:00 Dmitry Teselkin :

> Hello,
>
> I made an attempt to gather all valuable points 'for' and 'against'
> 9.2.x in one document [1]. Please take a look on it, I also put some
> comments there to keep everything in one place. I believe this can help
> us to make deliberated decision.
>
> Please add more pros / cons there as I don't pretend to make a
> full picture at the first attempt.
>
> Just in case, I'd prefer to 'downgrade' to 9.2 :)
>
> [1] https://etherpad.mirantis.net/p/7ZUruwlwJM
>
> On Tue, 15 Dec 2015 20:47:41 +0200
> Igor Kalnitsky  wrote:
>
> > FYI: so far (according to poll [1]) we have
> >
> > * 11 votes for keeping 9.2
> > * 4 votes for restoring 9.3
> >
> > [1]
> >
> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
> >
> > On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin
> >  wrote:
> > > Folks
> > >
> > > Let me add my 2c here.
> > >
> > > I am for using Postgres 9.3. Here is an additional argument to the
> > > ones provided by Artem, Aleksandra and others.
> > >
> > > Fuel is being sometimes highly customized by our users for their
> > > specific needs. It has been Postgres 9.3 for a while and they might
> > > have as well gotten used to it and assumed by default that this
> > > would not change. So some of their respective features they are
> > > developing for their own sake may depend on Postgres 9.3 and we
> > > will never be able to tell the fraction of such use cases.
> > > Moreover, downgrading DBMS version of Fuel should be inevitably
> > > considered as a 'deprecation' of some features our software suite
> > > is providing to our users. This actually means that we MUST provide
> > > our users with a warning and deprecation period to allow them to
> > > adjust to these changes. Obviously, accidental change of Postgres
> > > version does not follow such a policy in any way. So I see no other
> > > ways except for getting back to Postgres 9.3.
> > >
> > >
> > > On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky
> > >  wrote:
> > >>
> > >> Hey Mike,
> > >>
> > >> Thanks for your input.
> > >>
> > >> > actually not.  if you replace your ARRAY columns with JSON
> > >> > entirely,
> > >>
> > >> It still requires fixing the code, i.e. changing ARRAY-specific queries
> > >> to JSON ones around the code. ;)
> > >>
> > >> > there's already a mostly finished PR for SQLAlchemy support in
> > >> > the queue.
> > >>
> > >> Does it mean SQLAlchemy will have one unified interface to make
> > >> JSON queries? So we can use different backends if necessary?
> > >>
> > >> Thanks,
> > >> - Igor
> > >>
> > >> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer 
> > >> wrote:
> > >> >
> > >> >
> > >> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> > >> >> Hey Julien,
> > >> >>
> > >> >>>
> > >> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> > >> >>
> > >> >> I believe this blueprint is about DB for OpenStack cloud (we use
> > >> >> Galera now), while here we're talking about DB backend for Fuel
> > >> >> itself. Fuel has a separate node (so called Fuel Master) and we
> > >> >> use PostgreSQL now.
> > >> >>
> > >> >>> does that mean Fuel is only going to be able to run with
> > >> >>> PostgreSQL?
> > >> >>
> > >> >> Unfortunately we are already tied to PostgreSQL. For instance,
> > >> >> we use PostgreSQL's ARRAY column type. Introducing a JSON column
> > >> >> is one more way to tighten the knot.
> > >> >
> > >> > actually not.  if you replace your ARRAY columns with JSON
> > >> > entirely, MySQL has JSON as well now:
> > >> > https://dev.mysql.com/doc/refman/5.7/en/json.html
> > >> >
> > >> > there's already a mostly finished PR for SQLAlchemy support in
> > >> > the queue.
> > >> >
> > >> >
> > >> >
> > >> >>
> > >> >> - Igor
> > >> >>
> > >> >> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou
> > >> >>  wrote:
> > >> >>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
> > >> >>>
> > >>  The things I want to notice are:
> > >> 
> > >>  * Currently we aren't tied up to PostgreSQL 9.3.
> > >>  * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by
> > >>  using a set of JSON operations.
> > >> >>>
> > >> >>> I'm curious and have just a small side question: does that
> > >> >>> mean Fuel is
> > >> >>> only going to be able to run with PostgreSQL?
> > >> >>>
> > >> >>> I also see
> > >> >>>
> > >> >>>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> > >> >>> maybe it's related?
> > >> >>>
> > >> >>> Thanks!
> > >> >>>
> > >> >>> --
> > >> >>> Julien Danjou
> > >> >>> // Free Software hacker
> > >> >>> // https://julien.danjou.info
> > >> >>
> > >> >>
> > >> >>
> 

Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Soichi Shigeta



   o) An idea to fix:

  1. Set "taas" stamp(*) to taas flows.
  2. Modify the cleanup logic in ovs-agent not to delete entries
 stamped as "taas".

  * Maybe a static string.
    If we need to use a string that is generated dynamically
    (e.g. a uuid), an API to interact with the ovs-agent is required.



  Last week I proposed setting a static string (e.g. "taas") as the cookie
  of flows created by the taas agent.

  But I found that the value of a cookie should not be a string,
  but an integer.

  At line 187 in 
"neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py":

  self.agent_uuid_stamp = uuid.uuid4().int & UINT64_BITMASK

  If we set an integer value as the cookie, coordination
  (reservation of ranges) is required to avoid cookie conflicts with
  other neutron sub-projects.
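For reference, the agent stamp referenced above is just the low 64 bits of a random UUID, which is also why a freshly generated stamp will virtually never collide with cookie 0x0. A sketch (UINT64_BITMASK is re-declared here; in the agent module it is an existing constant):

```python
import uuid

UINT64_BITMASK = (1 << 64) - 1

# Same construction as the ovs agent: a random, per-run 64-bit stamp
# taken from the low bits of a version-4 UUID.
agent_uuid_stamp = uuid.uuid4().int & UINT64_BITMASK

print(hex(agent_uuid_stamp))
```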

  As an alternative (*** short term ***) solution, my idea is:
  modify the cleanup logic in the ovs agent not to delete flows whose
  "cookie = 0x0".
  Because old flows created by the ovs agent carry an (old) stamp, "cookie =
  0x0" means the flow was created by something other than the ovs agent.

  # But, this idea has a disadvantage:
    If there are flows which were created by an older version of the ovs
    agent, they cannot be cleaned up.

---
 Soichi Shigeta






Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Michał Dulko
On 12/15/2015 04:08 PM, Ryan Rossiter wrote:
> Thanks for the review Michal! As for the bp/bug report, there’s four options:
>
> 1. Tack the work on as part of bp cinder-objects
> 2. Make a new blueprint (bp cinder-object-fields)
> 3. Open a bug to handle all changes for enums/fields
> 4. Open a bug for each changed enum/field
>
> Personally, I’m partial to #1, but #2 is better if you want to track this 
> work separately from the other objects work. I don’t think we should go with 
> bug reports because #3 will be a lot of Partial-Bug and #4 will be kinda 
> spammy. I don’t know what the spec process is in Cinder compared to Nova, but 
> this is nowhere near enough work to be spec-worthy.
>
> If this is something you or others think should be discussed in a meeting, I 
> can tack it on to the agenda for tomorrow.

The bp/cinder-object topic is a little crowded with patches and it tracks
mostly rolling-upgrades-related stuff. This is more of a refactoring
than an essential ovo change, so a simple specless bp/cinder-object-fields
is totally fine by me.
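As background for readers outside the thread, the enum/field pairs being discussed follow the oslo.versionedobjects pattern sketched below. This is a simplified stand-in with no oslo import and invented status values, not Cinder's actual code; the real base classes are `fields.Enum` and `fields.BaseEnumField`:

```python
class Enum(object):
    """Minimal stand-in for oslo.versionedobjects' Enum field type."""

    def __init__(self, valid_values):
        self.valid_values = tuple(valid_values)

    def coerce(self, value):
        # Reject anything outside the declared set, as the real field does.
        if value not in self.valid_values:
            raise ValueError("%r is not a valid value" % (value,))
        return value


class VolumeStatus(Enum):
    ALL = ("available", "in-use", "error", "deleting")

    def __init__(self):
        super(VolumeStatus, self).__init__(valid_values=VolumeStatus.ALL)


status = VolumeStatus()
print(status.coerce("in-use"))  # passes validation
```

The point of moving such definitions into typed fields is exactly this validation: a versioned object can no longer be saved with a status string outside the declared set.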



Re: [openstack-dev] [TripleO] RFC: profile matching

2015-12-15 Thread Dmitry Tantsur

On 11/09/2015 03:51 PM, Dmitry Tantsur wrote:

Hi folks!

I spent some time thinking about bringing profile matching back in, so
I'd like to get your comments on the following near-future plan.

First, the scope of the problem. What we do is essentially a kind of
capability discovery. We'll help the nova scheduler do the right
thing by assigning capabilities like "suits for compute", "suits for
controller", etc. The most obvious path is to use inspector to assign
capabilities like "profile=1" and then filter nodes by them.

Special care, however, is needed when some of the nodes match 2 or
more profiles. E.g. if all 4 nodes match "compute" and only 1 also
matches "controller", nova can select this one node for the
"compute" flavor, and then complain that it does not have enough hosts
for "controller".

We also want to conduct some sanity checks before even calling
heat/nova, to avoid cryptic "no valid host found" errors.

(1) Inspector part

During the Liberty cycle we landed a whole bunch of APIs in
inspector that allow us to define rules on introspection data. The plan
is to have rules saying, for example:

  rule 1: if memory_mb >= 8192, add capability "compute_profile=1"
  rule 2: if local_gb >= 100, add capability "controller_profile=1"

Note that these rules are defined via inspector API using a JSON-based
DSL [1].
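A rule like the first one above, expressed as the JSON payload posted to the inspector rules API, might look roughly like this. The operator and field spellings are from memory of the Liberty-era DSL and may differ by version, so treat it as a sketch rather than a copy-paste example:

```python
import json

# Hypothetical introspection rule: tag nodes with >= 8 GiB RAM
# as compute candidates.
rule = {
    "description": "Nodes with >= 8 GiB RAM are compute candidates",
    "conditions": [
        {"op": "ge", "field": "data://memory_mb", "value": 8192},
    ],
    "actions": [
        {"action": "set-capability", "name": "compute_profile", "value": "1"},
    ],
}

print(json.dumps(rule, indent=2))
```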

As you see, one node can receive 0, 1 or many such capabilities. So we
need the next step to make a final decision, based on how many nodes we
need of every profile.

(2) Modifications of `overcloud deploy` command: assigning profiles

A new argument, --assign-profiles, will be added. If it's provided,
tripleoclient will fetch all ironic nodes and try to ensure that we
have enough nodes with all profiles.

Nodes with an existing "profile:xxx" capability are left as they are. For
nodes without a profile, it will look at the "xxx_profile" capabilities
discovered in the previous step. One of the possible profiles will be
chosen and assigned to the "profile" capability. The assignment stops as
soon as we have enough nodes of a flavor, as requested by the user.
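A minimal sketch of that assignment loop (all names and data shapes here are assumptions for illustration, not tripleoclient's actual code):

```python
def assign_profiles(nodes, wanted):
    """Assign a 'profile' capability to nodes until flavor demands are met.

    nodes:  list of dicts with a 'capabilities' dict, e.g.
            {'capabilities': {'compute_profile': '1'}}
    wanted: node counts per profile, e.g. {'controller': 1, 'compute': 2}.
    """
    have = {p: 0 for p in wanted}
    # Nodes with an explicit profile are left as they are.
    for node in nodes:
        profile = node["capabilities"].get("profile")
        if profile in have:
            have[profile] += 1
    # For the rest, pick one of the discovered "<profile>_profile" caps.
    for node in nodes:
        caps = node["capabilities"]
        if "profile" in caps:
            continue
        for profile in wanted:
            if caps.get(profile + "_profile") and have[profile] < wanted[profile]:
                caps["profile"] = profile  # stop once demand is satisfied
                have[profile] += 1
                break
    return have

nodes = [{"capabilities": {"compute_profile": "1", "controller_profile": "1"}},
         {"capabilities": {"compute_profile": "1"}},
         {"capabilities": {"compute_profile": "1"}}]
print(assign_profiles(nodes, {"controller": 1, "compute": 2}))
# → {'controller': 1, 'compute': 2}
```

Note the naive loop only gets the earlier compute/controller example right because "controller" is considered first; a real implementation would need to satisfy the scarcest profile first.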


Documentation update with the workflow I have in mind: 
http://docs-draft.openstack.org/67/257867/2/check/gate-tripleo-docs-docs/c938244//doc/build/html/advanced_deployment/profile_matching.html




(3) Modifications of `overcloud deploy` command: validation

To avoid 'no valid host found' errors from nova, the deploy command will
fetch all flavors involved and look at the "profile" capabilities. If
they are set for any flavors, it will check whether we have enough ironic
nodes with a given "profile:xxx" capability. This check will happen
after profile assignment, if --assign-profiles is used.
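That validation amounts to comparing demand against supply per profile. A rough sketch, with assumed data shapes:

```python
from collections import Counter

def validate(flavors, nodes):
    """Check per-profile node supply against flavor demand.

    flavors: {'flavor-name': {'profile': 'compute', 'count': 2}, ...}
    nodes:   list of dicts carrying the assigned 'profile' capability.
    Returns a list of human-readable error strings (empty if all is fine).
    """
    supply = Counter(n.get("profile") for n in nodes)
    errors = []
    for name, flavor in flavors.items():
        profile = flavor.get("profile")
        if profile and supply[profile] < flavor["count"]:
            errors.append("flavor %s needs %d '%s' nodes, only %d available"
                          % (name, flavor["count"], profile, supply[profile]))
    return errors

errs = validate({"compute": {"profile": "compute", "count": 2}},
                [{"profile": "compute"}])
print(errs)  # one clear error instead of nova's cryptic 'no valid host found'
```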

Please let me know what you think.

[1] https://github.com/openstack/ironic-inspector#introspection-rules



Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2015-12-15 Thread Rob Pothier (rpothier)

Hi Sergey,
I agree with your feeling; this is from the Heat wiki page:
"Heat also endeavours to provide compatibility with the AWS CloudFormation 
template format, so that many existing CloudFormation templates can be launched 
on OpenStack."

Note also that there was another review that attempted to implement this but stalled:
https://review.openstack.org/#/c/84468/

Rob

From: Sergey Kraynev
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, December 9, 2015 at 5:42 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Heat] Status of the Support Conditionals in Heat
templates

Hi Heaters,

At the last IRC meeting we had a question about the Support Conditionals spec [1].
A previous attempt at this stuff is here [2].
An example of the first PoC in Heat can be reviewed here [3].

As I understand it, we have not made any final decision about this work.
So I'd like to clarify the community's feelings about it. This clarification
may be done by answering two simple questions:
 - Why do we want to implement it?
 - Why do we NOT want to implement it?

My personal feeling is:
 - Why do we want to implement it?
   * A lot of users want to have similar stuff.
   * It's already present in AWS, so it would be good to have this feature in
Heat too.
 - Why do we NOT want to implement it?
   * It can be solved with Jinja [4]. However, I don't think that's a really
important reason to block this work.
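For context, the Jinja workaround mentioned above means rendering the template before handing it to Heat. A tiny illustration (jinja2 is a third-party library; the template snippet and parameter names are invented):

```python
# Illustration only: pre-rendering a HOT-like template with Jinja
# conditionals before it ever reaches Heat.
from jinja2 import Template

hot_source = """
resources:
  server:
    type: OS::Nova::Server
{% if enable_volume %}
  volume:
    type: OS::Cinder::Volume
    properties:
      size: {{ volume_size }}
{% endif %}
"""

rendered = Template(hot_source).render(enable_volume=True, volume_size=10)
print(rendered)
```

The downside, of course, is that the conditional logic then lives outside the template format Heat actually sees, which is part of why native support keeps coming up.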

Please share your ideas about the two questions above.
That should allow us to eventually decide whether we implement it or not.

[1] https://review.openstack.org/#/c/245042/
[2] https://review.openstack.org/#/c/153771/
[3] https://review.openstack.org/#/c/221648/1
[4] http://jinja.pocoo.org/
--
Regards,
Sergey.


[openstack-dev] [Fuel][Ubuntu bootstrap] WebUI notification

2015-12-15 Thread Artur Svechnikov
Hi folks,
Recently, a special notification about an absent bootstrap image was introduced.

Currently this notification is sent from fuel-bootstrap-cli. This means the
error message will not be sent when a failure occurs before the first build
(like in [1]). I think it would be better to set the error message in the WebUI
by default through fixtures and then remove it if the first build is successful.

Please share your opinions about this issue.

[1] https://bugs.launchpad.net/fuel/+bug/1526351

Best regards,
Svechnikov Artur


[openstack-dev] [OSSN 0062] Potential reuse of revoked Identity tokens

2015-12-15 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Potential reuse of revoked Identity tokens
- ---

### Summary ###
An authorization token issued by the Identity service can be revoked,
which is designed to immediately make that token invalid for future use.
When the PKI or PKIZ token providers are used, it is possible for an
attacker to manipulate the token contents of a revoked token such that
the token will still be considered to be valid.  This can allow
unauthorized access to cloud resources if a revoked token is intercepted
by an attacker.

### Affected Services / Software ###
Keystone, Icehouse, Juno, Kilo, Liberty

### Discussion ###
Token revocation is used in OpenStack to invalidate a token for further
use.  This token revocation takes place automatically in certain
situations, such as when a user logs out of the Dashboard.  If a revoked
token is obtained by another party, it should no longer be possible to
use it to perform any actions within the cloud.  Unfortunately, this is
not the case when the PKI or PKIZ token providers are used.

When a PKI or PKIZ token is validated, the Identity service checks it
by searching for a revocation by the entire token.  It is possible for
an attacker to manipulate portions of an intercepted PKI or PKIZ token
that are not cryptographically protected, which will cause the
revocation check to improperly consider the token to be valid.
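The failure mode can be sketched abstractly: if the revocation list is keyed on a digest of the entire token blob, flipping any byte that is not covered by the signature yields a token that still verifies but no longer matches the revocation entry. This is pure illustration, not keystone's actual code:

```python
import hashlib

def fingerprint(token_bytes):
    """Revocation key: a digest of the entire token blob."""
    return hashlib.sha256(token_bytes).hexdigest()

# Server side: the whole revoked token is hashed into the revocation list.
token = b"HEADER.PAYLOAD.SIGNATURE   "  # trailing bytes: not signed
revoked = {fingerprint(token)}

def is_revoked(token_bytes):
    return fingerprint(token_bytes) in revoked

# Attacker: mutate only bytes outside the cryptographically protected part.
tampered = b"HEADER.PAYLOAD.SIGNATURE  \t"

assert is_revoked(token)         # the original token is caught
assert not is_revoked(tampered)  # the tampered copy slips past the check
```

Revoking by a stable identifier inside the signed portion (rather than by the whole blob) would close this gap.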

### Recommended Actions ###
We recommend that you do not use the PKI or PKIZ token providers.  The
PKI and PKIZ token providers do not offer any significant benefit over
other token providers such as the UUID or Fernet.

If you are using the PKI or PKIZ token providers, it is recommended that
you switch to using another supported token provider such as the UUID
provider.  This issue might be fixed in a future update of the PKI and
PKIZ token providers in the Identity service.

To check what token provider you are using, you must look in the
'keystone.conf' file for your Identity service.  An example is provided
below:

---- begin keystone.conf sample snippet ----
[token]
#provider = keystone.token.providers.pki.Provider
#provider = keystone.token.providers.pkiz.Provider
provider = keystone.token.providers.uuid.Provider
---- end keystone.conf sample snippet ----

In the Liberty release of the Identity service, the token provider
configuration is different from previous OpenStack releases.  An
example from the Liberty release is provided below:

---- begin keystone.conf sample snippet ----
[token]
#provider = pki
#provider = pkiz
provider = uuid
---- end keystone.conf sample snippet ----

These configuration snippets are using the UUID token provider.  If you
are using any of the commented out settings from these examples, your
cloud is vulnerable to this issue and you should switch to a different
token provider.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0062
Original LaunchPad Bug : https://bugs.launchpad.net/keystone/+bug/1490804
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
CVE: CVE-2015-7546
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJWcMVpAAoJEJa+6E7Ri+EVhSIIAKolZPY2bYBwtv1ORoWtOCS0
isXHF3Qpp81NCqmtF7m0CQaEKNBDTQSWxDtZ27jx8tu6ORRdrvktw7Nj2BC0blry
v+DwLh+yfrVMH/I+ynXE82tCYllW3t+1KleQvI2ivebQJrw/AfdfHKaN5D4pI/x9
GVBLj2O/OuZ/aC3dhdE7XvXzrHpCXrXVFMsg2DlZeFS0cC85xGowAZcsCBGMxe3o
ffypCaaT1mE2NONtWbQjfnaxBvlrk+4gLq6ztBxKdd8tscmPtMRtDPFXH0A6NZiM
VVGYGtWgKcUaD7uBkmY42KFd15dgi3fStiL9syFErSE6cKcfdirY6UL+30Gj6uM=
=Zplx
-END PGP SIGNATURE-



Re: [openstack-dev] Regarding Designate install through Openstack-Ansible

2015-12-15 Thread Sharma Swati6
 Hi Major, Jean, Jesse and Kevin,

I have added some part of Designate code and uploaded it on 
https://github.com/sharmaswati6/designate_files

Could you please review this and help me answer the following questions:

- Is there some specific location for the server-side code of all OpenStack
components? And will I be downloading the actual Designate git code to that
same location?
- Is there some specific file where I have to give the reference for "tasks:"
and "handlers:", so that they can be called via roles?
- To create the Designate MySQL database, is there a reference to be given
somewhere?
- How are the hooks (setup details) of a new component associated with it in
OpenStack-Ansible? E.g., the setup details for Designate at
http://git.openstack.org/cgit/openstack/designate/tree/setup.cfg?wb48617274=B56AA8FF
should map to which file in the OpenStack-Ansible structure?

Thanks in advance.
Regards,
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 Gurgaon - 122 004,Haryana
 India
 Cell:- +91-9717238784
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 

-Sharma Swati6/DEL/TCS wrote: -
To: ma...@mhtx.net, jean-phili...@evrard.me, jesse.pretor...@rackspace.co.uk, 
kevin.car...@rackspace.com
From: Sharma Swati6/DEL/TCS
Date: 12/08/2015 01:56PM
Cc: openstack-dev@lists.openstack.org, Partha Datta/DEL/TCS@TCS, 
pandey.pree...@tcsin.com
Subject: Regarding Designate install through Openstack-Ansible

 Hi Major, Jean, Jesse and Kevin,

Hope you are all doing well.

I have been interacting with you lately on openstack mailing lists and IRC 
chats regarding Designate component inclusion in Openstack-Ansible, so that its 
deployment can be made similar to other components.

As recommended, I have opened a spec also at : 
https://review.openstack.org/#/c/254161/ and uploaded the sample designate.yml 
file at https://github.com/prpandey26/Designate/blob/master/designate.yml#L3.

To proceed with the configuration and role setup, I have the following queries:

- I believe that for the initial setup, only conf.d and env.d need to be
altered. In env.d I edited the designate.yml file; then in conf.d, what host
changes exactly do I need to make for the Designate component?
- Jean suggested that "after making changes in env.d and conf.d, ansible will
create the new entries for your component". Do I have to run anything for
this? At what location will the new entries be created? Are the roles for
Designate then created automatically?
- As a next step, I am planning to add role directories for the Designate
component: an 'os-designate.yml' file in '/opt/openstack-ansible/playbooks'
and a separate roles directory for 'os-designate' at
'/opt/openstack-ansible/playbooks/roles'. Can you please let me know if this
has to be created by us, or will env.d and conf.d create it directly?
- I have not seen any document yet on extra containers to be added to
openstack-ansible; I have only found specs created for ironic, trove, etc.

Hence, any help from you regarding the steps in sequence will be highly 
appreciated.


Thanks & Regards
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 
 
 

-Major Hayden  wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 

From: Major Hayden 
Date: 12/04/2015 06:53PM
Subject: Re: [openstack-dev] [openstack-ansible] Install Openstack-Ansible

On Fri, 2015-12-04 at 10:01 +0530, Sharma Swati6 wrote:
> To add a new container, we have followed the steps as mentioned in
> the extra_container.yml.example. Please find the sample designate.yml
> file attached and created as per the steps.

That's a good start.  However, you'll need to sign up[1] to be an
OpenStack developer (agreeing to some contracts and things) so you can
commit this into the upstream repositories.

Once you do that, you'll want to assemble a spec for the changes you
want to make.  A spec defines what you hope to accomplish and gives
everyone on the project a chance to review the steps you're planning to
take.  You can look at a spec I wrote[2] for ideas and then use the
openstack-ansible-specs template[3] to begin working on your spec.

A spec isn't busywork -- it shows the intention of what you're trying
to do and allows other people on the project to point out areas of
concern and improvement.

> To add the new roles in openstack-ansible repository, shall I create
> the directory looking at what is there for keystone or 

Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-12-15 Thread Takashi Yamamoto
hi,

On Fri, Dec 4, 2015 at 12:46 AM, Ihar Hrachyshka  wrote:
> Hi,
>
> Small update on the RFE. It was approved for Mitaka, assuming we come up
> with proper details upfront thru neutron-specs process.
>
> In the meantime, we have found more use cases for flow management among
> features in development: QoS DSCP, also the new OF based firewall driver.
> Both authors for those new features independently realized that agent does
> not currently play nice with flows set by external code due to its graceful
> restart behaviour when rules with unknown cookies are cleaned up. [The agent
> uses a random session uuid() to mark rules that belong to its current run.]
>
> Before I proceed, full disclosure: I know almost nothing about OpenFlow
> capabilities, so some pieces below may make no sense. I tried to come up
> with high level model first and then try to map it to available OF features.
> Please don’t hesitate to comment, I like to learn new stuff! ;)
>
> I am thinking lately on the use cases we collected so far. One common need
> for all features that were seen to be interested in proper integration with
> Open vSwitch agent is to be able to manage feature specific flows on br-int
> and br-tun. There are other things that projects may need, like patch ports,
> though I am still struggling with the question of whether it may be
> postponed or avoided for phase 1.

I suspect port management is mandatory for many of the use cases.

>
> There are several specific operation 'kinds' that we should cover for the
> RFE:
> - managing flows that modify frames in-place;
> - managing flows that redirect frames.
>
> There are some things that should be considered to make features cooperate
> with the agent and other extensions:
> - feature flows should have proper priorities based on their ‘kind’ (f.e.
> in-place modification probably go before redirections);
> - feature flows should survive flow reset that may be triggered by the
> agent;
> - feature flows should survive flow reset without data plane disruption
> (=they should support graceful restart:
> https://review.openstack.org/#/c/182920).
>
> With that in mind, I see the following high level design for the flow
> tables:
>
> - table 0 serves as a dispatcher for specific features;
> - each feature gets one or more tables, one per flow ‘kind’ needed;
> - for each feature table, a new flow entry is added to table 0 that would
> redirect to feature specific table; the rule will be triggered only if OF
> metadata is not updated inside the feature table (see the next bullet); the
> rule will have priority that is defined for the ‘kind’ of the operation that
> is implemented by the table it redirects to;
> -  each feature table will have default actions that will 1) mark OF
> metadata for the frame as processed by the feature; 2) redirect back to
> table 0;
> - all feature specific flow rules (except dispatcher rules) belong to
> feature tables;
>
> Now, the workflow for extensions that are interested in setting flows would
> be:
> - on initialize() call, extension defines feature tables it will need; it
> passes the name of the feature table and the ‘kind’ of the actions it will
> execute; with that, the following is initialized by the agent: 1) table 0
> dispatcher entry to redirect frames into feature table; the entry has the
> priority according to the ‘kind’ of the table; 2) the actual feature table
> with two default rules (update metadata and push back to table 0);
> - whenever extension needs to add a new flow rule, it passes the following
> into the agent: 1) table name; 2) flow specific parameters (actions,
> priority, ...)

"actions" here means OpenFlow actions?

Passing OpenFlow actions as parameters is not as simple as it might sound,
because they are complex objects, especially since we have two backends
(ovs-ofctl and the native of_interface).
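For reference, the table-0-as-dispatcher design quoted above could be modelled roughly like this. All class/method names, the priority values, and the flow representation are invented for illustration; real code would go through the agent's OVS bridge classes:

```python
# Sketch of the proposed model: table 0 dispatches to per-feature tables
# according to a priority derived from the table's 'kind'.
KIND_PRIORITY = {"modify": 200, "redirect": 100}  # assumed ordering

class FlowTableManager:
    def __init__(self):
        self.next_table = 1
        self.dispatcher = []  # table-0 entries: (priority, goto_table)
        self.tables = {}      # feature name -> feature table id
        self.flows = {}       # table id -> list of flow dicts

    def register_feature(self, name, kind):
        """Called from an extension's initialize(): allocate a feature table."""
        table_id = self.next_table
        self.next_table += 1
        self.tables[name] = table_id
        self.dispatcher.append((KIND_PRIORITY[kind], table_id))
        self.dispatcher.sort(reverse=True)  # higher-priority kinds first
        # Default rules: mark OF metadata as processed, resubmit to table 0.
        self.flows[table_id] = [{"priority": 0,
                                 "actions": "write_metadata,resubmit(,0)"}]
        return table_id

    def add_flow(self, name, **flow):
        """Extensions add flows by table name; the agent owns the cookie."""
        self.flows[self.tables[name]].append(flow)

mgr = FlowTableManager()
mgr.register_feature("qos-dscp", kind="modify")
mgr.register_feature("of-firewall", kind="redirect")
mgr.add_flow("qos-dscp", priority=10, match="ip", actions="mod_nw_tos:32")
print(mgr.dispatcher)  # the 'modify' table is consulted first
```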

>
> Since the agent will manage setting flows for extensions, it will be able to
> use the active agent cookie for all feature flows; next time the agent is
> restarted, it should be able to respin extension flows with no data plane
> disruption. [Note: we should make sure that on agent restart, we call to
> extensions *before* we clean up stale flow rules.]
>
> That design will hopefully allow us to abstract interaction with flows from
> extensions into management code inside the agent. It should guarantee
> extensions cooperate properly assuming they properly define their priorities
> thru ‘kinds’ of tables they have.
>
> It is also assumed that existing flow based features integrated into the
> agent (dvr? anti-spoofing?) will eventually move to the new flow table
> management model.
>
> I understand that the model does not reflect how do feature processing for
> existing OF based features in the agent. It may require some smart
> workarounds to allow non-disruptive migration to new flow table setup.
>
> It would be great to see the design bashed hard before I start to put it
> into spec format. Especially if 

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Aleksandra Fedorova
I'd support PostgreSQL 9.3 in 8.0.

* It is clear that the PostgreSQL downgrade wasn't planned and discussed
before Feature Freeze, so this change is accidental. We didn't
investigate all the possible consequences and changes required for the
switch.
* In Infra we run all our unit tests on PostgreSQL 9.3.
* For the Maintenance team it adds the burden of supporting yet another
version while they have PostgreSQL 9.3 anyway. So this change doesn't
reduce the number of supported versions; it rather adds one to the list.

If we'd like to switch to supported versions of upstream packages
in the future, we can consider using Software Collections, where
PostgreSQL 9.4 is available [1].

[1] https://www.softwarecollections.org/en/scls/rhscl/rh-postgresql94/

On Tue, Dec 15, 2015 at 3:23 PM, Artem Silenkov  wrote:
> Hello!
>
> I got another few points against downgrading.
>
> 1. PostgreSQL-9.2 will reach end-of-life at September 2017 according to [0].
> With high probability it means that we will have 9.2 version in centos repos
> when fuel9.0 arrives.
> It means that we will have to repackage it anyway just later a little bit.
>
> 2. 9.2 is slightly incompatible with 9.3, according to [1].
> Downgrading is not an easy task,pg_dump, pg_restore from different package
> versions can't work together.
>
> 3. Shared memory usage is different between 9.2 and 9.3 and this could bring
> some troubles and would require config file reworking.
>
>
> [0]: http://www.postgresql.org/support/versioning/
> [1]: http://www.postgresql.org/docs/9.3/static/release-9-3.html
>
> Offtopic sorry for this ->
> If we want to reduce number of package we maintain we should start from ruby
> Eg.
> Gems we use are deprecated like 5 years ago and bring to the table a lot of
> efforts repackaging unsupported software.
>
> Regards,
>
> Artem Silenkov
> ---
> MOS-Packaging
>
> On Tue, Dec 15, 2015 at 1:28 PM, Julien Danjou  wrote:
>>
>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>>
>> > The things I want to notice are:
>> >
>> > * Currently we aren't tied up to PostgreSQL 9.3.
>> > * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>> > set of JSON operations.
>>
>> I'm curious and have just a small side question: does that mean Fuel is
>> only going to be able to run with PostgreSQL?
>>
>> I also see
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>> maybe it's related?
>>
>> Thanks!
>>
>> --
>> Julien Danjou
>> // Free Software hacker
>> // https://julien.danjou.info
>>
>
>
>



-- 
Aleksandra Fedorova
CI Team Lead
bookwar



Re: [openstack-dev] [nova] Microversions support for extensions without Controller

2015-12-15 Thread Alex Xu
Hi, Alexandre,

We discussed this at the API meeting:
http://eavesdrop.openstack.org/meetings/nova_api/2015/nova_api.2015-12-15-12.00.log.html

In the end, people agreed on merging os-user-data into servers. I will submit
a patch for this merge.

Thanks
Alex

2015-12-14 21:24 GMT+08:00 Alex Xu :

> Hi, Alexandre,
>
> Yes, I think we need pass the version into `server_update` extension
> point. My irc nick is alex_xu, let me know if you have any trouble with
> this.
>
> Thanks
> Alex
>
> 2015-12-13 2:34 GMT+08:00 Alexandre Levine :
>
>> Hi all,
>>
>> os-user-data extension implements server_create method to add user_data
>> for server creation. No Controller is used for this, only "class
>> UserData(extensions.V21APIExtensionBase)".
>>
>> I want to add server_update method allowing to update the user_data.
>> Obviously I have to add it as a microversioned functionality.
>>
>> And here is the problem: there is no information about the incoming
>> request version in this code. It is available for Controllers only. But
>> checking the version in controller would be too late, because the instance
>> is already updated (non-generator extensions are post-processed).
>>
>> Can anybody guide me how to resolve this collision?
>>
>> Would it be possible to just retroactively add the user_data modification
>> for the whole 2.1 version skipping the microversioning? Or we need to
>> change nova so that request version is passed through to extension?
>>
>> Best regards,
>>   Alex Levine
>>
>> P.S. Sorry for the second attempt - previous letter went with [openstack]
>> instead of [openstack-dev] in the Subject.
>>
>>
>>
>>
>
>


[openstack-dev] [oslo][oslo.log]

2015-12-15 Thread Vladislav Kuzmin
Hi,

I want to specify all my options in a YAML file, because it is much more
readable. But I must use an ini file, because oslo.log uses
logging.config.fileConfig for reading the config file (
https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L216).
Why can't we use a YAML file? Can I propose a solution for that?

Thanks.
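For what it's worth, the standard library already offers a dict-based alternative to fileConfig, so a YAML file could in principle be bridged with `yaml.safe_load` plus `logging.config.dictConfig`. The sketch below uses an inline dict (what the YAML loader would return) to stay dependency-free; whether oslo.log could adopt this path is exactly the open question here:

```python
import logging
import logging.config

# This dict is what yaml.safe_load() would return for an equivalent
# YAML logging-config file (logger/handler names are invented).
config = {
    "version": 1,
    "formatters": {"plain": {"format": "%(levelname)s %(name)s: %(message)s"}},
    "handlers": {"console": {"class": "logging.StreamHandler",
                             "formatter": "plain"}},
    "loggers": {"myapp": {"level": "DEBUG", "handlers": ["console"]}},
}

logging.config.dictConfig(config)
logging.getLogger("myapp").debug("dict-based config works")
```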


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Igor Kalnitsky
Artem -

> PostgreSQL-9.2 will reach end-of-life at September 2017 according to [0].

Python 2.7 will reach end-of-life at the beginning of 2020. However,
we don't drop Python 2.7 and don't start using Python 3.5 instead.

Moreover, we aren't going to have CentOS 7 forever. I believe either a
new CentOS will be released or they will update the PostgreSQL package. So
it's all about the packaging team supporting one more package (which I'm
trying to avoid).

> 9.2 is slightly incompatible with 9.3, according to [1]

Nice catch, thank you. However, we don't back up the database as
PostgreSQL binaries. We use an SQL-based backup, and we use the psql client
to restore it (not pg_upgrade). So there should be no problems.

> Shared memory usage is different between 9.2 and 9.3 and this could
> bring some troubles and would require config file reworking.

AFAIK, we use default settings (no custom configs). But that must be checked.


On Tue, Dec 15, 2015 at 2:23 PM, Artem Silenkov  wrote:
> Hello!
>
> I got another few points against downgrading.
>
> 1. PostgreSQL-9.2 will reach end-of-life at September 2017 according to [0].
> With high probability it means that we will have 9.2 version in centos repos
> when fuel9.0 arrives.
> It means that we will have to repackage it anyway just later a little bit.
>
> 2. 9.2 is slightly incompatible with 9.3, according to [1].
> Downgrading is not an easy task; pg_dump and pg_restore from different
> package versions can't work together.
>
> 3. Shared memory usage is different between 9.2 and 9.3 and this could bring
> some troubles and would require config file reworking.
>
>
> [0]: http://www.postgresql.org/support/versioning/
> [1]: http://www.postgresql.org/docs/9.3/static/release-9-3.html
>
> Offtopic sorry for this ->
> If we want to reduce number of package we maintain we should start from ruby
> Eg.
> Gems we use are deprecated like 5 years ago and bring to the table a lot of
> efforts repackaging unsupported software.
>
> Regards,
>
> Artem Silenkov
> ---
> MOS-Packaging
>
> On Tue, Dec 15, 2015 at 1:28 PM, Julien Danjou  wrote:
>>
>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>>
>> > The things I want to notice are:
>> >
>> > * Currently we aren't tied up to PostgreSQL 9.3.
>> > * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>> > set of JSON operations.
>>
>> I'm curious and have just a small side question: does that mean Fuel is
>> only going to be able to run with PostgreSQL?
>>
>> I also see
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>> maybe it's related?
>>
>> Thanks!
>>
>> --
>> Julien Danjou
>> // Free Software hacker
>> // https://julien.danjou.info
>>
>>
>



Re: [openstack-dev] [fuel] OpenStack versioning in Fuel

2015-12-15 Thread Oleg Gelbukh
I have a few changes in review [0] that implement the plan outlined in the
bug [1] for a seamless merge of the new versioning schema (liberty-8.0). With
those changes merged in order, we should be OK without changing the ISO in
Fuel infra.

I also have a version of the ISO with a green BVT that incorporates the
changes listed above. It could replace the current ISO in Fuel infra any time
we're ready for it. Currently I'm trying to get green system tests on it as
well.

We just need to decide on what path we want to take.

[0]
https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1503663,n,z
[1] https://bugs.launchpad.net/fuel/+bug/1503663/comments/10

--
Best regards,
Oleg Gelbukh

On Tue, Dec 15, 2015 at 8:58 PM Dmitry Klenov  wrote:

> Hi folks,
>
> I would propose to keep the current versioning schema until the Fuel release
> schedule is fully aligned with OpenStack releases. AFAIK this is expected to
> happen from 9.0 on. After that we can switch to OpenStack version names.
>
> BR,
> Dmitry.
>
> On Tue, Dec 15, 2015 at 8:41 PM, Igor Kalnitsky 
> wrote:
>
>> Folks,
>>
>> I want to bring this up again. There has been no progress since Oleg's
>> last mail, and we must decide. It's good that we still have
>> "2015.1.0-8.0" version while OpenStack uses "Liberty" name for
>> versions.
>>
>> Let's decide which name to use, file a bug and finally resolve it.
>>
>> - Igor
>>
>> On Thu, Oct 22, 2015 at 10:23 PM, Oleg Gelbukh 
>> wrote:
>> > Igor, it is interesting that you mention backward compatibility in this
>> > context.
>> >
>> > I can see lots of code in Nailgun that checks for release version to
>> > enable/disable features that were added or removed more than 2 releases
>> > before [1] [2] [3] (there's a lot more).
>> >
>> > What should we do about that code? I believe we could 'safely' delete
>> it. It
>> > will make our code base much more compact and supportable without even
>> > decoupling serializers, etc. Is my assumption correct, or I just missing
>> > something?
>> >
>> > This will also help to switch to another scheme of versioning of
>> releases,
>> > since there will be much less places where those version scheme is
>> > hardcoded.
>> >
>> > [1]
>> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/release.py#L142-L145
>> > [2]
>> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L554-L555
>> > [3]
>> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/objects/serializers/node.py#L124-L126
>> >
>> > --
>> > Best regards,
>> > Oleg Gelbukh
>> >
>> > On Mon, Oct 19, 2015 at 6:34 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> > wrote:
>> >>
>> >> Oleg,
>> >>
>> >> I think we can remove this function for new releases and keep them
>> >> only for backward compatibility with previous ones. Why not? If
>> >> there's a way to do things better let's do them better. :)
>> >>
>> >> On Sat, Oct 17, 2015 at 11:50 PM, Oleg Gelbukh 
>> >> wrote:
>> >> > In short, because of this:
>> >> >
>> >> >
>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/db/sqlalchemy/models/release.py#L74-L99
>> >> >
>> >> > Unless we use dashed 2-component version where OpenStack version
>> comes
>> >> > first, followed by version of Fuel, this will break creation of a
>> >> > cluster
>> >> > with given release.
>> >> >
>> >> > -Oleg
>> >> >
>> >> > On Sat, Oct 17, 2015 at 10:24 PM, Sergii Golovatiuk
>> >> >  wrote:
>> >> >>
>> >> >> Why can't we use 'liberty' without 8.0?
>> >> >>
>> >> >> On Sat, 17 Oct 2015 at 19:33, Oleg Gelbukh 
>> >> >> wrote:
>> >> >>>
>> >> >>> After closer look, the only viable option in closer term seems to
>> be
>> >> >>> 'liberty-8.0' version. It does not to break comparisons that exist
>> in
>> >> >>> the
>> >> >>> code and allows for smooth transition.
>> >> >>>
>> >> >>> --
>> >> >>> Best regards,
>> >> >>> Oleg Gelbukh
>> >> >>>
>> >> >>> On Fri, Oct 16, 2015 at 5:35 PM, Igor Kalnitsky
>> >> >>> 
>> >> >>> wrote:
>> >> 
>> >>  Oleg,
>> >> 
>> >>  Awesome! That's what I was looking for. :)
>> >> 
>> >>  - Igor
>> >> 
>> >>  On Fri, Oct 16, 2015 at 5:09 PM, Oleg Gelbukh <
>> ogelb...@mirantis.com>
>> >>  wrote:
>> >>  > Igor,
>> >>  >
>> >>  > Got your question now. Coordinated point (maintenance) releases
>> are
>> >>  > dropped.
>> >>  > [1] [2]
>> >>  >
>> >>  > [1]
>> >>  >
>> >>  >
>> http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
>> >>  > [2]
>> >>  >
>> >>  >
>> >>  >
>> https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fliberty_releases
>> >>  >
>> >>  > --
>> >>  > Best regards,
>> >>  > Oleg Gelbukh
>> >>  >
>> >>  > On Fri, Oct 16, 2015 at 3:30 PM, Igor Kalnitsky
>> >> 

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Clint Byrum
Hi! Can I offer a counter point?

Quotas are for _real_ resources.

Memory, CPU, disk, bandwidth. These are all _closely_ tied to things
that cost real money and cannot be conjured from thin air. As such, a
user allocating a billion or two containers is not limited by
Magnum, but by real things that they must pay for. If they have enough
Nova quota to allocate 1 billion tiny pods, why would Magnum stop
them? Who actually benefits from that limitation?

So I suggest that you not add any detailed, complicated quota system to
Magnum. If there are real limitations to the implementation that Magnum
has chosen, such as we had in Heat (the entire stack must fit in memory),
then make that the limit. Otherwise, let their vcpu, disk, bandwidth,
and memory quotas be the limit, and enjoy the profit margins that having
an unbound force multiplier like Magnum in your cloud gives you and your
users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
> Hi All,
> 
> Currently, it is possible to create an unlimited number of resources like
> bays/pods/services. In Magnum, there should be a limit on how many Magnum
> resources a user or project can create,
> and the limit should be configurable[1].
> 
> I proposed following design :-
> 
> 1. Introduce new table magnum.quotas
> +------------+--------------+------+-----+---------+----------------+
> | Field      | Type         | Null | Key | Default | Extra          |
> +------------+--------------+------+-----+---------+----------------+
> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> | created_at | datetime     | YES  |     | NULL    |                |
> | updated_at | datetime     | YES  |     | NULL    |                |
> | deleted_at | datetime     | YES  |     | NULL    |                |
> | project_id | varchar(255) | YES  | MUL | NULL    |                |
> | resource   | varchar(255) | NO   |     | NULL    |                |
> | hard_limit | int(11)      | YES  |     | NULL    |                |
> | deleted    | int(11)      | YES  |     | NULL    |                |
> +------------+--------------+------+-----+---------+----------------+
> 
> resource can be Bay, Pod, Containers, etc.
> 
> 
> 2. API controller for quota will be created to make sure basic CLI commands
> work.
> 
> quota-show, quota-delete, quota-create, quota-update
> 
> 3. When the admin specifies a quota of X resources of a given type, the code
> should abide by it. For example, if the hard limit for Bay is 5 (i.e. a
> project can have at most 5 Bays) and a user in that project tries to exceed
> the hard limit, the request won't be allowed. The same goes for other resources.
> 
> 4. Please note the quota validation only works for resources created via
> Magnum. I could not think of a way for Magnum to know whether COE-specific
> utilities created a resource in the background. One way could be to compare
> what's stored in magnum.quotas with the information about the actual
> resources created for a particular bay in k8s/the COE.
> 
> 5. Introduce a config variable to set quotas values.
> 
> If everyone agrees will start the changes by introducing quota restrictions
> on Bay creation.
> 
> Thoughts ??
> 
> 
> -Vilobh
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
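A minimal sketch of how the hard-limit check in point 3 could work. The function name and dict shapes below are illustrative, not actual Magnum code; a real implementation would query the magnum.quotas table and count live resources instead:

```python
class QuotaExceeded(Exception):
    """Creating the resource would exceed the project's hard limit."""


def check_quota(quotas, usage, project_id, resource, requested=1):
    """Allow creation only if it stays within the configured hard limit.

    quotas: {(project_id, resource): hard_limit}; a missing key means unlimited
    usage:  {(project_id, resource): current count}
    """
    hard_limit = quotas.get((project_id, resource))
    if hard_limit is None:
        return  # no quota row for this project/resource: unlimited
    in_use = usage.get((project_id, resource), 0)
    if in_use + requested > hard_limit:
        raise QuotaExceeded("%s quota exceeded for project %s: "
                            "limit %d, in use %d"
                            % (resource, project_id, hard_limit, in_use))


# a project limited to 5 bays may create its 5th bay...
quotas = {("proj-a", "bay"): 5}
check_quota(quotas, {("proj-a", "bay"): 4}, "proj-a", "bay")
# ...but the 6th is rejected
try:
    check_quota(quotas, {("proj-a", "bay"): 5}, "proj-a", "bay")
except QuotaExceeded as exc:
    print(exc)
```

The check would run inside the API controller before the Bay (or other resource) row is created, which also means it cannot see COE-side objects created behind Magnum's back, as point 4 notes.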

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Alexey Shtokolov
On Tue, Dec 15, 2015 at 9:47 PM, Igor Kalnitsky 
wrote:
> * 11 votes for keeping 9.2
> * 4 votes for restoring 9.3

Igor, please remove my vote from "9.2", I voted for "I'm too conservative,
I want to see classic RDBMS approach" , but not to keep accidentally
downgraded PostgreSQL

If you're asking about using JSON in our PostgreSQL - no, it's not
obligatory, but we can discuss it for specific cases (IMO it's like
"syntactic sugar" for an RDB).
If you're asking whether we should unexpectedly downgrade the DB version after
FF and make the upgrade procedure more complicated - I strongly disagree.

The reasons are well described above (in Alexandra's mail):

> * It is clear that PostgreSQL downgrade wasn't planned and discussed
> before Feature Freeze, so this change is accidental. We didn't
> investigate all possible consequences and changes required for the
> switch.
> * In Infra we have all our unit tests run on PostgreSQL 9.3.
> * For Maintenance team it adds the burden of supporting yet another
> version while they have PostgreSQL 9.3 anyway. So this change doesn't
> reduce number of supported versions, it rather adds one to the list.

So I think we should keep 9.3 and continue this discussion in the beginning
of 9.0 release.

Best regards,
Alexey Shtokolov

2015-12-15 22:58 GMT+03:00 Vitaly Kramskikh :

> +1 to Vova and Sasha,
>
> I voted for 9.2 at the beginning of the thread due to potential packaging
> and infrastructure issues, but since Artem and Sasha insist on 9.3, I see
> no reasons to keep 9.2.
>
> 2015-12-15 22:19 GMT+03:00 Aleksandra Fedorova :
>
>> Igor,
>>
>> that's an anonymous vote for question stated in a wrong way. Sorry,
>> but it doesn't really look like a valuable input for the discussion.
>>
>> On Tue, Dec 15, 2015 at 9:47 PM, Igor Kalnitsky 
>> wrote:
>> > FYI: so far (according to poll [1]) we have
>> >
>> > * 11 votes for keeping 9.2
>> > * 4 votes for restoring 9.3
>> >
>> > [1]
>> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
>> >
>> > On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin 
>> wrote:
>> >> Folks
>> >>
>> >> Let me add my 2c here.
>> >>
>> >> I am for using Postgres 9.3. Here is an additional argument to the ones
>> >> provided by Artem, Aleksandra and others.
>> >>
>> >> Fuel is being sometimes highly customized by our users for their
>> specific
>> >> needs. It has been Postgres 9.3 for a while and they might have as well
>> >> gotten used to it and assumed by default that this would not change.
>> So some
>> >> of their respective features they are developing for their own sake may
>> >> depend on Postgres 9.3 and we will never be able to tell the fraction
>> of
>> >> such use cases. Moreover, downgrading DBMS version of Fuel should be
>> >> inevitably considered as a 'deprecation' of some features our software
>> suite
>> >> is providing to our users. This actually means that we MUST provide our
>> >> users with a warning and deprecation period to allow them to adjust to
>> these
>> >> changes. Obviously, accidental change of Postgres version does not
>> follow
>> >> such a policy in any way. So I see no other ways except for getting
>> back to
>> >> Postgres 9.3.
>> >>
>> >>
>> >> On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> Hey Mike,
>> >>>
>> >>> Thanks for your input.
>> >>>
>> >>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>> >>>
>> >>> We still need to fix the code, i.e. change ARRAY-specific queries
>> >>> to JSON ones around the code. ;)
>> >>>
>> >>> > there's already a mostly finished PR for SQLAlchemy support in the
>> >>> > queue.
>> >>>
>> >>> Does it mean SQLAlchemy will have one unified interface to make JSON
>> >>> queries? So we can use different backends if necessary?
>> >>>
>> >>> Thanks,
>> >>> - Igor
>> >>>
>> >>> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer 
>> wrote:
>> >>> >
>> >>> >
>> >>> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>> >>> >> Hey Julien,
>> >>> >>
>> >>> >>>
>> >>> >>>
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>> >>> >>
>> >>> >> I believe this blueprint is about DB for OpenStack cloud (we use
>> >>> >> Galera now), while here we're talking about DB backend for Fuel
>> >>> >> itself. Fuel has a separate node (so called Fuel Master) and we use
>> >>> >> PostgreSQL now.
>> >>> >>
>> >>> >>> does that mean Fuel is only going to be able to run with
>> PostgreSQL?
>> >>> >>
>> >>> >> Unfortunately we already tied up to PostgreSQL. For instance, we
>> use
>> >>> >> PostgreSQL's ARRAY column type. Introducing JSON column is one more
>> >>> >> way to tighten knots harder.
>> >>> >
>> >>> > actually not.  if you replace your ARRAY columns with JSON entirely,
>> >>> > MySQL has JSON as well now:
>> >>> > 
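To illustrate the portability point with nothing but the standard library (this is not Fuel's actual schema): a JSON document stored in a plain text column round-trips on any backend, which is exactly what a PostgreSQL-specific ARRAY column can't do:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE releases "
             "(id INTEGER PRIMARY KEY, roles_metadata TEXT)")

# store structured data as a JSON document rather than a DB-specific ARRAY type
doc = {"roles": ["controller", "compute"]}
conn.execute("INSERT INTO releases VALUES (1, ?)", (json.dumps(doc),))

row = conn.execute(
    "SELECT roles_metadata FROM releases WHERE id = 1").fetchone()
print(json.loads(row[0]))  # {'roles': ['controller', 'compute']}
```

A unified SQLAlchemy JSON type, as discussed above, would hide the explicit dumps/loads and map to the native JSON column type where the backend has one.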

Re: [openstack-dev] [Neutron][Tricircle]The process for adding networking-tricircle

2015-12-15 Thread Ihar Hrachyshka

Zhipeng Huang  wrote:


Hi Neutrinos,

We, the Tricircle team, want to have a neutron-ovn-like agent in Neutron
for our networking management.


Before we go into discussing tech details on how new subprojects are  
introduced, let me ask one question: have you actually considered  
integrating your needs into existing projects instead of introducing  
another one? Why isn’t it enough?


I feel the stadium has unwillingly started to encourage forks instead of
collaboration on a common code base.


Ihar



Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Ihar Hrachyshka

Soichi Shigeta  wrote:



 Hi,

  We found a problem where the neutron ovs-agent deletes taas flows.

  o) Problem description:

 Background:
  At Liberty, a bug fix to drop only old flows was merged
  to Neutron.
  When ovs-agent is restarted, the cleanup logic drops flow
  entries unless they are stamped by agent_uuid (recorded as
  a cookie).

  bug: #1383674
   "Restarting neutron openvswitch agent causes network
hiccup by throwing away all flows"
   https://bugs.launchpad.net/neutron/+bug/1383674

  commit: 73673beacd75a2d9f51f15b284f1b458d32e992e (patch)
https://git.openstack.org/cgit/openstack/neutron/commit/?id=73673beacd75a2d9f51f15b284f1b458d32e992e


 Problem:
  Cleanup is done only once, but it does not seem to run
  until a port configuration change occurs.

  Therefore, taas flows will be deleted as follows:
   1. Start a new compute node or restart an existing compute node.
   2. Start taas agent on the compute node.
  --> taas agent creates flows
  (these flows are not stamped by using ovs-agent's uuid)
   3. Deploy a vm on the compute node.
  --> 1. neutron changes port configuration
  2. subsequently, the cleanup logic is invoked
  3. ovs-agent drops taas flows

 Specifically, following taas flows in br_tun are dropped:
 -
  table=35, priority=2,reg0=0x0 actions=resubmit(,36)
  table=35, priority=1,reg0=0x1 actions=resubmit(,36)
  table=35, priority=1,reg0=0x2 actions=resubmit(,37)
 -

 log in q-agt.log
 -
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
cookie=0x0, duration=434.59s, table=35, n_packets=0, n_bytes=0,
idle_age=434, priority=2,reg0=0x0 actions=resubmit(,36)
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
cookie=0x0, duration=434.587s, table=35, n_packets=0, n_bytes=0,
idle_age=434, priority=1,reg0=0x1 actions=resubmit(,36)
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
cookie=0x0, duration=434.583s, table=35, n_packets=0, n_bytes=0,
idle_age=434, priority=1,reg0=0x2 actions=resubmit(,37)
 -


  o) Impact for TaaS:

 Because flows in br_tun are dropped by the cleanup logic, mirrored
 packets will not be sent to a monitoring vm running on another host.

 Note: Mirrored packets are still sent when the source vm and the
   monitoring vm are running on the same host (not affected by
   flows in br_tun).


  o) How to reproduce:

 1. Start a new compute node or restart an existing compute node.
(Actually, restarting ovs-agent is enough.)
 2. Start (or restart) taas agent on the compute node.
 3. Deploy a vm on the compute node.
--> The cleanup logic drops taas flows.


  o) Workaround:

 After a vm is deployed on a (re)started compute node, restart taas
 agent before creating a tap-service or tap-flow.
 That is, create taas flows after cleanup has been done.

 Note that cleanup is done only once while an ovs-agent is
 running.


  o) An idea to fix:

 1. Set "taas" stamp(*) to taas flows.
 2. Modify the cleanup logic in ovs-agent not to delete entries
stamped as "taas".

 * Maybe a static string.
   If we need to use a string which generated dynamically
   (e.g. uuid), API to interact with ovs-agent is required.
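A sketch of that stamping idea: taas flows carry a fixed cookie, and the cleanup filter spares any flow stamped with either the agent's cookie or the taas stamp. The cookie value and the function names here are illustrative, not actual neutron/taas code:

```python
TAAS_COOKIE = "0x74616173"  # ASCII "taas" as hex; an illustrative static stamp


def taas_flow(table, priority, reg0, next_table):
    # flow spec suitable for e.g. `ovs-ofctl add-flow br-tun <spec>`
    return ("cookie=%s,table=%d,priority=%d,reg0=%s,actions=resubmit(,%d)"
            % (TAAS_COOKIE, table, priority, reg0, next_table))


def stale_flows(flows, agent_cookie):
    """Flows the ovs-agent cleanup may delete: anything stamped neither
    with the current agent's cookie nor with the taas stamp."""
    keep = {agent_cookie, TAAS_COOKIE}
    return [f for f in flows if f["cookie"] not in keep]


flows = [
    {"cookie": "0x0"},        # unstamped leftover: dropped, as in the bug report
    {"cookie": TAAS_COOKIE},  # stamped taas flow: survives cleanup
    {"cookie": "0xabcdef"},   # the restarted agent's own flow
]
print([f["cookie"] for f in stale_flows(flows, "0xabcdef")])  # ['0x0']
```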


API proposal with some consideration for flow cleanup not dropping flows  
for external code is covered in the following email thread:  
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html


I believe you would need to adopt the extensions API once it’s in, moving  
from setup with a separate agent for your feature to l2 agent extension for  
taas that will run inside OVS agent.


Ihar



Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-15 Thread James Penick
> getting rid of the raciness of ClusteredComputeManager in my
>current deployment. And I'm willing to help other operators do the same.

 You do alleviate the race, but at the cost of complexity and
unpredictability.  Breaking that down, let's say we go with the current
plan and the compute host abstracts hardware specifics from Nova.  The
compute host will report (sum of resources)/(number of managed nodes).  If
the hardware beneath that compute host is heterogeneous, then the resources
reported up to nova are not correct, and that really does have a significant
impact on deployers.

 As an example: Let's say we have 20 nodes behind a compute process.  Half
of those nodes have 24T of disk, the other have 1T.  An attempt to schedule
a node with 24T of disk will fail, because Nova scheduler is only aware of
12.5T of disk.
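The arithmetic behind that example, spelled out with the numbers from the paragraph above:

```python
# 20 nodes behind one compute process: half with 24T of disk, half with 1T
node_disk_tb = [24] * 10 + [1] * 10

# the compute host reports (sum of resources) / (number of managed nodes)
reported_tb = sum(node_disk_tb) / len(node_disk_tb)
print(reported_tb)  # 12.5

# a request for a 24T node fails against the averaged view...
print(24 <= reported_tb)  # False

# ...even though half the nodes could actually serve it
print(sum(1 for d in node_disk_tb if d >= 24))  # 10
```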

 Ok, so one could argue that you should just run two compute processes per
type of host (N+1 redundancy).  If you have different raid levels on two
otherwise identical hosts, you'll now need a new compute process for each
variant of hardware.  What about host aggregates or availability zones?
This sounds like an N^2 problem.  A mere 2 host flavors spread across 2
availability zones means 8 compute processes.

I have hundreds of hardware flavors, across different security, network,
and power availability zones.

>None of this precludes getting to a better world where Gantt actually
>exists, or the resource tracker works well with Ironic.

It doesn't preclude it, no. But Gantt is dead[1], and I haven't seen any
movement to bring it back.

>It just gets us to an incrementally better model in the meantime.

 I strongly disagree. Will Ironic manage its own concept of availability
zones and host aggregates?  What if nova changes their model, will Ironic
change to mirror it?  If not I now need to model the same topology in two
different ways.

 In that context, breaking out scheduling and "hiding" ironic resources
behind a compute process is going to create more problems than it will
solve, and is not the "Least bad" of the options to me.

-James
[1] http://git.openstack.org/cgit/openstack/gantt/tree/README.rst

On Mon, Dec 14, 2015 at 5:28 PM, Jim Rollenhagen 
wrote:

> On Mon, Dec 14, 2015 at 04:15:42PM -0800, James Penick wrote:
> > I'm very much against it.
> >
> >  In my environment we're going to be depending heavily on the nova
> > scheduler for affinity/anti-affinity of physical datacenter constructs,
> > TOR, Power, etc. Like other operators we need to also have a concept of
> > host aggregates and availability zones for our baremetal as well. If
> these
> > decisions move out of Nova, we'd have to replicate that entire concept of
> > topology inside of the Ironic scheduler. Why do that?
> >
> > I see there are 3 main problems:
> >
> > 1. Resource tracker sucks for Ironic.
> > 2. We need compute host HA
> > 3. We need to schedule compute resources in a consistent way.
> >
> > We've been exploring options to get rid of RT entirely. However, melwitt
> > suggested that by improving RT itself, and changing it from a pull
> > model to a push model, we skip a lot of these problems. I think it's an
> excellent
> > point. If RT moves to a push model, Ironic can dynamically register nodes
> > as they're added, consumed, claimed, etc. and update their state in Nova.
> >
> >  Compute host HA is critical for us, too. However, if the compute hosts
> are
> > not responsible for any complex scheduling behaviors, it becomes much
> > simpler to move the compute hosts to being nothing more than dumb workers
> > selected at random.
> >
> >  With this model, the Nova scheduler can still select compute resources
> in
> > the way that it expects, and deployers can expect to build one system to
> > manage VM and BM. We get rid of RT race conditions, and gain compute HA.
>
> Right, so Deva mentioned this here. Copied from below:
>
> > > > Some folks are asking us to implement a non-virtualization-centric
> > > > scheduler / resource tracker in Nova, or advocating that we wait for
> the
> > > > Nova scheduler to be split-out into a separate project. I do not
> believe
> > > > the Nova team is interested in the former, I do not want to wait for
> the
> > > > latter, and I do not believe that either one will be an adequate
> solution
> > > > -- there are other clients (besides Nova) that need to schedule
> workloads
> > > > on Ironic.
>
> And I totally agree with him. We can rewrite the resource tracker, or we
> can break out the scheduler. That will take years - what do you, as an
> operator, plan to do in the meantime? As an operator of ironic myself,
> I'm willing to eat the pain of figuring out what to do with my
> out-of-tree filters (and cells!), in favor of getting rid of the
> raciness of ClusteredComputeManager in my current deployment. And I'm
> willing to help other operators do the same.
>
> We've been talking about this for close to a year already - we need
> to actually do 

Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2015-12-15 Thread Fox, Kevin M
The one thing, as an Op, I'd like to see avoided is having the template language 
be Turing complete. When I'm provisioning heat-engines it's much easier if you 
know how many you need when the user can't force them to spin in an infinite 
loop or other such nasties. I think jinja manages to do that, but I'm not 
totally sure. I'd think javascript wouldn't, though. AWS's conditionals are very 
basic and also should be predictable halting-wise. yaql might be a good 
compromise between power and restrictedness.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, December 15, 2015 10:21 AM
To: openstack-dev
Subject: Re: [openstack-dev] [Heat] Status of the Support Conditionals in   
Heat templates

Excerpts from Fox, Kevin M's message of 2015-12-15 09:07:02 -0800:
> My $0.02:
>
> heat as it is today requires all users to be devops, and to carefully craft 
> the templates launched specific to the cloud and the particular app they are 
> trying to write, making sharing code between heat users difficult. This means 
> the potential user base of heat is restricted to developers knowledgeable in 
> the heat template format, or those using openstack services that wrap up in front 
> of heat (trove, sahara, etc). This mostly relegates heat to the role of 
> "plumbing". Whereas, I see it as a first class orchestration engine for the 
> cloud. Something that should be usable by all in its own right.
>
> Just about every attempt I've seen so far has required something like jinja 
> in front to generate the heat templates since heat itself is not generic 
> enough. This means its not available from Horizon, and then is only usable by 
> a small fraction of openstack users.
>
> I've had some luck approximating conditionals using maps and nested 
> stacks. It works but it's really ugly to code. But from an end user's 
> perspective, it's very nice to use.
>
> Since everyone's reinventing the templating wheel over and over, heat should 
> itself gain a bit more templatability in its templates so that everyone can 
> stop having to rewrite template engines on top of heat, and heat users don't 
> have to take so much time customizing templates so they can launch them.
>
> I don't particularly care what the best solution to making conditionals 
> available is. if you can guarantee jinja templates will always halt in a 
> reasonable amount of time and is sandboxed appropriately, then sticking it in 
> heat would be a good solution. If not, even some simple conditionals ala AWS 
> would be extremely welcome. But either way, it should take heat parameters 
> in, and operate on them. The heat parameters section is a great contract 
> today between heat users, and heat template developers. Its one of the 
> coolest things about Heat. It makes for a much better user experience in 
> Horizon and the cli. And when I say users, I mean "heat users" != "heat 
> template developers". In the same way, a bash script user may not be able to 
> even read a bash script, but they don't have to edit one to use it. They just 
> call it with parameters.
>


I agree with your sentiments Kevin. As somebody who struggled with Heat
before it had provider templates, and ended up writing a templating
solution to solve it, I always felt that Heat was holding me back from
writing reusable, composable templates. The CloudFormation way of doing
conditions seems worth copying.

Jinja2 in the engine, however, is not a good idea. Can it be contained?
Maybe. However, you already have Javascript that is built for this exact
purpose and already optimized as such.



Re: [openstack-dev] [Sender Auth Failure] Re: [neutron] How could an L2 agent extension access agent methods ?

2015-12-15 Thread Ben Pfaff
On Tue, Dec 15, 2015 at 05:58:14PM +, Frances, Margaret wrote:
> 2. OpenFlow's Goto instruction directs a frame from one table to the next.
>  A redirection in this sense must be to a higher-numbered table, which is
> to say that OF pipeline processing can only go forward (see p.18, para.2
> of the 1.4.1 spec
> <…specifications/openflow/openflow-switch-v1.4.1.pdf>).  However, OvS (at
> least v2.0.2) implements a resubmit action, which re-searches another
> table (higher-, lower-, or even same-numbered) and executes any actions
> found there in addition to any subsequent actions in the current flow
> entry.  It is by using resubmit that the proposed design could work, as
> shown in the ovs-ofctl command posted here.  (Maybe there are other ways,
> too.)  The resubmit action is a Nicira vendor extension that, at least at
> one point and maybe still, was known to be implemented only by OvS.  I
> mention this because I wonder if the proposed design (and my sample
> command) calls for flow traversal in a manner not explicitly supported by
> OpenFlow and so may not work in future versions of OvS.

OVS has has "resubmit" for a long time and it's heavily used by lots of
projects.  It's not going to go away.
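A tiny model of the distinction discussed above: the spec's Goto-Table is forward-only, while the Nicira resubmit extension can target any table. This is just the stated rule expressed as code, not OvS internals:

```python
def goto_table_valid(current, target):
    # OpenFlow Goto-Table instruction: the pipeline may only move forward
    return target > current


def resubmit_valid(current, target):
    # OVS resubmit (Nicira extension): any table, same- or lower-numbered too
    return True


# the taas flows in question resubmit from table 35 to tables 36/37:
print(goto_table_valid(35, 36))  # True: legal even in plain OpenFlow
print(goto_table_valid(36, 35))  # False: only expressible via resubmit
print(resubmit_valid(36, 35))    # True
```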



Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2015-12-15 Thread Clint Byrum

Excerpts from Fox, Kevin M's message of 2015-12-15 17:21:13 -0800:
> the one thing as an Op I'd like to see avoided is having the template 
> language be Turing complete. When I'm provisioning heat-engines its much 
> easier if you know how many you need when the user can't force them to spin 
> in an infinite loop or other such nasties. I think jinja maybe manages to do 
> that, but I'm not totally sure. I'd think javascript woudln't though. AWS's 
> conditionals are very basic and also should be predictable halting wise. yaql 
> might be a good compromise between power and restrictedness.

Either give people the power, or don't. But going half-way and inventing
new things like YAQL is just going to frustrate users as they become more
sophisticated and realize it isn't the droid they're looking for. By then
they're invested, and will likely be forced back into templating their
templates while they wait for slightly more power to land in HOT. This
feels a lot like the way PHP took over the world: as the world gained
software engineering sophistication, they recoiled in horror and could
do nothing about it. Nobody wants to be responsible for making Heat the
PHP of orchestration systems, right?

The nice thing about those declared conditions is they are _SIMPLE_
and they are definitely not turing complete. They remind me of Ansible's
"when:". And likewise, I think a "with_items:" could probably be added
too to allow the two to work together without needing a turing complete
language to augment things server side.
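As a sketch of why such declared conditions stay safe: a condition is a finite data tree evaluated by structural recursion, so evaluation always halts. The function names (`equals`, `not`, ...) are illustrative, loosely modeled on CloudFormation's Fn::* condition functions, not an actual Heat feature:

```python
def resolve(value, params):
    # a {"ref": name} node looks up a template parameter; anything else is a literal
    if isinstance(value, dict) and "ref" in value:
        return params[value["ref"]]
    return value


def evaluate(cond, params):
    """Evaluate a condition tree. Halts on any input: there are no loops,
    only structural recursion over a finite tree."""
    (fn, args), = cond.items()
    if fn == "equals":
        return resolve(args[0], params) == resolve(args[1], params)
    if fn == "not":
        return not evaluate(args[0], params)
    if fn == "and":
        return all(evaluate(a, params) for a in args)
    if fn == "or":
        return any(evaluate(a, params) for a in args)
    raise ValueError("unknown condition function: %s" % fn)


cond = {"and": [{"equals": [{"ref": "env"}, "prod"]},
                {"not": [{"equals": [{"ref": "region"}, "test"]}]}]}
print(evaluate(cond, {"env": "prod", "region": "us-east"}))  # True
print(evaluate(cond, {"env": "dev", "region": "us-east"}))   # False
```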

But, if there are a ton of users clamoring for complicated server side
processing of their templates.. I would go all the way and maybe treat it
like AWS's lambda service... a thing unto itself that Heat just happens
to use.



Re: [openstack-dev] [telemetry][aodh][vitrage] The purpose of notification about alarm updating

2015-12-15 Thread liu sheng
Hi AFEK,


Sorry, I was busy with other things and didn't pay much attention to the Vitrage 
project (but I will :) ). The notification message I mentioned doesn't mean 
the notification based on "alarm_actions". Currently, when an alarm's state 
changes, it triggers the alarm actions specified in the alarm definition, and 
also sends a notification message about the alarm change to notification.info 
(the default config value) meanwhile.


thanks
Liu sheng

At 2015-12-03 00:31:05, "AFEK, Ifat (Ifat)"  
wrote:
>Hi,
>
>In Vitrage[3] project, we would like to be notified on every alarm that is 
>triggered, and respond immediately (e.g. by generating RCA insights, or by 
>triggering new alarms on other resources). We are now in the process of 
>designing our integration with AODH.
>
>If I understood you correctly, you want to remove the notifications to the 
>bus, but keep the alarm_actions in the alarm definition? 
>I'd be happy to get some more details about the difference between these two 
>approaches, and why do you think the notifications should be removed.
>
>[3] https://wiki.openstack.org/wiki/Vitrage
>
>Thanks,
>Ifat.
>
>
>>
>> From: liusheng [mailto:liusheng1...@126.com] 
>> Sent: Tuesday, December 01, 2015 4:32 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [telemetry][aodh] The purpose of notification about 
>> alarm updating
>>
>> Hi folks,
>>
>> Currently, a notification message is emitted when updating an alarm 
>> (state transition, attribute update, creation). This functionality was 
>> added by change [1], but the change didn't describe any purpose. So I wonder 
>> whether there is any use for this 
>> type of notification; we can get full details about an alarm change via the 
>> alarm-history API.  The notification is implicitly 
>> ignored by default, because the "notification_driver" config option isn't 
>> configured by default.  If we enable this option in 
>> aodh.conf and enable "store_events" in ceilometer.conf, this type of 
>> notification will be stored as events. So maybe some 
>> users want to aggregate this with events? What's your opinion?
>>
>> I have made a change try to deprecate this notification, see [2].
>>
>> [1] https://review.openstack.org/#/c/48949/
>> [2] https://review.openstack.org/#/c/246727/
>>
>> BR
>> Liu sheng
>


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Mike Scherbakov
Wow such a hot topic...
I'm also one of those who voted for 9.2. But I also voted, like Alexey S., "I'm
conservative..." - I am actually mostly conservative, and would question
every new cool tool, feature, or library unless there is a very good proof for
using it. You can't build a product where 90% of the cool parts were
designed over the last year; that's the reason I'm conservative here too.

Then I learned that we actually had 9.3 before, and we are now asking whether we
want to downgrade to 9.2. My answer is _NO_ under any circumstances to such
changes after Feature Freeze. We've been running lots of installs on 9.3,
and it is too late to change the package version after FF from a risk
management perspective. Whether it is a downgrade or an upgrade.

We could consider downgrading in Fuel 9.0, but I'd weigh that very
carefully. As Vladimir Kuklin said, there may be other users who already
rely on 9.3 for some of their enhancements.

Good question from Julien - even though we are unlikely to replace Postgres
with something else in Fuel, we should still try to use as few
Postgres-specific features as possible. The reason is that we might want to
make the DB layer HA some day. In the case of MySQL, we can take our existing
ocf scripts for Galera. In the case of Postgres, we would have to learn how to
make it HA.


On Tue, Dec 15, 2015 at 2:48 PM Alexey Shtokolov 
wrote:

> Dmitry,
>
> Thank you for this document!
> Please move it on https://etherpad.openstack.org to make it accessible
>
> Best regards,
> Alexey Shtokolov
>
> 2015-12-16 1:38 GMT+03:00 Dmitry Teselkin :
>
>> Hello,
>>
>> I made an attempt to gather all valuable points 'for' and 'against'
>> 9.2.x in one document [1]. Please take a look on it, I also put some
>> comments there to keep everything in one place. I believe this can help
>> us to make deliberated decision.
>>
>> Please add more pros / cons there as I don't pretend to make a
>> full picture at the first attempt.
>>
>> Just in case, I'd prefer to 'downgrade' to 9.2 :)
>>
>> [1] https://etherpad.mirantis.net/p/7ZUruwlwJM
>>
>> On Tue, 15 Dec 2015 20:47:41 +0200
>> Igor Kalnitsky  wrote:
>>
>> > FYI: so far (according to poll [1]) we have
>> >
>> > * 11 votes for keeping 9.2
>> > * 4 votes for restoring 9.3
>> >
>> > [1]
>> >
>> https://docs.google.com/spreadsheets/d/1RNcEVFsg7GdHIXlJl-6LCELhlwQ_zmTbd40Bk_jH1m4/edit?usp=sharing
>> >
>> > On Tue, Dec 15, 2015 at 8:34 PM, Vladimir Kuklin
>> >  wrote:
>> > > Folks
>> > >
>> > > Let me add my 2c here.
>> > >
>> > > I am for using Postgres 9.3. Here is an additional argument to the
>> > > ones provided by Artem, Aleksandra and others.
>> > >
> > Fuel is sometimes highly customized by our users for their
> > specific needs. It has been Postgres 9.3 for a while, and they may well
> > have gotten used to it and assumed by default that this
> > would not change. So some of the features they are
> > developing for their own sake may depend on Postgres 9.3, and we
> > will never be able to tell the fraction of such use cases.
> > Moreover, downgrading the DBMS version of Fuel should inevitably be
> > considered a 'deprecation' of some features our software suite
> > provides to our users. This actually means that we MUST provide
> > our users with a warning and a deprecation period to allow them to
> > adjust to these changes. Obviously, an accidental change of the Postgres
> > version does not follow such a policy in any way. So I see no other
> > way except going back to Postgres 9.3.
>> > >
>> > >
>> > > On Tue, Dec 15, 2015 at 7:39 PM, Igor Kalnitsky
>> > >  wrote:
>> > >>
>> > >> Hey Mike,
>> > >>
>> > >> Thanks for your input.
>> > >>
>> > >> > actually not.  if you replace your ARRAY columns with JSON
>> > >> > entirely,
>> > >>
> >> We still need to fix the code, i.e. replace ARRAY-specific queries
> >> with JSON ones throughout the code. ;)
>> > >>
>> > >> > there's already a mostly finished PR for SQLAlchemy support in
>> > >> > the queue.
>> > >>
>> > >> Does it mean SQLAlchemy will have one unified interface to make
>> > >> JSON queries? So we can use different backends if necessary?
>> > >>
>> > >> Thanks,
>> > >> - Igor
>> > >>
>> > >> On Tue, Dec 15, 2015 at 5:06 PM, Mike Bayer 
>> > >> wrote:
>> > >> >
>> > >> >
>> > >> > On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
>> > >> >> Hey Julien,
>> > >> >>
>> > >> >>>
>> > >> >>>
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
>> > >> >>
>> > >> >> I believe this blueprint is about DB for OpenStack cloud (we use
>> > >> >> Galera now), while here we're talking about DB backend for Fuel
>> > >> >> itself. Fuel has a separate node (so called Fuel Master) and we
>> > >> >> use PostgreSQL now.
>> > >> >>
>> > >> >>> does that mean Fuel is only going to be able to run with
>> > >> >>> PostgreSQL?

Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Assaf Muller
SFC is going to hit this issue as well. Really, any out-of-tree
Neutron project that extends the OVS agent and expects things to work
:)

On Tue, Dec 15, 2015 at 9:30 AM, Ihar Hrachyshka  wrote:
> Soichi Shigeta  wrote:
>
>>
>>  Hi,
>>
>>   We found a problem where the neutron ovs-agent deletes taas flows.
>>
>>   o) Problem description:
>>
>>  Background:
>>   In Liberty, a bug fix to drop only old flows was merged
>>   into Neutron.
>>   When the ovs-agent is restarted, the cleanup logic drops flow
>>   entries unless they are stamped with agent_uuid (recorded as
>>   a cookie).
>>
>>   bug: #1383674
>>"Restarting neutron openvswitch agent causes network
>> hiccup by throwing away all flows"
>>https://bugs.launchpad.net/neutron/+bug/1383674
>>
>>   commit: 73673beacd75a2d9f51f15b284f1b458d32e992e (patch)
>>
>> https://git.openstack.org/cgit/openstack/neutron/commit/?id=73673beacd75a2d9f51f15b284f1b458d32e992e
>>
>>
>>  Problem:
>>   The cleanup runs only once, but it does not seem to run
>>   until the port configuration is changed.
>>
>>   Therefore, taas flows will be deleted as follows:
>>1. Start a new compute node or restart an existing compute node.
>>2. Start taas agent on the compute node.
>>   --> taas agent creates flows
>>   (these flows are not stamped by using ovs-agent's uuid)
>>3. Deploy a vm on the compute node.
>>   --> 1. neutron changes port configuration
>>   2. subsequently, the cleanup logic is invoked
>>   3. ovs-agent drops taas flows
>>
>>  Specifically, following taas flows in br_tun are dropped:
>>  -
>>   table=35, priority=2,reg0=0x0 actions=resubmit(,36)
>>   table=35, priority=1,reg0=0x1 actions=resubmit(,36)
>>   table=35, priority=1,reg0=0x2 actions=resubmit(,37)
>>  -
>>
>>  log in q-agt.log
>>  -
>> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
>> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
>> cookie=0x0, duration=434.59s, table=35, n_packets=0, n_bytes=0,
>> idle_age=434, priority=2,reg0=0x0 actions=resubmit(,36)
>> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
>> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
>> cookie=0x0, duration=434.587s, table=35, n_packets=0, n_bytes=0,
>> idle_age=434, priority=1,reg0=0x1 actions=resubmit(,36)
>> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
>> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
>> cookie=0x0, duration=434.583s, table=35, n_packets=0, n_bytes=0,
>> idle_age=434, priority=1,reg0=0x2 actions=resubmit(,37)
>>  -
>>
>>
>>   o) Impact for TaaS:
>>
>>  Because flows in br_tun are dropped by the cleanup logic, mirrored
>>  packets will not be sent to a monitoring vm running on another host.
>>
>>  Note: Mirrored packets are still sent when both the source vm and the
>>monitoring vm are running on the same host. (not affected by
>>flows in br_tun)
>>
>>
>>   o) How to reproduce:
>>
>>  1. Start a new compute node or restart an existing compute node.
>> (Actually, restarting ovs-agent is enough.)
>>  2. Start (or restart) taas agent on the compute node.
>>  3. Deploy a vm on the compute node.
>> --> The cleanup logic drops taas flows.
>>
>>
>>   o) Workaround:
>>
>>  After a vm is deployed on a (re)started compute node, restart taas
>>  agent before creating a tap-service or tap-flow.
>>  That is, create taas flows after cleanup has been done.
>>
>>  Note that the cleanup will be done only once while an ovs-agent is
>>  running.
>>
>>
>>   o) An idea to fix:
>>
>>  1. Set "taas" stamp(*) to taas flows.
>>  2. Modify the cleanup logic in ovs-agent not to delete entries
>> stamped as "taas".
>>
>>  * Maybe a static string.
>>If we need to use a string that is generated dynamically
>>(e.g. a uuid), an API to interact with the ovs-agent is required.
>
>
> API proposal with some consideration for flow cleanup not dropping flows for
> external code is covered in the following email thread:
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html
>
> I believe you would need to adopt the extensions API once it’s in, moving
> from a setup with a separate agent for your feature to an l2 agent extension for
> taas that will run inside the OVS agent.
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Ryan Rossiter
Thanks for the review Michal! As for the bp/bug report, there are four options:

1. Tack the work on as part of bp cinder-objects
2. Make a new blueprint (bp cinder-object-fields)
3. Open a bug to handle all changes for enums/fields
4. Open a bug for each changed enum/field

Personally, I’m partial to #1, but #2 is better if you want to track this work 
separately from the other objects work. I don’t think we should go with bug 
reports because #3 will be a lot of Partial-Bug and #4 will be kinda spammy. I 
don’t know what the spec process is in Cinder compared to Nova, but this is 
nowhere near enough work to be spec-worthy.
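
To make the enum/field pattern being discussed concrete, here is a minimal,
hedged sketch of a status field that validates against a fixed set of values.
The class names (`BackupStatus`, `BackupStatusField`, `Backup`) are simplified
stand-ins, not Cinder's actual oslo.versionedobjects classes:

```python
class BackupStatus(object):
    """Centralized constants for backup status strings (illustrative)."""
    CREATING = 'creating'
    AVAILABLE = 'available'
    ERROR = 'error'
    ALL = (CREATING, AVAILABLE, ERROR)


class BackupStatusField(object):
    """A descriptor that only accepts values from BackupStatus.ALL."""

    def __set_name__(self, owner, name):
        self.name = '_' + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.name, None)

    def __set__(self, obj, value):
        # Reject anything outside the set of valid statuses up front,
        # instead of letting a typo'd string propagate through the system.
        if value not in BackupStatus.ALL:
            raise ValueError('%r is not a valid backup status' % value)
        setattr(obj, self.name, value)


class Backup(object):
    status = BackupStatusField()


backup = Backup()
backup.status = BackupStatus.CREATING  # OK: a known constant
try:
    backup.status = 'craeting'         # the typo is caught at assignment time
except ValueError as exc:
    print(exc)
```

The point is the same one made in the quoted review: hard-coded strings are
replaced by constants, and invalid values fail loudly at the point of
assignment rather than somewhere downstream.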

If this is something you or others think should be discussed in a meeting, I 
can tack it on to the agenda for tomorrow.

> On Dec 15, 2015, at 3:52 AM, Michał Dulko  wrote:
> 
> On 12/14/2015 03:59 PM, Ryan Rossiter wrote:
>> Hi everyone,
>> 
>> I have a change submitted that lays the groundwork for using custom enums 
>> and fields that are used by versioned objects [1]. These custom fields allow 
>> for verification on a set of valid values, which prevents the field from 
>> being mistakenly set to something invalid. These custom fields are best 
>> suited for StringFields that are only assigned certain exact strings (such 
>> as a status, format, or type). Some examples for Nova: PciDevice.status, 
>> ImageMetaProps.hw_scsi_model, and BlockDeviceMapping.source_type.
>> 
>> These new enums (that are consumed by the fields) are also great for 
>> centralizing constants for hard-coded strings throughout the code. For 
>> example (using [1]):
>> 
>> Instead of
>>if backup.status == ‘creating’:
>>
>> 
>> We now have
>>if backup.status == fields.BackupStatus.CREATING:
>>
>> 
>> Granted, this causes a lot of brainless line changes that make for a lot of 
>> +/-, but it centralizes a lot. In changes like this, I hope I found all of 
>> the occurrences of the different backup statuses, but GitHub search and grep 
>> can only do so much. If it turns out this gets in and I missed a string or 
>> two, it’s not the end of the world, just push up a follow-up patch to fix up 
>> the missed strings. That part of the review is not affected in any way by 
>> the RPC/object versioning.
>> 
>> Speaking of object versioning, notice that in cinder/objects/backup.py the 
>> version was updated to accommodate the new field type. The underlying data 
>> passed over RPC has not changed, but this is done for compatibility with 
>> older versions that may not have obeyed the set of valid values.
>> 
>> [1] https://review.openstack.org/#/c/256737/
>> 
>> 
>> -
>> Thanks,
>> 
>> Ryan Rossiter (rlrossit)
> 
> Thanks for starting this work with formalizing the statuses, I've
> commented on the review with a few remarks.
> 
> I think we should start a blueprint or bug report to be able to track these
> efforts.
> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [nova] Reminder: Low Priority Blueprint Review Day is Thursday 17th December 2015

2015-12-15 Thread John Garbutt
To help with the review push on Thursday, I have created a list of the
blueprint reviews that are approved, have Jenkins passing, and are
more than 50 days old.

The list is a new section at the top of the regular etherpad we use to
track the most important reviews:
https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking

A full list of approved mitaka blueprints can be seen here:
https://blueprints.launchpad.net/nova/mitaka
http://5885fef486164bb8596d-41634d3e64ee11f37e8658ed1b4d12ec.r44.cf3.rackcdn.com/release_status.html

Thanks,
johnthetubaguy



Re: [openstack-dev] [puppet] weekly meeting #63

2015-12-15 Thread Emilien Macchi


On 12/14/2015 08:55 AM, Emilien Macchi wrote:
> Hello!
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151215
> 
> See you there!

We held our meeting; you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-12-15-15.00.html

Thanks!
-- 
Emilien Macchi





Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Artem Silenkov
Hello!

We use mysql-wsrep-5.6, which is the latest for Galera.
It is based on MySQL 5.6.27,
so the JSON features are not available there yet.

Regards,
Artem Silenkov
---
MOS-Packaging

On Tue, Dec 15, 2015 at 6:06 PM, Mike Bayer  wrote:

>
>
> On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> > Hey Julien,
> >
> >>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> >
> > I believe this blueprint is about DB for OpenStack cloud (we use
> > Galera now), while here we're talking about DB backend for Fuel
> > itself. Fuel has a separate node (so called Fuel Master) and we use
> > PostgreSQL now.
> >
> >> does that mean Fuel is only going to be able to run with PostgreSQL?
> >
> > Unfortunately we are already tied to PostgreSQL. For instance, we use
> > PostgreSQL's ARRAY column type. Introducing a JSON column is one more
> > way to tighten the knots harder.
>
> actually not.  if you replace your ARRAY columns with JSON entirely,
> MySQL has JSON as well now:
> https://dev.mysql.com/doc/refman/5.7/en/json.html
>
> there's already a mostly finished PR for SQLAlchemy support in the queue.
>
>
>
> >
> > - Igor
> >
> > On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou 
> wrote:
> >> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
> >>
> >>> The things I want to notice are:
> >>>
> >>> * Currently we aren't tied up to PostgreSQL 9.3.
> >>> * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
> >>> set of JSON operations.
> >>
> >> I'm curious and have just a small side question: does that mean Fuel is
> >> only going to be able to run with PostgreSQL?
> >>
> >> I also see
> >>
> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
> >> maybe it's related?
> >>
> >> Thanks!
> >>
> >> --
> >> Julien Danjou
> >> // Free Software hacker
> >> // https://julien.danjou.info
> >
> >


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-15 Thread Mike Bayer


On 12/15/2015 07:20 AM, Igor Kalnitsky wrote:
> Hey Julien,
> 
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql
> 
> I believe this blueprint is about DB for OpenStack cloud (we use
> Galera now), while here we're talking about DB backend for Fuel
> itself. Fuel has a separate node (so called Fuel Master) and we use
> PostgreSQL now.
> 
>> does that mean Fuel is only going to be able to run with PostgreSQL?
> 
> Unfortunately we are already tied to PostgreSQL. For instance, we use
> PostgreSQL's ARRAY column type. Introducing a JSON column is one more
> way to tighten the knots harder.

actually not.  if you replace your ARRAY columns with JSON entirely,
MySQL has JSON as well now:
https://dev.mysql.com/doc/refman/5.7/en/json.html

there's already a mostly finished PR for SQLAlchemy support in the queue.
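
To illustrate the portability point, here is a hedged sketch of replacing an
ARRAY column with a JSON document column. It uses sqlite3 (whose JSON1
extension ships with modern CPython builds) purely as a stand-in backend;
PostgreSQL would query with `->>`/`jsonb` operators and MySQL 5.7+ with
`JSON_EXTRACT`. The table and column names are made up for the example:

```python
import json
import sqlite3

# A JSON column can replace a Postgres-only ARRAY column, at the cost of
# querying through the backend's JSON functions. The same logical query
# exists on Postgres, MySQL 5.7+, and sqlite's JSON1 extension.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE node (id INTEGER PRIMARY KEY, roles TEXT)')
conn.execute('INSERT INTO node (roles) VALUES (?)',
             (json.dumps(['controller', 'mongo']),))

# Pull the first element of the JSON document out in SQL,
# the way an ARRAY subscript would have been used before.
row = conn.execute(
    "SELECT json_extract(roles, '$[0]') FROM node").fetchone()
print(row[0])  # controller
```

A unified SQLAlchemy JSON type would let the ORM layer pick the right
backend-specific operator, which is what makes the swap attractive here.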



> 
> - Igor
> 
> On Tue, Dec 15, 2015 at 12:28 PM, Julien Danjou  wrote:
>> On Mon, Dec 14 2015, Igor Kalnitsky wrote:
>>
>>> The things I want to notice are:
>>>
>>> * Currently we aren't tied up to PostgreSQL 9.3.
>>> * There's a patch [2] that ties Fuel up to PostgreSQL 9.3+ by using a
>>> set of JSON operations.
>>
>> I'm curious and have just a small side question: does that mean Fuel is
>> only going to be able to run with PostgreSQL?
>>
>> I also see
>> https://blueprints.launchpad.net/fuel/+spec/openstack-ha-fuel-postgresql,
>> maybe it's related?
>>
>> Thanks!
>>
>> --
>> Julien Danjou
>> // Free Software hacker
>> // https://julien.danjou.info
> 


Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Kyle Mestery
On Tue, Dec 15, 2015 at 9:11 AM, Assaf Muller  wrote:

> SFC are going to hit this issue as well. Really any out of tree
> Neutron project that extends the OVS agent and expects things to work
> :)
>
>
Yes, this is the case.


> On Tue, Dec 15, 2015 at 9:30 AM, Ihar Hrachyshka 
> wrote:
> > Soichi Shigeta  wrote:
> >
> >>
> >>  Hi,
> >>
> >>   We found a problem where the neutron ovs-agent deletes taas flows.
> >>
> >>   o) Problem description:
> >>
> >>  Background:
> >>   In Liberty, a bug fix to drop only old flows was merged
> >>   into Neutron.
> >>   When the ovs-agent is restarted, the cleanup logic drops flow
> >>   entries unless they are stamped with agent_uuid (recorded as
> >>   a cookie).
> >>
> >>   bug: #1383674
> >>"Restarting neutron openvswitch agent causes network
> >> hiccup by throwing away all flows"
> >>https://bugs.launchpad.net/neutron/+bug/1383674
> >>
> >>   commit: 73673beacd75a2d9f51f15b284f1b458d32e992e (patch)
> >>
> >>
> https://git.openstack.org/cgit/openstack/neutron/commit/?id=73673beacd75a2d9f51f15b284f1b458d32e992e
> >>
> >>
> >>  Problem:
> >>   The cleanup runs only once, but it does not seem to run
> >>   until the port configuration is changed.
> >>
> >>   Therefore, taas flows will be deleted as follows:
> >>1. Start a new compute node or restart an existing compute node.
> >>2. Start taas agent on the compute node.
> >>   --> taas agent creates flows
> >>   (these flows are not stamped by using ovs-agent's uuid)
> >>3. Deploy a vm on the compute node.
> >>   --> 1. neutron changes port configuration
> >>   2. subsequently, the cleanup logic is invoked
> >>   3. ovs-agent drops taas flows
> >>
> >>  Specifically, following taas flows in br_tun are dropped:
> >>  -
> >>   table=35, priority=2,reg0=0x0 actions=resubmit(,36)
> >>   table=35, priority=1,reg0=0x1 actions=resubmit(,36)
> >>   table=35, priority=1,reg0=0x2 actions=resubmit(,37)
> >>  -
> >>
> >>  log in q-agt.log
> >>  -
> >>
> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
> >> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
> >> cookie=0x0, duration=434.59s, table=35, n_packets=0, n_bytes=0,
> >> idle_age=434, priority=2,reg0=0x0 actions=resubmit(,36)
> >>
> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
> >> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
> >> cookie=0x0, duration=434.587s, table=35, n_packets=0, n_bytes=0,
> >> idle_age=434, priority=1,reg0=0x1 actions=resubmit(,36)
> >>
> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
> >> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
> >> cookie=0x0, duration=434.583s, table=35, n_packets=0, n_bytes=0,
> >> idle_age=434, priority=1,reg0=0x2 actions=resubmit(,37)
> >>  -
> >>
> >>
> >>   o) Impact for TaaS:
> >>
> >>  Because flows in br_tun are dropped by the cleanup logic, mirrored
> >>  packets will not be sent to a monitoring vm running on another host.
> >>
> >>  Note: Mirrored packets are still sent when both the source vm and the
> >>monitoring vm are running on the same host. (not affected by
> >>flows in br_tun)
> >>
> >>
> >>   o) How to reproduce:
> >>
> >>  1. Start a new compute node or restart an existing compute node.
> >> (Actually, restarting ovs-agent is enough.)
> >>  2. Start (or restart) taas agent on the compute node.
> >>  3. Deploy a vm on the compute node.
> >> --> The cleanup logic drops taas flows.
> >>
> >>
> >>   o) Workaround:
> >>
> >>  After a vm is deployed on a (re)started compute node, restart taas
> >>  agent before creating a tap-service or tap-flow.
> >>  That is, create taas flows after cleanup has been done.
> >>
> >>  Note that the cleanup will be done only once while an ovs-agent is
> >>  running.
> >>
> >>
> >>   o) An idea to fix:
> >>
> >>  1. Set "taas" stamp(*) to taas flows.
> >>  2. Modify the cleanup logic in ovs-agent not to delete entries
> >> stamped as "taas".
> >>
> >>  * Maybe a static string.
> >>If we need to use a string that is generated dynamically
> >>(e.g. a uuid), an API to interact with the ovs-agent is required.
> >
> >
> > API proposal with some consideration for flow cleanup not dropping flows
> for
> > external code is covered in the following email thread:
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html
> >
> > I believe you would need to adopt the extensions API once it’s in, moving
> > from a setup with a separate agent for your feature to an l2 agent extension
> for
> > taas that will run inside the OVS agent.
> >
>

This is 

[openstack-dev] [Smaug]Application Data protection as a Service introduction

2015-12-15 Thread Eran Gampel
Hi All,

Smaug is a new OpenStack project aiming to provide full *Application
Data Protection* (e.g. DR), covering all OpenStack resources.

Our Mission Statement:

Formalize Application Data Protection in OpenStack (APIs, Services,
Plugins, …)

Be able to protect any resource in OpenStack (as well as its dependencies)

Allow Diversity of vendor solutions, capabilities and implementations
without compromising usability


*We are starting a bi-weekly IRC meeting for Smaug.*
We propose Tuesdays every two weeks (on even weeks) at 14:00 (2pm) UTC (9am
EST, 10pm Beijing) in #openstack-meeting


If you prefer a different time, let us know and we will try to adjust.

We are currently in the process of reviewing API v1.0 proposal and would
appreciate any feedback & ideas.

Proposed Smaug API v1.0:

https://review.openstack.org/#/c/244756/


https://github.com/openstack/smaug

https://launchpad.net/smaug

Our project IRC channel  #openstack-smaug

BR,

Eran


Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race conditions)?

2015-12-15 Thread John Garbutt
On 15 December 2015 at 10:03, Nikola Đipanov  wrote:
> On 12/15/2015 03:33 AM, Cheng, Yingxin wrote:
>>
>>> -Original Message-
>>> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
>>> Sent: Monday, December 14, 2015 11:11 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. 
>>> race
>>> conditions)?
>>>
>>> On 12/14/2015 08:20 AM, Cheng, Yingxin wrote:
 Hi All,



 When I was looking at bugs related to race conditions in the scheduler
 [1-3], it felt like the nova scheduler lacks sanity checks on scheduling
 decisions in different situations. We cannot even make sure
 that some fixes successfully mitigate race conditions to an acceptable
 degree. For example, there is no easy way to test whether server-group
 race conditions still exist after a fix for bug [1], or to make sure
 that after scheduling there will be no violations of allocation ratios
 reported by bug[2], or to test that the retry rate is acceptable in
 various corner cases proposed by bug[3]. And there will be much more
 in this list.



 So I'm asking whether there is a plan to add those tests in the
 future, or whether a design exists to simplify writing and executing
 those kinds of tests? I'm thinking of using fake databases and fake
 interfaces to isolate the entire scheduler service, so that we can
 easily build up a disposable environment with all kinds of fake
 resources and fake compute nodes to test scheduler behaviors. It is
 even a good way to test whether the scheduler is capable of scaling to 10k
 nodes without setting up 10k real compute nodes.
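
As a rough sketch of that fake-node idea, the following toy model isolates a
filter-scheduler-style decision from real compute services. The class and
function names (`FakeHost`, `schedule`) are invented for the example and are
not Nova's real classes; the claim check mimics the 2-phase-commit behavior
described later in this thread:

```python
import threading

class FakeHost(object):
    """An in-memory stand-in for a compute node's host state."""
    def __init__(self, name, vcpus, ratio=1.0):
        self.name = name
        self.vcpus = vcpus
        self.ratio = ratio          # allocation ratio, as in bug #1370207
        self.used = 0
        self.lock = threading.Lock()

    def claim(self, vcpus):
        """2-phase-commit style check done on the 'compute' side."""
        with self.lock:
            if self.used + vcpus > self.vcpus * self.ratio:
                return False        # would trigger a retry in the real flow
            self.used += vcpus
            return True

def schedule(hosts, vcpus):
    """Pick the least-loaded host that fits, like a toy filter scheduler."""
    candidates = [h for h in hosts
                  if h.used + vcpus <= h.vcpus * h.ratio]
    if not candidates:
        raise RuntimeError('NoValidHost')
    return min(candidates, key=lambda h: h.used)

# 10 fake nodes, many concurrent boot requests. The scheduling decision and
# the claim race against each other, but the claim enforces the invariant,
# so afterwards no host should exceed its allocation ratio.
hosts = [FakeHost('node%d' % i, vcpus=8) for i in range(10)]

def boot():
    host = schedule(hosts, vcpus=2)
    host.claim(2)   # may fail under a race, which models a retry

threads = [threading.Thread(target=boot) for _ in range(40)]
for t in threads: t.start()
for t in threads: t.join()
assert all(h.used <= h.vcpus * h.ratio for h in hosts)
```

A harness like this makes violations of allocation ratios, or an excessive
retry rate, directly assertable without standing up real compute nodes.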

>>>
>>> This would be a useful effort - however do not assume that this is going to 
>>> be an
>>> easy task. Even in the paragraph above, you fail to take into account that 
>>> in
>>> order to test the scheduling you also need to run all compute services since
>>> claims work like a kind of 2 phase commit where a scheduling decision gets
>>> checked on the destination compute host (through Claims logic), which 
>>> involves
>>> locking in each compute process.
>>>
>>
>> Yes, the final goal is to test the entire scheduling process including 2PC.
>> As the scheduler is still in the process of being decoupled, some parts such
>> as the RT and the retry mechanism are highly coupled with nova, so IMO it is
>> not a good idea to include them at this stage. Thus I'll try to isolate the
>> filter scheduler as the first step, and hope to be supported by the community.
>>
>>


 I'm also interested in the bp [4] to reduce scheduler race conditions
 at the green-thread level. I think it is a good starting point for solving
 the huge racing problem of the nova scheduler, and I really wish I could
 help on that.

>>>
>>> I proposed said blueprint but am very unlikely to have any time to work on 
>>> it this
>>> cycle, so feel free to take a stab at it. I'd be more than happy to 
>>> prioritize any
>>> reviews related to the above BP.
>>>
>>> Thanks for your interest in this
>>>
>>> N.
>>>
>>
>> Many thanks Nikola! I'm still looking at the claim logic and trying to find
>> a way to merge it with the scheduler host state; I will upload patches as
>> soon as I figure it out.
>>
>
> Great!
>
> Note that that step is not necessary - and indeed it may not be the best
> place to start. We already have code duplication between the claims and
> (what has only recently been renamed) consume_from_request, so removing
> it is a nice-to-have but really not directly related to fixing the races.
>
> Also, after Sylvain's work here https://review.openstack.org/#/c/191251/
> it will be trickier to do, as the scheduler side now uses the RequestSpec
> object instead of Instance, which is not sent over to compute nodes.
>
> I'd personally leave that for last.

I would recommend you attend the scheduler sub team meetings, if at
all possible, or track what is discussed there:
http://eavesdrop.openstack.org/#Nova_Scheduler_Team_Meeting

There is a rough outline of the current direction of the scheduler work:
http://docs.openstack.org/developer/nova/scheduler_evolution.html
As ever, that's a little out of date right now, and doesn't capture all
the discussions around moving claims into the scheduler.

Thanks,
johnthetubaguy

> M.
>
>>




 [1] https://bugs.launchpad.net/nova/+bug/1423648

 [2] https://bugs.launchpad.net/nova/+bug/1370207

 [3] https://bugs.launchpad.net/nova/+bug/1341420

 [4]
 https://blueprints.launchpad.net/nova/+spec/host-state-level-locking





 Regards,

 -Yingxin

>>
>>
>>
>> Regards,
>> -Yingxin
>>

Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-15 Thread Bulat Gaifullin
+1

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 15 Dec 2015, at 22:19, Andrew Maksimov  wrote:
> 
> +1
> 
> Regards,
> Andrey Maximov
> Fuel Project Manager
> 
> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin  > wrote:
> Folks
> 
> This email is a proposal to postpone the removal of Docker containers from 
> the master node until after 8.0 HCF. 
> 
> Here is why I propose to do so.
> 
> Removal of Docker is a rather invasive change and may introduce a lot of 
> regressions. It may well affect how bugs are fixed - we might have 2 ways 
> of fixing them, and during SCF of 8.0 this may affect the velocity of bug 
> fixing, as you need to fix bugs in master prior to fixing them in stable 
> branches. This may actually significantly slow our bugfixing pace and put 
> the 8.0 GA release at risk.
> 
>  
> 
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru 
> vkuk...@mirantis.com 


Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Anna Kamyshnikova
Sorry that I didn't see this earlier. Yes, cookies have integer values, so
we won't be able to set a string there. Maybe we can have a reserved integer
cookie value for each project, like all "1"s.

I don't support the idea of modifying the cleanup logic not to drop 0x0
cookies. During the implementation of graceful restart such flows were not
dropped at first, but I got rid of that, as having a lot of flows not related
to anything was not desirable, so we should try to avoid it here, too.

On Wed, Dec 16, 2015 at 7:46 AM, Soichi Shigeta <
shigeta.soi...@jp.fujitsu.com> wrote:

>
>o) An idea to fix:
>>
>>   1. Set "taas" stamp(*) to taas flows.
>>   2. Modify the cleanup logic in ovs-agent not to delete entries
>>  stamped as "taas".
>>
>>   * Maybe a static string.
>> If we need to use a string which generated dynamically
>> (e.g. uuid), API to interact with ovs-agent is required.
>>
>>
>   Last week I proposed setting a static string (e.g. "taas") as the cookie
>   of flows created by the taas agent.
>
>   But I found that the value of a cookie should not be a string,
>   but an integer.
>
>   At line 187 in
> "neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py":
>   self.agent_uuid_stamp = uuid.uuid4().int & UINT64_BITMASK
>
>   If we set an integer value as the cookie, coordination
>   (reservation of ranges) is required to avoid conflicts of cookies with
>   other neutron sub-projects.
>
>   As an alternative (*** short term ***) solution, my idea is to
>   modify the cleanup logic in the ovs agent not to delete flows whose
>   "cookie = 0x0".
>   Because flows created by the ovs agent carry its stamp, "cookie =
>   0x0" means the flow was created by something other than the ovs agent.
>
>   # But, this idea has a disadvantage:
> If there are flows that were created by an older version of the ovs
> agent, they cannot be cleaned up.
>
>
> ---
>  Soichi Shigeta
>
>
>
>
>
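
For illustration, the stamping and reserved-cookie idea from the quoted
message can be sketched as follows. The stamp derivation mirrors the line
quoted from ovs_neutron_agent.py; the reserved taas cookie value and the
flow/cleanup representation are hypothetical, since (as noted above) a real
reserved value would need coordination between sub-projects:

```python
import uuid

UINT64_BITMASK = (1 << 64) - 1

# How the ovs agent derives its per-run stamp (mirrors the quoted line),
# plus one hypothetical reserved cookie a sub-project such as taas could
# claim so that cleanup knows to spare its flows.
agent_uuid_stamp = uuid.uuid4().int & UINT64_BITMASK
TAAS_COOKIE = 0x1  # illustrative reserved value, not an agreed-upon one

flows = [
    {'table': 35, 'cookie': agent_uuid_stamp},  # current agent's flow
    {'table': 35, 'cookie': 0x0},               # stale pre-stamp flow
    {'table': 35, 'cookie': TAAS_COOKIE},       # taas-installed flow
]

def cleanup(flows, stamp, reserved=(TAAS_COOKIE,)):
    """Keep flows carrying the live stamp or a reserved cookie; drop the rest."""
    return [f for f in flows
            if f['cookie'] == stamp or f['cookie'] in reserved]

surviving = cleanup(flows, agent_uuid_stamp)
# The stale 0x0 flow is dropped; the taas flow survives the restart.
assert all(f['cookie'] != 0x0 for f in surviving)
```

This keeps the graceful-restart property (stale 0x0 flows still get dropped)
while exempting flows from a cooperating sub-project.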



-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


[openstack-dev] [tricircle] weekly meeting of Dec.16

2015-12-15 Thread joehuang
Hi,

Let’s have our regular meeting today, starting at UTC 1300 in #openstack-meeting

Agenda:
Progress of To-do list: https://etherpad.openstack.org/p/TricircleToDo
Discussion of stateless design.


Best Regards
Chaoyi Huang ( Joe Huang )

From: joehuang [mailto:joehu...@huawei.com]
Sent: Tuesday, December 08, 2015 1:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [tricircle]Stateless design proposal for Tricircle 
project

Hi,

Managing multiple instances of OpenStack is a headache. Each OpenStack 
instance is an individual silo, with its separate resources, networks, images, etc.

In Tricircle, the project aiming to address this headache, a Top (aka cascading) 
minimalist "OpenStack instance" manages multiple Bottom (aka cascaded) 
OpenStack instances. The Top exposes the OpenStack API so the whole 
ecosystem built upon the OpenStack API can be embraced. This model and its value 
have been verified in several production clouds.

A stateless design for the Tricircle top minimalist "OpenStack instance"
has just been proposed in the doc [1]:

The stateless design introduces several components that are fully decoupled
from OpenStack services like Nova and Cinder; the Tricircle plugin will work
just like the OVN and ODL plugins in the Neutron project. The design also
tries to remove the uuid mapping and status synchronization challenges.

•  Admin API
manages sites (bottom OpenStack instances) and availability zone mappings
retrieves object uuid routing
exposes an API for maintenance
•  Nova API-GW
an standalone web service to receive all nova api request, and routing the 
request to regarding bottom OpenStack according to Availability Zone ( during 
creation ) or resource id ( during operation and query ).
work as stateless service, and could run with processes distributed in 
multi-hosts.
•  Cinder API-GW
a standalone web service that receives all Cinder API requests and routes each
request to the corresponding bottom OpenStack instance according to
availability zone (during creation) or resource id (during operation and
query)
works as a stateless service and can run as processes distributed across
multiple hosts
•  XJob
receives and processes cross-OpenStack functionality and other asynchronous
jobs from the message bus. For example, when booting a VM for the first time
in a project, the router, security group rules, FIP and other resources may
not yet have been created in the bottom site, even though they are required.
Unlike resources such as the network, security group and ssh key, which must
exist before a VM boots, these resources can be created asynchronously to
accelerate the response to the first VM boot request
cross-OpenStack networking will also be done in asynchronous jobs
any of the Admin API, Nova API-GW, Cinder API-GW or Neutron Tricircle plugin
can send an asynchronous job to XJob through the message bus, using the RPC
API provided by XJob
•  Neutron Tricircle plugin
just like the OVN and ODL Neutron plugins, the Tricircle plugin serves
multi-site networking purposes, including interaction with a DCI SDN
controller. It will use the ML2 mechanism driver interface to call the DCI
SDN controller, especially for cross-OpenStack provider multi-segment L2
networking
•  DB
Tricircle can have its own database to store sites, fake nodes, availability
zone mappings, jobs, and the resource routing table
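
As a rough sketch of the stateless API-GW routing idea above (all names and
endpoints here are illustrative assumptions, not actual Tricircle code):
requests are routed by availability zone at creation time, and by a persisted
resource-id routing entry afterwards.

```python
# Illustrative sketch only.  In the real design the mappings would live in
# the Tricircle DB, which is what keeps the API-GW processes stateless.
AZ_TO_SITE = {"az1": "http://bottom1:8774", "az2": "http://bottom2:8774"}
RESOURCE_ROUTING = {}  # resource id -> bottom site endpoint

def route_create(availability_zone):
    """During creation, pick the bottom site by availability zone."""
    return AZ_TO_SITE[availability_zone]

def record_resource(resource_id, site):
    """Remember where a resource was created, for later operations."""
    RESOURCE_ROUTING[resource_id] = site

def route_existing(resource_id):
    """During operation/query, route by the stored resource id mapping."""
    return RESOURCE_ROUTING[resource_id]
```

Because this routing state lives in the database rather than in the service
process, any number of API-GW processes distributed across hosts can serve
requests.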

A PoC of this idea is planned on the experiment branch of Tricircle [2][4];
once the results give us positive feedback, the work will be moved to the
master branch.

Welcome to join the adventure and contribute: reviews, design, writing source
code, maintaining infrastructure, testing, bug fixing, the weekly
meeting [3]... all the work is just starting [4].

[1] design doc: 
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g
[2] Stateless design branch: 
https://github.com/openstack/tricircle/tree/experiment
[3] weekly meeting: #openstack-meeting, every Wednesday starting at
13:00 UTC
[4] To do list is in the etherpad: 
https://etherpad.openstack.org/p/TricircleToDo

Best Regards
Chaoyi Huang ( Joe Huang )

From: joehuang
Sent: Wednesday, December 02, 2015 2:37 PM
To: 'Zhipeng Huang'; OpenStack Development Mailing List (not for usage 
questions); caizhiyuan (A); Irena Berezovsky; Orran Krieger; Mohammad 
Badruzzaman; 홍석찬
Subject: [openstack-dev][tricircle] weekly meeting of Dec.2

Hi,

Let's have our regular meeting today starting at 13:00 UTC in #openstack-meeting.

The networking proposal has been updated in the document, and a proposal for
a stateless PoC has also been added to the doc.
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit?usp=sharing

Best Regards
Chaoyi Huang ( Joe Huang )

From: Zhipeng Huang [mailto:zhipengh...@gmail.com]
Sent: Wednesday, November 25, 2015 5:44 PM
To: OpenStack Development Mailing List (not for usage questions); joehuang; 
caizhiyuan (A); Irena Berezovsky; Orran Krieger; Mohammad Badruzzaman; 홍석찬
Subject: Re: [openstack-dev][tricircle]Tokyo Summit 

Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-15 Thread Clint Byrum
Excerpts from James Penick's message of 2015-12-15 17:19:19 -0800:
> > getting rid of the raciness of ClusteredComputeManager in my
> >current deployment. And I'm willing to help other operators do the same.
> 
>  You do alleviate the race, but at the cost of complexity and
> unpredictability.  Breaking that down, let's say we go with the current
> plan and the compute host abstracts hardware specifics from Nova.  The
> compute host will report (sum of resources)/(sum of managed compute).  If
> the hardware beneath that compute host is heterogeneous, then the resources
> reported up to nova are not correct, and that really does have significant
> impact on deployers.
> 
>  As an example: Let's say we have 20 nodes behind a compute process.  Half
> of those nodes have 24T of disk, the other have 1T.  An attempt to schedule
> a node with 24T of disk will fail, because Nova scheduler is only aware of
> 12.5T of disk.
> 
>  Ok, so one could argue that you should just run two compute processes per
> type of host (N+1 redundancy).  If you have different raid levels on two
> otherwise identical hosts, you'll now need a new compute process for each
> variant of hardware.  What about host aggregates or availability zones?
> This sounds like an N^2 problem.  A mere 2 host flavors spread across 2
> availability zones means 8 compute processes.
> 
> I have hundreds of hardware flavors, across different security, network,
> and power availability zones.
> 
> >None of this precludes getting to a better world where Gantt actually
> >exists, or the resource tracker works well with Ironic.
> 
> It doesn't preclude it, no. But Gantt is dead[1], and I haven't seen any
> movement to bring it back.
> 
> >It just gets us to an incrementally better model in the meantime.
> 
>  I strongly disagree. Will Ironic manage its own concept of availability
> zones and host aggregates?  What if nova changes their model, will Ironic
> change to mirror it?  If not I now need to model the same topology in two
> different ways.
> 

Yes and yes?

How many matryoshka dolls can there possibly be in there anyway?

In all seriousness, I don't think it's unreasonable to say that something
that wants to create its own reasonable facsimile of Nova's scheduling
and resource tracking would need to implement the whole interface,
and would in fact need to continue to follow that interface over time.



Re: [openstack-dev] [Fuel] Proposal to Delay Docker Removal From Fuel Master Node

2015-12-15 Thread Vladimir Kozhukalov
-1

We already discussed this and made a decision to move stable branch creation
from HCF to SCF. There were reasons for this. We agreed that once the stable
branch is created, master becomes open for new features. Let's avoid
discussing this again.

Vladimir Kozhukalov

On Wed, Dec 16, 2015 at 9:55 AM, Bulat Gaifullin 
wrote:

> +1
>
> Regards,
> Bulat Gaifullin
> Mirantis Inc.
>
>
>
> On 15 Dec 2015, at 22:19, Andrew Maksimov  wrote:
>
> +1
>
> Regards,
> Andrey Maximov
> Fuel Project Manager
>
> On Tue, Dec 15, 2015 at 9:41 PM, Vladimir Kuklin 
> wrote:
>
>> Folks
>>
>> This email is a proposal to push the removal of the Docker containers from
>> the master node to a date beyond the 8.0 HCF.
>>
>> Here is why I propose to do so.
>>
>> Removal of Docker is a rather invasive change and may introduce a lot of
>> regressions. It may well affect how bugs are fixed - we might have 2
>> ways of fixing them, and during SCF of 8.0 this may affect the velocity of
>> bug fixing, as you need to fix bugs in master prior to fixing them in stable
>> branches. This may actually significantly slow down our bugfixing pace and
>> put the 8.0 GA release at risk.
>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com 
>> www.mirantis.ru
>> vkuk...@mirantis.com
>>