Re: [openstack-dev] neutron metadata-agent HA

2015-12-14 Thread Fox, Kevin M
HA and DVR don't play nicely together today, though, and for our use case, if given the 
choice, DVR looks more appealing than L3 HA. Thanks for the advice. We may just 
go with the Pacemaker option, with DVR taking most of the load off of the 
network node, until L3 HA & DVR all play nice.

Thanks,
Kevin

From: Assaf Muller [amul...@redhat.com]
Sent: Monday, December 14, 2015 2:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] neutron metadata-agent HA

If you're not running HA routers, you have a couple of options that
I'm aware of:

1) Use multiple L3 agents in A/A, and enable
neutron.conf:allow_automatic_l3agent_failover. In this case you'd
enable the metadata agent on each node. There are pros and cons to this
approach vs. HA routers: significantly slower failover (hours instead
of seconds, depending on the number of routers) and reliance on the control
plane for a successful failover, but it is simpler, with less room for bugs.
I recommend HA routers, but I'm biased.
2) Use Pacemaker or similar to manage a cluster (Or clusters) of
network nodes in A/P, in which case all four Neutron agents (L2,
metadata, DHCP, L3) are enabled on only one machine in a cluster at a
time. This is fairly out of date at this point.
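Option 1 hinges on a single neutron.conf flag. A minimal sketch of the relevant config (section and default names as commonly documented; verify against your release's configuration reference):

```ini
# /etc/neutron/neutron.conf on the node(s) running neutron-server
[DEFAULT]
# Reschedule routers away from an L3 agent that is detected as dead,
# enabling the A/A multiple-L3-agent failover described above.
allow_automatic_l3agent_failover = true
```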

On Mon, Dec 14, 2015 at 12:33 PM, Fox, Kevin M  wrote:
> What about the case where you're not running HA routers? Should you still
> run more than one?
>
> Thanks,
> Kevin
> 
> From: Assaf Muller [amul...@redhat.com]
> Sent: Saturday, December 12, 2015 12:44 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] neutron metadata-agent HA
>
> The neutron metadata agent is stateless. It takes requests from the
> metadata proxies running in the router namespaces and passes the
> requests on to the nova server. If you're using HA routers, start the
> neutron-metadata-agent on every machine where the L3 agent runs, just
> make sure that the metadata-agent is restarted in case it crashes, and
> you're done. Nothing else you need to do.
>
> On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
>  wrote:
>>
>> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
>> wrote:
>>
>> So my question is: is there any progress on this topic? Is there a way
>> (something like a cron job script) to make the metadata-agent redundant
>> without involving the clustering software Pacemaker/Corosync?
>>
>>
>> Any reason for such a dirty solution instead of relying on Pacemaker?
>>
>> I’m not aware of such initiatives - I just checked the blueprints in Neutron
>> and found none relevant. I can suggest filing a proposal on the
>> corresponding Launchpad page, elaborating your idea.
>>
>> F.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-14 Thread Vilobh Meshram
Hi All,

Currently, it is possible to create an unlimited number of resources such as
bays, pods, and services. In Magnum, there should be a limit on how many
resources a user or project can create, and the limit should be
configurable [1].

I propose the following design:

1. Introduce a new table, magnum.quotas:
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+

resource can be Bay, Pod, Container, etc.


2. An API controller for quotas will be created to make sure basic CLI commands
work:

quota-show, quota-delete, quota-create, quota-update

3. When the admin specifies a quota of X resources, the code should abide
by it. For example, if the hard limit for Bay is 5 (i.e. a project can have
at most 5 Bays) and a user in that project tries to exceed it, the request
won't be allowed. The same goes for other resources.

4. Please note that quota validation only works for resources created via
Magnum. I could not think of a way for Magnum to know whether COE-specific
utilities created a resource in the background. One approach could be to
compare what is stored in magnum.quotas with the actual resources created
for a particular bay in the k8s/COE.

5. Introduce a config variable to set quota values.

If everyone agrees, I will start the changes by introducing quota restrictions
on Bay creation.
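As an illustration of point 3, the enforcement check might look something like the sketch below. This is a hypothetical helper, not Magnum code; it only assumes the per-project hard_limit from the magnum.quotas table above, with a missing row meaning "unlimited":

```python
class QuotaExceeded(Exception):
    """Raised when a create request would exceed the project's quota."""
    pass


def enforce_quota(hard_limits, current_counts, project_id, resource, requested=1):
    """Reject a create request that would push the project past its
    configured hard_limit for this resource type (no entry = unlimited)."""
    limit = hard_limits.get((project_id, resource))
    if limit is None:
        return  # no quota row for this project/resource: unlimited
    used = current_counts.get((project_id, resource), 0)
    if used + requested > limit:
        raise QuotaExceeded('%s quota exceeded for project %s: %d used, limit %d'
                            % (resource, project_id, used, limit))


# Example: hard_limit of 5 Bays for a project, as in point 3 above.
limits = {('proj-a', 'Bay'): 5}
enforce_quota(limits, {('proj-a', 'Bay'): 4}, 'proj-a', 'Bay')  # 5th Bay: allowed
```

The real implementation would read the limit and the current count from the database inside one transaction to avoid races between concurrent creates.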

Thoughts ??


-Vilobh

[1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Regarding v2 LoadBalancer's status(es)

2015-12-14 Thread Brandon Logan
Hi Bryan,

On Mon, 2015-12-14 at 15:19 -0600, Bryan Jones wrote:
> Hi All,
> 
> I had a few issues/questions regarding the statuses
> (provisioning_status and operating_status) of a v2 LoadBalancer. To
> preface these, I am working on the LBaaS v2 support in Heat.
> 
> The first question regards the allowed values for each of
> provisioning_status and operating status. Here it seems the
> documentation is ambiguous. [1] provides a list of possible statuses,
> but does not mention if they are options for provisioning_status or
>  operating_status. [2] provides much clearer options for each status,
> but does not show the INACTIVE status mention in [1]. Should INACTIVE
> be included in the possible options for one of the statuses, or should
> it be removed from [1] altogether?

Yeah this needs to be better documented.  I would say all of those
statuses in the docs pertain to provisioning_status, except for
INACTIVE, which I'm actually not sure where that is being used.  I have
to plead ignorance on this.  I was initially thinking operating_status
but I don't see it being used.  So that probably needs to just be pulled
out of the docs entirely.  The operating_status statuses are listed in
code here [1].  They are pretty self-explanatory, except for maybe
DEGRADED.  DEGRADED basically means that one or more of its descendants
are in an OFFLINE operating_status.  NO_MONITOR means no health monitor
so operating_status can't be evaluated.  DISABLED means admin_state_up
on that entity is set to False.
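The DEGRADED rule described here can be sketched as a small aggregation function. The status values mirror the constants file referenced as [1] below; the helper itself is illustrative, not neutron-lbaas code, and ignores the NO_MONITOR/DISABLED cases for brevity:

```python
# Operating-status values, mirroring neutron_lbaas's constants module [1].
ONLINE = 'ONLINE'
OFFLINE = 'OFFLINE'
DEGRADED = 'DEGRADED'


def loadbalancer_operating_status(descendant_statuses):
    """DEGRADED when one or more descendants (listeners, pools, members)
    are OFFLINE; otherwise the load balancer itself is ONLINE."""
    if any(s == OFFLINE for s in descendant_statuses):
        return DEGRADED
    return ONLINE


print(loadbalancer_operating_status([ONLINE, ONLINE]))   # ONLINE
print(loadbalancer_operating_status([ONLINE, OFFLINE]))  # DEGRADED
```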

> 
> Second, [1] also mentions that an error_details attribute will be
> provided if the status is ERROR. I do not see any error_details
> attribute in the LoadBalancer code [3], so I am wondering where that
> attribute comes from?

This is actually something that was in v1 (status_description) that we
have not added to v2.  It would be nice to have, but it's not there yet.
The docs should be updated to remove this.
> 
> Finally, I'm curious what operations can be performed on the
> LoadBalancer if the operating_status is OFFLINE and the
> provisioning_status is ACTIVE. First is this state possible? And
> second, can the LoadBalancer be manipulated (i.e. add a Listener to
> the LoadBalancer) if it is in this state?

Operations on a load balancer are only restricted based on the
provisioning_status.  operating_status is purely for information.  If
the load balancer's provisioning status is ACTIVE then you can do any
operation on it, regardless of operating_status.

I don't know of a current scenario where an ACTIVE/OFFLINE status is
actually possible for a load balancer, but a driver could decide to do
that, though I'd like to understand that use case first.

> 
> [1]
> http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0
> [2]
> http://developer.openstack.org/api-ref-networking-v2-ext.html#showLoadBalancerv2
> [3]
> https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/data_models.py#L503
> 
> Thanks,
> 
> BRYAN JONES
> Software Engineer - OpenStack Development
> 
> ___
> Phone: 1-507-253-2620
> E-mail: jone...@us.ibm.com
> Find me on: LinkedIn:
> http://www.linkedin.com/in/bjones17/
> IBM
> 
>   3605 Hwy 52 N
>Rochester, MN 55901-1407
>   United States
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[1]
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/constants.py#L100

Thanks,
Brandon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-14 Thread Joshua Harlow
A question: what filtering/scheduling would be done in which place? 
Any thoughts on the breakup between Nova and Ironic?


If, say, Ironic knows about all baremetal resources and Nova doesn't know 
about them, then what kind of decisions can Nova make during scheduling? 
I guess the same question exists for other clustered drivers: what 
decision does Nova really make for those types of drivers, and is that 
decision beneficial?


I guess the same question extends to various/most filters and how 
they operate with clustered drivers:


For example, if Nova doesn't know about Ironic baremetal resources, how 
do the concepts of an availability zone or aggregate, or compute 
enabled/disabled filtering, work? (All of these, AFAIK, are tied to the 
nova-compute *service* and/or the services table, but with this clustering 
model, which nova-compute proxies a request into Ironic doesn't seem to 
mean that much.)


Has anyone compiled (or thought about compiling) a list of concepts from 
Nova that *appear to* break down when a top-level project (Nova) doesn't 
know about the resources its child projects (Ironic, ...) contain? (Maybe 
an etherpad exists somewhere?)


Dan Smith wrote:

Thanks for summing this up, Deva. The planned solution still gets my
vote; we build that, deprecate the old single compute host model where
nova handles all scheduling, and in the meantime figure out the gaps
that operators need filled and the best way to fill them.


Mine as well, speaking only for myself. It's going to require some
deprecation and transition, but anyone with out-of-tree code (filters,
or otherwise) has to be prepared for that at any moment.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [keystone] Is "domain" a mapping to real-world cloud tenant?

2015-12-14 Thread darren wang
Hi Dolph,

 

 Here it is, http://profsandhu.com/confrnc/misconf/nss14-preprint-bo.pdf

 

 You may have a look at it and see if it’s reasonable.

 

Darren

 

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: 15 December 2015 6:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Is "domain" a mapping to real-world cloud 
tenant?

 

Unfortunately, "tenancy" has multiple definitions in our world, so let me try to 
clarify further! Do you have a link to that paper?

 

Tenants (v2) and projects (v3) have a history of serving to isolate the 
resources (VMs, networks, etc.) of multiple tenants. They literally provide for 
multitenancy.

 

Domains exist at a higher level, and actually (unfortunately) serve multiple 
purposes.

 

The first is as a container for multiple tenants/projects - think of 
domains as the billable entity in a public cloud. A single domain might be 
responsible for deploying multiple departments' or projects' resources in the 
cloud (each of which requires multi-tenant isolation, and thus has many 
tenants/projects).

 

The second purpose is that of authorization -- in keystone, you might need 
domain-level authorization to create projects and assign roles. The same might 
apply to domain-specific quotas, domain-specific policies, and other 
domain-level concerns.

 

Lastly, domains serve as namespaces for users and groups (identity / 
authentication) within keystone itself. They are analogous to identity 
providers in that regard.

 

Hope this helps!

 

On Mon, Dec 14, 2015 at 2:56 AM, darren wang wrote:

Hi,

 

I am wondering whether “domain” is a mapping to a real-world cloud tenant (not 
the counterpart of “project” in the v2 Identity API), because recently I read a 
paper that describes “domain” as a fit for the abstract concept of a “cloud tenant”. 
Does this stay in line with the community’s intent?

 

Thanks!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-14 Thread Jim Rollenhagen
On Mon, Dec 14, 2015 at 04:15:42PM -0800, James Penick wrote:
> I'm very much against it.
> 
>  In my environment we're going to be depending heavily on the nova
> scheduler for affinity/anti-affinity of physical datacenter constructs,
> TOR, Power, etc. Like other operators we need to also have a concept of
> host aggregates and availability zones for our baremetal as well. If these
> decisions move out of Nova, we'd have to replicate that entire concept of
> topology inside of the Ironic scheduler. Why do that?
> 
> I see there are 3 main problems:
> 
> 1. Resource tracker sucks for Ironic.
> 2. We need compute host HA
> 3. We need to schedule compute resources in a consistent way.
> 
> We've been exploring options to get rid of RT entirely. However, melwitt
> suggested that by improving RT itself, and changing it from a pull
> model to a push, we skip a lot of these problems. I think it's an excellent
> point. If RT moves to a push model, Ironic can dynamically register nodes
> as they're added, consumed, claimed, etc and update their state in Nova.
> 
>  Compute host HA is critical for us, too. However, if the compute hosts are
> not responsible for any complex scheduling behaviors, it becomes much
> simpler to move the compute hosts to being nothing more than dumb workers
> selected at random.
> 
>  With this model, the Nova scheduler can still select compute resources in
> the way that it expects, and deployers can expect to build one system to
> manage VM and BM. We get rid of RT race conditions, and gain compute HA.

Right, so Deva mentioned this here. Copied from below:

> > > Some folks are asking us to implement a non-virtualization-centric
> > > scheduler / resource tracker in Nova, or advocating that we wait for the
> > > Nova scheduler to be split-out into a separate project. I do not believe
> > > the Nova team is interested in the former, I do not want to wait for the
> > > latter, and I do not believe that either one will be an adequate solution
> > > -- there are other clients (besides Nova) that need to schedule workloads
> > > on Ironic.

And I totally agree with him. We can rewrite the resource tracker, or we
can break out the scheduler. That will take years - what do you, as an
operator, plan to do in the meantime? As an operator of ironic myself,
I'm willing to eat the pain of figuring out what to do with my
out-of-tree filters (and cells!), in favor of getting rid of the
raciness of ClusteredComputeManager in my current deployment. And I'm
willing to help other operators do the same.

We've been talking about this for close to a year already - we need
to actually do something. I don't believe we can do this in a
reasonable timeline *and* make everybody (ironic devs, nova devs, and
operators) happy. However, as we said elsewhere in the thread, the old
model will go through a deprecation process, and we can wait to remove
it until we do figure out the path forward for operators like yourself.
Then operators that need out-of-tree filters and the like can keep doing
what they're doing, while they help us (or just wait) to build something
that meets everyone's needs.

None of this precludes getting to a better world where Gantt actually
exists, or the resource tracker works well with Ironic. It just gets us
to an incrementally better model in the meantime.

If someone has a *concrete* proposal (preferably in code) for an alternative
that can be done relatively quickly and also keep everyone happy here, I'm
all ears. But I don't believe one exists at this time, and I'm inclined
to keep rolling forward with what we've got here.

// jim

> 
> -James
> 
> On Thu, Dec 10, 2015 at 4:42 PM, Jim Rollenhagen 
> wrote:
> 
> > On Thu, Dec 10, 2015 at 03:57:59PM -0800, Devananda van der Veen wrote:
> > > All,
> > >
> > > I'm going to attempt to summarize a discussion that's been going on for
> > > over a year now, and still remains unresolved.
> > >
> > > TLDR;
> > > 
> > >
> > > The main touch-point between Nova and Ironic continues to be a pain
> > point,
> > > and despite many discussions between the teams over the last year
> > resulting
> > > in a solid proposal, we have not been able to get consensus on a solution
> > > that meets everyone's needs.
> > >
> > > Some folks are asking us to implement a non-virtualization-centric
> > > scheduler / resource tracker in Nova, or advocating that we wait for the
> > > Nova scheduler to be split-out into a separate project. I do not believe
> > > the Nova team is interested in the former, I do not want to wait for the
> > > latter, and I do not believe that either one will be an adequate solution
> > > -- there are other clients (besides Nova) that need to schedule workloads
> > > on Ironic.
> > >
> > > We need to decide on a path of least pain and then proceed. I really want
> > > to get this done in Mitaka.
> > >
> > >
> > > Long version:
> > > -
> > >
> > > During Liberty, Jim and I worked with 

Re: [openstack-dev] [docs][stable][ironic] Stable branch docs

2015-12-14 Thread Tony Breeds
On Mon, Dec 14, 2015 at 06:42:13AM -0800, Jim Rollenhagen wrote:
> Hi all,
> 
> In the big tent, project teams are expected to maintain their own
> install guides within their projects' source tree. There's a
> conversation going on over in the docs list[1] about changing this, but
> in the meantime...
> 
> Ironic (and presumably other projects) publish versioned documentation,
> which includes the install guide. For example, our kilo install guide is
> here[2]. However, there's no way to update those, as stable branch
> policy[3] only allows for important bug fixes to be backported. For
> example, this patch[4] was blocked for this reason (among others).

The stable guide[1] was recently changed[2] to allow just this thing,
essentially at the discretion of the stable teams, both stable-maint-core and
project-specific.

So I'd hazard a guess that if the patch you point out were to follow the
documented procedure (a clean backport with matching Change-Id, done with
cherry-pick -x)[3], it would come down to Ironic to decide whether it was
a good candidate.

[1] 
http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
[2] https://review.openstack.org/#/c/247415/1
[3] https://wiki.openstack.org/wiki/StableBranch#Processes [4]
[4] Yes this should be part of the project-team-guide, /me fixes that.

Yours Tony.




Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-14 Thread James Penick
I'm very much against it.

 In my environment we're going to be depending heavily on the nova
scheduler for affinity/anti-affinity of physical datacenter constructs,
TOR, Power, etc. Like other operators we need to also have a concept of
host aggregates and availability zones for our baremetal as well. If these
decisions move out of Nova, we'd have to replicate that entire concept of
topology inside of the Ironic scheduler. Why do that?

I see there are 3 main problems:

1. Resource tracker sucks for Ironic.
2. We need compute host HA
3. We need to schedule compute resources in a consistent way.

 We've been exploring options to get rid of RT entirely. However, melwitt
suggested that by improving RT itself, and changing it from a pull
model to a push, we skip a lot of these problems. I think it's an excellent
point. If RT moves to a push model, Ironic can dynamically register nodes
as they're added, consumed, claimed, etc and update their state in Nova.

 Compute host HA is critical for us, too. However, if the compute hosts are
not responsible for any complex scheduling behaviors, it becomes much
simpler to move the compute hosts to being nothing more than dumb workers
selected at random.

 With this model, the Nova scheduler can still select compute resources in
the way that it expects, and deployers can expect to build one system to
manage VM and BM. We get rid of RT race conditions, and gain compute HA.
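The pull-to-push distinction above can be sketched in a few lines. This is an illustrative toy, not Nova or Ironic code: in a pull model the tracker polls every node on a timer, while in a push model each node (or its driver) reports its own state changes as they happen, so the tracker's view is updated immediately:

```python
class PushResourceTracker(object):
    """Toy push-model tracker: nodes report state changes to it,
    rather than it periodically polling every node."""

    def __init__(self):
        self.nodes = {}  # node_id -> last reported state

    def report(self, node_id, state):
        """Called by a node whenever it is added, claimed, consumed,
        or released - no polling interval, no stale snapshot."""
        self.nodes[node_id] = state


tracker = PushResourceTracker()
tracker.report('node-1', 'available')  # node registers itself on add
tracker.report('node-1', 'claimed')    # state change pushed immediately
```

In the pull model, a claim made between two polling cycles is invisible until the next poll, which is one source of the race conditions mentioned above.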

-James

On Thu, Dec 10, 2015 at 4:42 PM, Jim Rollenhagen 
wrote:

> On Thu, Dec 10, 2015 at 03:57:59PM -0800, Devananda van der Veen wrote:
> > All,
> >
> > I'm going to attempt to summarize a discussion that's been going on for
> > over a year now, and still remains unresolved.
> >
> > TLDR;
> > 
> >
> > The main touch-point between Nova and Ironic continues to be a pain
> point,
> > and despite many discussions between the teams over the last year
> resulting
> > in a solid proposal, we have not been able to get consensus on a solution
> > that meets everyone's needs.
> >
> > Some folks are asking us to implement a non-virtualization-centric
> > scheduler / resource tracker in Nova, or advocating that we wait for the
> > Nova scheduler to be split-out into a separate project. I do not believe
> > the Nova team is interested in the former, I do not want to wait for the
> > latter, and I do not believe that either one will be an adequate solution
> > -- there are other clients (besides Nova) that need to schedule workloads
> > on Ironic.
> >
> > We need to decide on a path of least pain and then proceed. I really want
> > to get this done in Mitaka.
> >
> >
> > Long version:
> > -
> >
> > During Liberty, Jim and I worked with Jay Pipes and others on the Nova
> team
> > to come up with a plan. That plan was proposed in a Nova spec [1] and
> > approved in October, shortly before the Mitaka summit. It got significant
> > reviews from the Ironic team, since it is predicated on work being done
> in
> > Ironic to expose a new "reservations" API endpoint. The details of that
> > Ironic change were proposed separately [2] but have deadlocked.
> Discussions
> > with some operators at and after the Mitaka summit have highlighted a
> > problem with this plan.
> >
> > Actually, more than one, so to better understand the divergent viewpoints
> > that result in the current deadlock, I drew a diagram [3]. If you haven't
> > read both the Nova and Ironic specs already, this diagram probably won't
> > make sense to you. I'll attempt to explain it a bit with more words.
> >
> >
> > [A]
> > The Nova team wants to remove the (Host, Node) tuple from all the places
> > that this exists, and return to scheduling only based on Compute Host.
> They
> > also don't want to change any existing scheduler filters (especially not
> > compute_capabilities_filter) or the filter scheduler class or plugin
> > mechanisms. And, as far as I understand it, they're not interested in
> > accepting a filter plugin that calls out to external APIs (eg, Ironic) to
> > identify a Node and pass that Node's UUID to the Compute Host.  [[ nova
> > team: please correct me on any point here where I'm wrong, or your
> > collective views have changed over the last year. ]]
> >
> > [B]
> > OpenStack deployers who are using Nova + Ironic rely on a few things:
> > - compute_capabilities_filter to match node.properties['capabilities']
> > against flavor extra_specs.
> > - other downstream nova scheduler filters that do other sorts of hardware
> > matching
> > These deployers clearly and rightly do not want us to take away either of
> > these capabilities, so anything we do needs to be backwards compatible
> with
> > any current Nova scheduler plugins -- even downstream ones.
> >
> > [C] To meet the compatibility requirements of [B] without requiring the
> > nova-scheduler team to do the work, we would need to forklift some parts
> of
> > the nova-scheduler code into Ironic. But I think that's terrible, and 

[openstack-dev] [kolla] recent MIA from Kolla PTL

2015-12-14 Thread Steven Dake (stdake)
Hey folks,

Normally I wouldn't talk about health problems on a public mailing list, but 
it's not super private - just teeth problems.  I wanted to explain why I have 
been MIA for 4 weeks.  I had a tooth infection, which turned into a root canal, 
which turned into a failed root canal, which turned into a worse infection.  
People can die from tooth infections, so it can be pretty serious, but I'm in 
the clear now.  The infection has cleared and I have a crown procedure 
Thursday, which should be the end of the tooth drama for now (YAYAY \o/).  
Basically I have been unable to do much because I've been in so much pain and 
not able to eat well.

But I'm back in action now and everything is nearly healed up.

Since our community is really diverse, it is also extremely resilient, and Kolla 
has held up well in my absence.  As a result this may not be necessary, but 
please feel free to do the following via off-list email:

#1: Send me the #1 problem you would like me to solve for you ASAP
#2: Send me the #2 problem you would like me to solve by the end of the year
#3: Send me the #3 problem you would like me to solve by the end of January

I will prioritize these and work down the list of outstanding requests (and 
keep them just on my personal TODO).  Please don't feel like you're imposing - 
this is my responsibility, and without communication I may not be able to 
effectively serve our community's needs.  If it is something I should know 
about or have already committed to, please still follow the above process so I can 
have a record of what people need from me.

Thanks for your understanding and cooperation!

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][stable][ironic] Stable branch docs

2015-12-14 Thread Jim Rollenhagen
On Tue, Dec 15, 2015 at 11:05:16AM +1100, Tony Breeds wrote:
> On Mon, Dec 14, 2015 at 06:42:13AM -0800, Jim Rollenhagen wrote:
> > Hi all,
> > 
> > In the big tent, project teams are expected to maintain their own
> > install guides within their projects' source tree. There's a
> > conversation going on over in the docs list[1] about changing this, but
> > in the meantime...
> > 
> > Ironic (and presumably other projects) publish versioned documentation,
> > which includes the install guide. For example, our kilo install guide is
> > here[2]. However, there's no way to update those, as stable branch
> > policy[3] only allows for important bug fixes to be backported. For
> > example, this patch[4] was blocked for this reason (among others).
> 
> The stable guide[1] was recently changed[2] to allow just this thing,
> essentially at the discretion of the stable teams, both stable-maint-core and
> project specific.
> 
> So I'd hazard a guess that if the patch you point out were to follow the
> documented procedure (a clean backport with matching Change-Id, done with
> cherry-pick -x)[3], it would come down to Ironic to decide whether it was
> a good candidate.
> 
> [1] 
> http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
> [2] https://review.openstack.org/#/c/247415/1
> [3] https://wiki.openstack.org/wiki/StableBranch#Processes [4]
> [4] Yes this should be part of the project-team-guide, /me fixes that.
> 
> Yours Tony.

Perfect! Thanks for pointing that out. :)

As a note, I don't actually see the new note on the page you linked,
though I do see it in the git repo. Strange.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][stable][ironic] Stable branch docs

2015-12-14 Thread Tony Breeds
On Mon, Dec 14, 2015 at 05:33:14PM -0800, Jim Rollenhagen wrote:

> Perfect! Thanks for pointing that out. :)
>
> As a note, I don't actually see the new note on the page you linked,
> though I do see it in the git repo. Strange.

Yeah strange.  I'll look into that.

Yours Tony.




[openstack-dev] Cross-Project Meeting SKIPPED, Tue Dec 15th, 21:00 UTC

2015-12-14 Thread Mike Perez
Hi all! 

We will be skipping the cross-project meeting since there are no agenda items 
to discuss, but someone can add one [1] to call a meeting next time. 

We also have a new meeting channel, #openstack-meeting-cp, where the 
cross-project meeting will now take place at its usual time, Tuesdays at 2100 
UTC.

The Technical Committee has a few cross-project specs that will be discussed 
in its meeting [2].

If you're unable to keep up with the Dev list on cross-project initiatives, 
there is also the Dev Digest [3].


[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda
[2] - https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
[3] - 
http://www.openstack.org/blog/2015/12/openstack-developer-mailing-list-digest-20151205/

--  
Mike Perez


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Is there anyone truly working on this issue https://bugs.launchpad.net/cinder/+bug/1520102?

2015-12-14 Thread Gorka Eguileor
On 11/12, mtanino wrote:
> Hi Thang, Vincent,
> 
> I guess the root cause is that finish_volume_migration() still
> handles a volume as a dictionary instead of volume object and
> the method returns dict volume.
> 
> And then, 'rpcapi.delete_volume()' in migrate_volume_completion()
> tries to delete dict volume but it fails due to the following error.
> 

I believe that is not entirely correct. The issue is that
'finish_volume_migration' returns an ORM volume, which is then passed by
'rpcapi.delete_volume' in place of a Versioned Object (VO) volume (the
recently added optional argument). It is therefore serialized and
deserialized as a plain dictionary (instead of as a VO dictionary), and
when the manager at the other end sees that it has received something in
the place of the VO volume argument, it tries to access the 'id'
attribute.

But since the ORM volume was not a VO, it arrives as a plain dictionary
and therefore has no 'id' attribute.

For reference, Vincent has proposed a patch [1].

Cheers,
Gorka.

[1]: https://review.openstack.org/250216/
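For readers less familiar with the versioned-object work, the failure mode is easy to reproduce outside Cinder. The sketch below is not Cinder code: VolumeObject and get_volume_id are made-up names that illustrate why attribute access on a plain dict raises the exact AttributeError from the traceback, plus a defensive accessor a manager could use while both representations may still arrive over RPC.

```python
# Illustrative sketch (not Cinder code): a volume deserialized as a plain
# dict breaks a manager that expects an object with an .id attribute.

class VolumeObject:
    """Stand-in for a versioned object with attribute access."""
    def __init__(self, vol_id):
        self.id = vol_id

def get_volume_id(volume):
    """Accept either an object-style volume or a plain dict.

    A defensive pattern for code paths where both representations
    may still arrive over RPC.
    """
    if isinstance(volume, dict):
        return volume['id']   # dict form: subscript access only
    return volume.id          # object form: attribute access

dict_volume = {'id': 'vol-1234'}
error = ''
try:
    dict_volume.id            # reproduces the bug's AttributeError
except AttributeError as exc:
    error = str(exc)          # "'dict' object has no attribute 'id'"

object_id = get_volume_id(VolumeObject('vol-5678'))
```

The real fix is of course in the migration path itself, as in Vincent's patch [1]; the accessor above only shows the shape of the problem.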

> >As far as you know, is there someone working on this issue? If not, I am 
> >gonna fix it.
> 
> Not yet. You can go ahead.
> 
> - Result of 'cinder migrate --force-host-copy True '
> 
> 2015-12-11 20:36:33.395 ERROR oslo_messaging.rpc.dispatcher 
> [req-2c271a5e-7e6a-4b38-97d1-22ef245c7892 f95ea885e1a34a81975c50be63444a0b 
> 56d8eb5cc90242178cf05aedab3c1612] Exception during message handling: 'dict' 
> object has no attribute 'id'
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
> recent call last):
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
> 142, in _dispatch_and_reply
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher 
> executor_callback))
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
> 186, in _dispatch
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher 
> executor_callback)
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
> 129, in _do_dispatch
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher result = 
> func(ctxt, **new_args)
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in 
> wrapper
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return 
> f(*args, **kwargs)
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/opt/stack/cinder/cinder/volume/manager.py", line 152, in lvo_inner1
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return 
> lvo_inner2(inst, context, volume_id, **kwargs)
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, 
> in inner
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return 
> f(*args, **kwargs)
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/opt/stack/cinder/cinder/volume/manager.py", line 151, in lvo_inner2
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return 
> f(*_args, **_kwargs)
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
> "/opt/stack/cinder/cinder/volume/manager.py", line 603, in delete_volume
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher volume_id = 
> volume.id
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
> 'dict' object has no attribute 'id'
> 2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher
> 
> Thanks,
> Mitsuhiro Tanino
> 
> On 12/10/2015 11:24 PM, Thang Pham wrote:
> >I have to try it again myself.  What errors are you seeing?  Is it the same? 
> > Feel free to post a patch if you already have one that would solve it.
> >
> >Regards,
> >Thang
> >
> >On Thu, Dec 10, 2015 at 10:51 PM, Sheng Bo Hou  >> wrote:
> >
> >Hi Mitsuhiro, Thang
> >
> >The patch https://review.openstack.org/#/c/228916 is merged, but sadly it 
> > does not cover the issue https://bugs.launchpad.net/cinder/+bug/1520102. 
> > This bug is still valid.
> >As far as you know, is there someone working on this issue? If not, I am 
> > gonna fix it.
> >
> >Best wishes,
> >Vincent Hou (侯胜博)
> >
> >Staff Software Engineer, Open Standards and Open Source Team, Emerging 
> > Technology Institute, IBM China Software Development Lab
> >
> >Tel: 86-10-82450778 Fax: 86-10-82453660
> >Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
> > 
> >Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
> > West Road, Haidian District, Beijing, 

Re: [openstack-dev] [Fuel] Nominate Bulat Gaifulin for fuel-web & fuel-mirror cores

2015-12-14 Thread Vladimir Sharshov
Hi,

+1 from me to Bulat.

On Mon, Dec 14, 2015 at 1:03 PM, Igor Kalnitsky 
wrote:

> Hi Fuelers,
>
> I'd like to nominate Bulat Gaifulin [1] for
>
> * fuel-web-core [2]
> * fuel-mirror-core [3]
>
> Bulat does really good reviews with detailed feedback, and he's a
> regular participant in IRC. He's a co-author of the packetary and
> fuel-mirror projects, and he has made valuable contributions to fuel-web
> (e.g. task-based deployment engine).
>
> Fuel Cores, please reply back with +1/-1.
>
> - Igor
>
> [1] http://stackalytics.com/?module=fuel-web_id=bgaifullin
> [2] http://stackalytics.com/report/contribution/fuel-web/90
> [3] http://stackalytics.com/report/contribution/fuel-mirror/90
>
>


[openstack-dev] [ceilometer] status of distil?

2015-12-14 Thread Steve Martinelli


While I was trying to submit patches for projects that had old
keystoneclient references (distil was one of the projects), I noticed that
there hasn't been much action on this project [0]. It's been a year since the
last commit [1], there are no releases [2], and I can't submit a patch since
the .gitreview file doesn't point to review.openstack.org [3].

Is distil alive?

[0] https://github.com/openstack/distil
[1] https://github.com/openstack/distil/commits/master
[2] https://github.com/openstack/distil/releases
[3] https://github.com/openstack/distil/blob/master/.gitreview
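As an aside, .gitreview files follow the standard git-review layout (a [gerrit] section with host, port and project keys), so checking where a project's reviews would go takes only a few lines of configparser. The sample content below is illustrative, not distil's actual file.

```python
# Hedged sketch: read the gerrit host out of a .gitreview file and check
# whether patches would go through the community Gerrit.
import configparser

SAMPLE_GITREVIEW = """\
[gerrit]
host=github.com
port=29418
project=openstack/distil.git
"""

def gerrit_host(gitreview_text):
    """Return the host configured in a .gitreview file's [gerrit] section."""
    parser = configparser.ConfigParser()
    parser.read_string(gitreview_text)
    return parser.get('gerrit', 'host')

host = gerrit_host(SAMPLE_GITREVIEW)
submittable = (host == 'review.openstack.org')  # False for this sample
```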

thanks,
stevemar


Re: [openstack-dev] OpenStack-Announce List

2015-12-14 Thread Tom Fifield

... and back to this thread after a few weeks :)

The conclusions I saw were:
* Audience for openstack-announce should be "users/non-dev"
* Service project release announcements are good
* Client library release announcements are good
* Security announcements are good
* Internal library (particularly oslo) release announcements don't fit

Open Questions:
* Where do internal library release announcements go? [-dev, a new 
-release list, or batched inside the weekly newsletter]

* Do SDK releases fit on -announce?


Regards,


Tom


On 20/11/15 12:00, Tom Fifield wrote:

Hi all,

I'd like to get your thoughts about the OpenStack-Announce list.

We describe the list as:

"""
Subscribe to this list to receive important announcements from the
OpenStack Release Team and OpenStack Security Team.

This is a low-traffic, read-only list.
"""

Up until July 2015, it was used for the following:
* Community Weekly Newsletter
* Stable branch release notifications
* Major (i.e. Six-monthly) release notifications
* Important security advisories

and had on average 5-10 messages per month.

After July 2015, the following was added:
* Release notifications for clients and libraries (one email per
library, includes contributor-focused projects)

resulting in an average of 70-80 messages per month.


Personally, I no longer consider this volume "low traffic" :)

In addition, I have been recently receiving feedback that users have
been unsubscribing from or deleting without reading the list's posts.

That isn't good news, given this is supposed to be the place where we
can make very important announcements and have them read.

One simple suggestion might be to batch the week's client/library
release notifications into a single email. Another might be to look at
the audience for the list, what kind of notifications they want, and
choose the announcements differently.
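The batching suggestion can be sketched in a few lines. The data shapes here (title/date pairs, one digest string per ISO week) are hypothetical; only the grouping logic is the point.

```python
# Sketch: collapse per-library release notifications into one digest
# message per week. Sample data is illustrative.
from collections import defaultdict
import datetime

releases = [
    ('python-novaclient 3.1.0', datetime.date(2015, 12, 7)),
    ('oslo.config 3.1.0', datetime.date(2015, 12, 9)),
    ('python-cinderclient 1.5.0', datetime.date(2015, 12, 14)),
]

def batch_by_week(items):
    """Group (title, date) pairs by ISO week, one digest string per week."""
    weeks = defaultdict(list)
    for title, day in items:
        year, week, _ = day.isocalendar()
        weeks[(year, week)].append(title)
    return {key: 'Releases this week: ' + ', '.join(titles)
            for key, titles in sorted(weeks.items())}

digests = batch_by_week(releases)
```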

What do you think we should do to ensure the announce list remains useful?



Regards,


Tom






Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-12-14 Thread Alexis Lee
Marian Horban said on Thu, Dec 10, 2015 at 03:33:26PM +0200:
> Are there some progress with reloading configuration?
> Could we restore oslo-config review https://review.openstack.org/#/c/213062/
> ?

Hi Marian,

I'm also working on this, you might find
https://review.openstack.org/#/c/251471/ interesting.
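For context on what these reviews are working toward: a common pattern for dynamic reconfiguration is to re-read configuration on SIGHUP instead of restarting the process. The sketch below is a generic illustration, not the oslo.config or oslo.service API; the in-memory CONFIG_SOURCE dict stands in for a config file on disk.

```python
# Generic SIGHUP-reload pattern (not oslo code). A real service would
# re-parse its config file inside the handler.
import signal

CONFIG_SOURCE = {'workers': 4}      # stands in for a file on disk

class Service:
    def __init__(self):
        self.config = dict(CONFIG_SOURCE)
        try:
            # Re-read configuration on SIGHUP without a restart.
            signal.signal(signal.SIGHUP, self._reload)
        except (AttributeError, ValueError):
            pass                    # no SIGHUP (Windows) / non-main thread

    def _reload(self, signum=None, frame=None):
        self.config = dict(CONFIG_SOURCE)

svc = Service()
CONFIG_SOURCE['workers'] = 8        # operator edits the config...
svc._reload()                       # ...and this is what the handler runs
```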


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79



Re: [openstack-dev] [infra][release][all] Automatic .ics generation for OpenStack's and project's deadlines

2015-12-14 Thread Thierry Carrez
Sean McGinnis wrote:
> On Thu, Dec 10, 2015 at 06:20:44PM +, Flavio Percoco wrote:
>>
>> With the new home for the release schedule, and it being a good place
>> for projects to add their own deadlines as well, I believe it would be
>> good for people that use calendars to have these .ics being generated
>> and linked there as well.
>>
>> Has this been attempted? Any objections? Is there something I'm not
>> considering?
> 
> I really like this idea. If we get something in place, I'll definitely
> add any Cinder related dates the the schedule.

NB: you already can and should!

Please propose changes to:

http://git.openstack.org/cgit/openstack/releases/tree/doc/source/schedules/mitaka.rst
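For reference, an .ics file is plain text, so generating one from schedule entries is straightforward. This is a minimal sketch assuming the deadlines are available as (summary, date) pairs; a strictly valid calendar also needs UID, DTSTAMP and PRODID properties, omitted here for brevity, and the milestone entry is hypothetical.

```python
# Minimal iCalendar generation sketch for schedule deadlines.
import datetime

def deadline_to_vevent(summary, day):
    """Render one all-day VEVENT block (UID/DTSTAMP omitted for brevity)."""
    return '\r\n'.join([
        'BEGIN:VEVENT',
        'DTSTART;VALUE=DATE:' + day.strftime('%Y%m%d'),
        'SUMMARY:' + summary,
        'END:VEVENT',
    ])

def schedule_to_ics(deadlines):
    body = '\r\n'.join(deadline_to_vevent(s, d) for s, d in deadlines)
    return 'BEGIN:VCALENDAR\r\nVERSION:2.0\r\n' + body + '\r\nEND:VCALENDAR'

# Hypothetical milestone entry, for illustration only:
ics = schedule_to_ics([('mitaka-2 milestone', datetime.date(2016, 1, 21))])
```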

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Fuel] Nominate Bulat Gaifulin for fuel-web & fuel-mirror cores

2015-12-14 Thread Aleksey Kasatkin
+1.


Aleksey Kasatkin


On Mon, Dec 14, 2015 at 12:49 PM, Vladimir Sharshov 
wrote:

> Hi,
>
> +1 from me to Bulat.
>
> On Mon, Dec 14, 2015 at 1:03 PM, Igor Kalnitsky 
> wrote:
>
>> Hi Fuelers,
>>
>> I'd like to nominate Bulat Gaifulin [1] for
>>
>> * fuel-web-core [2]
>> * fuel-mirror-core [3]
>>
>> Bulat's doing a really good review with detailed feedback and he's a
>> regular participant in IRC. He's co-author of packetary and
>> fuel-mirror projects, and he made valuable contribution to fuel-web
>> (e.g. task-based deployment engine).
>>
>> Fuel Cores, please reply back with +1/-1.
>>
>> - Igor
>>
>> [1] http://stackalytics.com/?module=fuel-web_id=bgaifullin
>> [2] http://stackalytics.com/report/contribution/fuel-web/90
>> [3] http://stackalytics.com/report/contribution/fuel-mirror/90
>>
>>
>
>
>
>


[openstack-dev] [mistral] Mistral team meeting reminder

2015-12-14 Thread Nikolay Makhotkin
Hi,

This is a reminder that we’ll have a team meeting today at #openstack-meeting
at 16.00 UTC.

Agenda:

   - Review action items
   - Current status (progress, issues, roadblocks, further plans)
   - M-2 status and planning
   - Open discussion


-- 
Best Regards,
Nikolay


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Bartłomiej Piotrowski
On 2015-12-14 13:12, Igor Kalnitsky wrote:
> My opinion here is that I don't like that we're going to build and
> maintain one more custom package (just take a look at this patch [4]
> if you don't believe me), but I'd like to hear more opinion here.
> 
> Thanks,
> Igor
> 
> [1] https://bugs.launchpad.net/fuel/+bug/1523544
> [2] https://review.openstack.org/#/c/249656/
> [3] http://goo.gl/forms/Hk1xolKVP0
> [4] https://review.fuel-infra.org/#/c/14623/
> 
> 

I also think we should stay with what CentOS provides. Increasing
maintenance burden for something that can be implemented without bells
and whistles sounds like a no-go.

Bartłomiej



Re: [openstack-dev] [Fuel] Separate master node provisioning and deployment

2015-12-14 Thread Igor Kalnitsky
Vladimir,

Thanks for raising this question. I totally support idea of separating
provisioning and deployment steps. I believe it'll simplify a lot of
things.

However I have some comments regarding this topic, see them inline. :)

> For a package it is absolutely normal to throw a user dialog.

It kills the idea of fuel-menu, since each package would need to
implement a configuration menu of its own. Moreover, having such a
configuration menu both in fuel-menu and in each package is too
expensive; it would require more effort than I'd like.

> Fuel package could install default astute.yaml (I'd like to rename it
> into /etc/fuel.yaml or /etc/fuel/config.yaml) and use values from the
> file by default not running fuelmenu

I don't like the idea of having one common configuration file for Fuel
components. I think it'd be better if each component (subproject) had
its own configuration file and knew nothing about external ones.

Meanwhile, we can provide fuel-menu, which would become a configuration
gate for the different subprojects. Perhaps we could consider a
pluggable approach, so that each component exports a fuel-menu plugin
with its own settings.

> What is wrong with 'deployment script' approach?

The problem is that with such an approach it would be impossible to
set up Fuel with just something like

$ yum install fuel

In my opinion we should go into the following approach:

* yum install fuel
* fuel-menu

The first command should install a basic Fuel setup, and everything
should work when it's done.

While the second one prompts a configuration menu where one might
change default settings (reconfigure default installation).

Thanks,
Igor

On Mon, Dec 14, 2015 at 9:30 AM, Vladimir Kozhukalov
 wrote:
> Oleg,
>
> Thanks a lot for your opinion. Here are some more thoughts on this topic.
>
> 1) For a package it is absolutely normal to throw a user dialog. But
> probably there is kind of standard for the dialog that does not allow to use
> fuelmenu. AFAIK, for DEB packages it is debconf and there is a tutorial [0]
> how to get user input during post install. I don't know if there is such a
> standard for RPM packages. In some MLs it is written that any command line
> program could be run in %post section including those like fuel-menu.
>
> 2) Fuel package could install default astute.yaml (I'd like to rename it
> into /etc/fuel.yaml or /etc/fuel/config.yaml) and use values from the file
> by default not running fuelmenu. A user then is supposed to run fuelmenu if
> he/she needs to re-configure fuel installation. However, it is gonna be
> quite intrusive. What if a user installs fuel and uses it for a while with
> default configuration. What if some clusters are already in use and then the
> user decides to re-configure the master node. Will it be ok?
>
> 3) What is wrong with 'deployment script' approach? Why can not fuel just
> install kind of deployment script? Fuel is not a service, it consists of
> many components. Moreover some of these components could be optional (not
> currently but who knows?), some of this components could be run on an
> external node (after all Fuel components use REST, AMQP, XMLRPC to interact
> with each other).
> Imagine you want to install OpenStack. It also consists of many components.
> Some components like database or AMQP service could be deployed using HA
> architecture. What if one needs Fuel to be run with external HA database,
> amqp? From this perspective I'd say Fuel package should not exist at all.
> Let's maybe think of Fuel package as a convenient way to deploy Fuel on a
> single node, i.e single node deployment script.
>
> 4) If Fuel is just a deployment script, then I'd say we should not run any
> post install dialog. Deployment script is to run this dialog (fuelmenu) and
> then run puppet. IMO it sounds reasonable.
>
>
> [0] http://www.fifi.org/doc/debconf-doc/tutorial.html
>
> Vladimir Kozhukalov
>
> On Fri, Dec 11, 2015 at 11:14 PM, Oleg Gelbukh 
> wrote:
>>
>> For the package-based deployment, we need to get rid of 'deployment
>> script' whatsoever. All configuration stuff should be done in package specs,
>> or by the user later on (maybe via some fuelmenu-like lightweight UI, or via
>> WebUI).
>>
>> Thus, fuel package must install everything that is required for running
>> base Fuel as it's dependencies (or dependencies of it's dependencies, as it
>> could be more complicated with cross-deps between our components).
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Fri, Dec 11, 2015 at 10:45 PM, Vladimir Kozhukalov
>>  wrote:
>>>
>>> Dear colleagues,
>>>
>>> At the moment part of the Fuel master deployment logic is located in ISO
>>> kickstart file, which is bad. We'd better carefully split provisioning and
>>> deployment stages so as to install base operating system during provisioning
>>> stage and then everything else on the deployment stage. That would make it
>>> possible to deploy 

Re: [openstack-dev] [Fuel] Configuration management for Fuel 7.0

2015-12-14 Thread Roman Sokolkov
Dmitry,

Q1. Yes.

> where do you plan to actually perform settings manipulation?

It was one of the critical blockers. Most of the settings are baked inside
fuel-library. Your feature [1] partially fixes this BTW. Which is good.
Partially, because only limited number of tasks has defined overrides.

> scheduled basis run nailgun-cm-agent

Currently I see a better way: nailgun-cm-agent (or whatever) should just
check the system status (i.e. puppet apply --noop) and report back. The
user will then decide whether to apply the changes or not.
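The check-and-report idea above could look like this. The summary format only imitates a Puppet run report (the out_of_sync field is an assumption), and a real agent would obtain it by shelling out to puppet apply --noop rather than from a hardcoded string.

```python
# Sketch of a drift-checking agent: parse a noop-run summary and flag the
# node when resources are out of sync, leaving the decision to the user.

def parse_out_of_sync(summary_text):
    """Pull the out-of-sync resource count from a noop run summary."""
    for line in summary_text.splitlines():
        line = line.strip()
        if line.startswith('out_of_sync:'):
            return int(line.split(':', 1)[1])
    return 0

def node_status(summary_text):
    # YELLOW means "changes pending, user decides whether to apply".
    return 'YELLOW' if parse_out_of_sync(summary_text) else 'GREEN'

SAMPLE_REPORT = """\
resources:
  total: 120
  out_of_sync: 3
"""
status = node_status(SAMPLE_REPORT)
```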

Q2.
Yes, I did. One more use case covered. Please see the first table in [2].

Q4. Agree. Here is the bug [3]

Q3,Q5, Q6
Good.

[1] https://blueprints.launchpad.net/fuel/+spec/openstack-config-change
[2]
https://docs.google.com/document/d/1bVVZJR73pBWB_WbfOzC-fh84pVsi20v4n3rvn38F4OI/edit
(limited access)
[3] https://bugs.launchpad.net/fuel/+bug/1525872



On Mon, Dec 14, 2015 at 4:39 AM, Dmitriy Novakovskiy <
dnovakovs...@mirantis.com> wrote:

> Roman,
>
> Thanks a lot for the feedback. We'll be planning improvements for [1] in
> upcoming 9.0 cycle, so your input and this discussion are very helpful and
> much appreciated.
>
> In overall, the concept for nailgun-cm-agent looks interesting, but I
> think you'll face some problems with it:
> - idempotency of puppet modules
> - lack of exposed parameters (fuel-lib hacking)
> - speed of re-runs of configuration mgmt (that we're already working on in
> 8.0)
>
> Now, my comments and questions.
>
> *1) nailgun-cm-agent concept*
> Q1. Do I understand correctly that the planned UX is:
> - Allow user to change configuration as dictated by Fuel (btw, where do
> you plan to actually perform settings manipulation? Directly in Puppet
> modules/manifests?)
> - On scheduled basis run nailgun-cm-agent and let it bring overall system
> state to be consistent with latest changes
> ?
>
> *2) "Advanced settings" [1] feature feedback*
> Q2. Please share the details about 13 real world tasks that you used for
> testing. Have you had a chance to test this same list against [1], as you
> did with fuel-cm-agent approach? I need to know what from real world is
> doable and what not with current state of [1]
> Q3. "It allows just apply, not track changes" - that's true, 8.0 has
> first MVP of this feature in place, and we don't yet have much tracking
> capability (other than looking at logs in DB, when what config change yaml
> was uploaded). We will be improving it in 9.0 cycle
> Q4. "Moreover works weird, if multiple changes uploaded, applying not the
> latest, but initial config change." - can you please share the detailed
> example? I'm not sure I understood it, but so far sounds like a bug that
> needs to be fixed.
> Q5. "Just limited number[1] of resources/tasks has support." - this is
> the limitation of what configs are shipped out of the box. When 8.0 is
> released, we'll have a documented way to add support for any OpenStack
> config file that Fuel tasks can reach
> Q6. "Can we  start moving all (non orchestrating) data into CMDB? yaml
> under git or any existing solution." We're now discussing major
> refactoring effort to be done in Fuel to integrate with Solar and solve
> some of the long standing
>
> [1] https://blueprints.launchpad.net/fuel/+spec/openstack-config-change
>
> On Fri, Dec 11, 2015 at 6:21 PM, Roman Sokolkov 
> wrote:
>
>> Oleg,
>>
>> thanks. I've tried it [1], looks like it works.
>>
>> - GOOD. "override_resource" resource. Like "back door" into puppet
>> modules.
>> - BAD. It only allows applying changes, not tracking them. Moreover it
>> works strangely if multiple changes are uploaded: it applies not the
>> latest but the initial config change.
>> - BAD. Only a limited number [1] of resources/tasks are supported.
>>
>> BTW, my feeling that we should NOT develop this approach in the same way.
>>
>> I'm not an expert, but as a long-term direction:
>> - Can we start moving all (non-orchestration) data into a CMDB? YAML under
>> git, or any existing solution.
>> - Can we track node state? For example, start all puppet tasks by cron with
>> the --noop option and check the puppet state. Then, if "out of sync", the
>> node starts blinking YELLOW and the user can push a button, if needed.
>>
>> Thanks
>>
>> [1] https://blueprints.launchpad.net/fuel/+spec/openstack-config-change
>> [2] http://paste.openstack.org/show/481677/
>>
>> On Fri, Dec 11, 2015 at 4:34 PM, Oleg Gelbukh 
>> wrote:
>>
>>> Roman,
>>>
>>> Changing arbitrary parameters supported by respective Puppet manifests
>>> for OpenStack services is implemented in this blueprint [1]. It is being
>>> landed in release 8.0.
>>>
>>> [1] https://blueprints.launchpad.net/fuel/+spec/openstack-config-change
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>> On Thu, Dec 3, 2015 at 5:28 PM, Roman Sokolkov 
>>> wrote:
>>>
 Folks,

 little bit more research done in regards #2 usability.

 I've selected 13 real-world tasks from customer (i.e. update flag X in
 

Re: [openstack-dev] [Fuel] Nominate Bulat Gaifulin for fuel-web & fuel-mirror cores

2015-12-14 Thread Roman Vyalov
+1

On Mon, Dec 14, 2015 at 3:05 PM, Aleksey Kasatkin 
wrote:

> +1.
>
>
> Aleksey Kasatkin
>
>
> On Mon, Dec 14, 2015 at 12:49 PM, Vladimir Sharshov <
> vshars...@mirantis.com> wrote:
>
>> Hi,
>>
>> +1 from me to Bulat.
>>
>> On Mon, Dec 14, 2015 at 1:03 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Hi Fuelers,
>>>
>>> I'd like to nominate Bulat Gaifulin [1] for
>>>
>>> * fuel-web-core [2]
>>> * fuel-mirror-core [3]
>>>
>>> Bulat's doing a really good review with detailed feedback and he's a
>>> regular participant in IRC. He's co-author of packetary and
>>> fuel-mirror projects, and he made valuable contribution to fuel-web
>>> (e.g. task-based deployment engine).
>>>
>>> Fuel Cores, please reply back with +1/-1.
>>>
>>> - Igor
>>>
>>> [1] http://stackalytics.com/?module=fuel-web_id=bgaifullin
>>> [2] http://stackalytics.com/report/contribution/fuel-web/90
>>> [3] http://stackalytics.com/report/contribution/fuel-mirror/90
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>


Re: [openstack-dev] [Fuel] Experimental Task Based Deployment Landed into Master

2015-12-14 Thread Evgeniy L
+1 It's really good job folks.

On Sat, Dec 12, 2015 at 2:25 AM, Vladimir Kuklin 
wrote:

> Fuelers
>
> I am thrilled to announce that task based deployment engine [0] has been
> just merged into Fuel master. We checked it against existing BVT test cases
> for regressions as well as against functional testing for several cases of
> deployment. All the OSTF and network verification tests have successfully
> passed.
>
> We will obviously need to polish it and fix bugs which will arise, but
> this is a gigantic step forward for our orchestration engine which should
> allow us to drastically increase our development velocity as well as end
> user experience.
>
> Thanks to all who participated in development testing and review:
>
> Dmitry Ilyin
> Vladimir Sharshov
> Bulat Gaifullin
> Alexey Shtokolov
> Igor Kalnitsky
> Evgeniy Li
> Sergii Golovatiuk
> Dmitry Shulyak
>
> and many-many others
>
> I am pretty confident that this will allow us to develop and test faster
> as well as introduce support of some of Life-Cycle Management scenarios in
> 8.0 release.
>
> Once again, thank you all, folks, for your dedicated work and efforts on
> making Fuel better.
>
> [0]
> https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
>
>


Re: [openstack-dev] [Fuel] Multiple repos UX

2015-12-14 Thread Fedor Zhadaev
Hi, Vladimir,

Please be informed that we'll also have to make appropriate changes on
the fuel-agent side. But yes, it's possible to do it before SCF.

2015-12-11 20:05 GMT+03:00 Vladimir Kozhukalov :

> If there are no any objections, let's do fix fuel-menu ASAP. As Fedor said
> this approach was suggested first, but then it was rejected during review
> process. It should not be so hard to get it back. Fedor, could you please
> confirm that it is possible to do this before SCF? Here is the bug
> https://bugs.launchpad.net/fuel/+bug/1525323
>
> Vladimir Kozhukalov
>
> On Fri, Dec 11, 2015 at 5:48 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> BTW, here you can see an example http://demo.fuel-infra.org:8000 Just go
>> to any cluster and see Repositories section on the settings tab.
>>
>> Vladimir Kozhukalov
>>
>> On Fri, Dec 11, 2015 at 5:46 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> I'd like this module
>>> https://github.com/openstack/fuel-menu/blob/master/fuelmenu/modules/bootstrapimg.py
>>> to be fixed so a user can define several repos independently. This
>>> particular ML thread is not about internal repo data format, it is not
>>> about particular format that we expose to end user. This thread is rather
>>> about flexibility of repo configuration. Whether we expose Fuel internal
>>> format or native format, UI must be flexible enough to allow a user to
>>> define repos independently. That is it.
>>>
>>> There is no reason to think that the repository structure will always follow
>>> the pattern suite, suite-updates, suite-security; no reason to think that the
>>> sections will always be main, universe, multiverse, restricted; and no reason
>>> to think that all suites will be located on the same host.
>>>
>>> I am not a big expert in UX. I like what we currently have in the Web UI (it
>>> is the native format) and I don't suggest changing it. I suggest using
>>> something similar to what we have in the Web UI in our fuel-menu.
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Dec 11, 2015 at 5:10 PM, Alexander Kostrikov <
>>> akostri...@mirantis.com> wrote:
>>>
 Hello, Vladimir.
 It seems nothing is better for the end user in UI/fuel-mirror/image-bootstrap
 than 'You Get What You See', because a system administrator should not have to
 learn a new standard:
 http://url trusty main
 http://anotherurl trusty universe multiverse restricted
 http://yet-another-url trusty-my-favorite-updates my-favorite-section
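The three example lines above parse naturally into independent records, one per repository, with no shared base url or suite. A minimal sketch follows; note that real apt sources.list lines also begin with a 'deb' keyword, which the examples omit.

```python
# Parse "url suite section [section ...]" lines into independent records.

def parse_repo_line(line):
    parts = line.split()
    return {'uri': parts[0], 'suite': parts[1], 'sections': parts[2:]}

repos = [parse_repo_line(line) for line in [
    'http://url trusty main',
    'http://anotherurl trusty universe multiverse restricted',
    'http://yet-another-url trusty-my-favorite-updates my-favorite-section',
]]
# Each record stands alone: different hosts, suites and sections all work.
```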

 Can you point to the difference between the current rpm/deb library scheme
 in Python and the 'new format'? If you are talking about their
 representation in Fuel code, that would help me understand the pros of
 such a format.

 For example, generalizing the algorithms in this way:
 >I'd like to focus on the fact that these repositories should be
 defined independently (no base url, no base suite, etc.) That makes little
 sense to speculate about consistency of a particular repository. We only
 should talk about consistency of the whole list of repositories together.


 On Fri, Dec 11, 2015 at 2:44 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Regarding the UI: of course, we could provide the native format to a user
> on the UI. I don't think it would be much easier to edit, but it is
> flexible enough to define something like this:
>
> http://url trusty main
> http://anotherurl trusty universe multiverse restricted
> http://yet-another-url trusty-my-favorite-updates my-favorite-section
>
> Meanwhile we (for some reason) limited our UI to defining only a base url
> and a base suite. That should be fixed.
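To make the flexibility point concrete, a deb-style line maps naturally onto an independent repo entry. A minimal illustrative sketch (the field names are hypothetical, not Fuel's actual internal format):

```python
# Illustrative only: parse "<url> <suite> <section> [<section> ...]"
# lines into independent repo entries, one per line, with no shared
# base url or base suite.
def parse_repo_line(line):
    url, suite, *sections = line.split()
    return {"url": url, "suite": suite, "sections": sections}

repo = parse_repo_line("http://anotherurl trusty universe multiverse restricted")
print(repo["suite"])     # trusty
print(repo["sections"])  # ['universe', 'multiverse', 'restricted']
```

Each line is parsed on its own, which matches the point above: consistency is a property of the whole list of repositories, not of any single entry.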
>
>
> Vladimir Kozhukalov
>
> On Fri, Dec 11, 2015 at 2:33 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com> wrote:
>
>> > Do we really need a custom format? Why can not we use native format
>> > for yum.conf and apt.sources files
>>
>> Because we don't want to parse this format each time we want to verify
>> or handle a particular component of it. Moreover, there is no priority,
>> for example, in the Debian repo format; priority is set by apt
>> preferences (not by the repo itself).
>>
>> We're talking about Fuel internal representation, and it would be nice
>> to have one internal format across various Fuel projects.
>>
>>
>> > But UI, in my opinion, should follow practices that already exist,
>> not define something new.
>>
>> AFAIU, the idea is to unify the internal representation and keep the UI
>> as close to distribution standards as possible.
>>
>> On Fri, Dec 11, 2015 at 12:53 PM, Aleksandra Fedorova
>>  wrote:
>> > Hi,
>> >
>> > I agree with the idea of unification for repo configurations, but it
>> > looks like we are developing yet another standard.
>> >
>> > Do we really need a custom format? Why can not we use 

Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-14 Thread Jordan Pittier
Tox 2.3.1 was released on pypi a few minutes ago, and it fixes this issue.
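For reference, upper-constraints entries use the `name===version` pin form, so the proposed fix amounts to adding a `tox===...` line. A small sketch of reading such a pin (the helper function is hypothetical):

```python
# Sketch: look up a package's pin in upper-constraints.txt-style text
# (entries use the "name===version" form).
def pinned_version(constraints_text, package):
    for line in constraints_text.splitlines():
        name, sep, version = line.partition("===")
        if sep and name.strip().lower() == package.lower():
            return version.strip()
    return None

constraints = "pbr===1.8.1\ntox===2.3.1\n"
print(pinned_version(constraints, "tox"))  # 2.3.1
```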

Jordan

On Mon, Dec 14, 2015 at 12:55 AM, Robert Collins 
wrote:

> On 13 December 2015 at 03:20, Yuriy Taraday  wrote:
> > Tempest jobs in all our projects seem to become broken after tox 2.3.0
> > release yesterday. It's a regression in tox itself:
> > https://bitbucket.org/hpk42/tox/issues/294
> >
> > I suggest we add tox to upper-constraints to avoid this breakage now
> > and in the future: https://review.openstack.org/256947
> >
> > Note that we install tox in the gate with no regard to
> > global-requirements, so only upper-constraints can save us from tox
> > releases.
>
> Ah, Friday releases. Gotta love them... on my Saturday :(.
>
> So - tl;dr AIUI:
>
>  - the principle behind gating changes to tooling applies to tox as well
>  - existing implementation of jobs in the gate precludes applying
> upper-constraints systematically as a way to gate these changes
>  - the breakage we experienced was due to already known-bad system images
>
> Assuming that that's correct, my suggestion would be that we either
> make tox pip-installed during jobs (across the board), so that we can
> in fact control it with upper-constraints, or we work on functional
> tests of new images before they go live.
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-Announce List

2015-12-14 Thread Tom Fifield

On 14/12/15 19:33, Thierry Carrez wrote:

Tom Fifield wrote:

... and back to this thread after a few weeks :)

The conclusions I saw were:
* Audience for openstack-announce should be "users/non-dev"
* Service project releases announcements are good
* Client library release announcements good
* Security announcements are good
* Internal library (particularly oslo) release announcements don't fit

Open Questions:
* Where do Internal library release announcements go? [-dev or new
-release list or batched inside the weekly newsletter]


I'd say -dev + batched inside the weekly -dev digest from thingee (and
crosspost that one to -announce). Even if the audience is "users", I
think getting a weekly digest from the -dev ML can't hurt?


Yup, feedback I have says it's enjoyed cross-discipline :)


* Do SDK releases fit on -announce?


I guess they could -- how many of those are we expecting?



So far it looks like close to zero emails :) PythonSDK is the only one in
the OpenStack namespace that I can see at a quick search.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Igor Kalnitsky
Hi Fuelers,

As you might know, recently we moved to CentOS 7 and as a result we
got a small regression with PostgreSQL:

* Fuel 7 runs on CentOS 6.6 and uses a manually built PostgreSQL 9.3.
* Fuel 8 runs on CentOS 7 and uses PostgreSQL 9.2 from CentOS upstream repos.

There are different opinions whether this regression is acceptable or
not (see details in bug [1]).

The things I want to notice are:

* Currently we aren't tied to PostgreSQL 9.3.
* There's a patch [2] that ties Fuel to PostgreSQL 9.3+ by using a
set of JSON operations.

So the question is: Should we drop compatibility with upstream CentOS
7 in favor of using new features of PostgreSQL?

I've prepared a small poll, so please vote [3].

My opinion here is that I don't like that we're going to build and
maintain one more custom package (just take a look at this patch [4]
if you don't believe me), but I'd like to hear more opinions here.
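One middle-ground pattern (a sketch of the general idea, not what the patch [2] does) is to guard 9.3-only JSON operations behind a server version check, so stock CentOS 7 with PostgreSQL 9.2 keeps working:

```python
# Sketch: PostgreSQL reports its version as an integer via
# "SHOW server_version_num", e.g. 90204 for 9.2.4. The JSON operations
# under discussion need 9.3 or newer (>= 90300).
def supports_json_ops(server_version_num):
    return server_version_num >= 90300

print(supports_json_ops(90204))  # False (9.2.4, stock CentOS 7)
print(supports_json_ops(90305))  # True  (9.3.5)
```

The cost, of course, is maintaining two code paths, which is part of the trade-off being polled.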

Thanks,
Igor

[1] https://bugs.launchpad.net/fuel/+bug/1523544
[2] https://review.openstack.org/#/c/249656/
[3] http://goo.gl/forms/Hk1xolKVP0
[4] https://review.fuel-infra.org/#/c/14623/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][serial-console-proxy]

2015-12-14 Thread Prathyusha Guduri
Hi Markus,

Thanks a lot for the detailed document. I had a problem installing
websocket, but using the git repo you shared I could install it
successfully and got a console.

Regards,
Prathyusha

On Mon, Dec 14, 2015 at 11:34 PM, Markus Zoeller 
wrote:

> Prathyusha Guduri  wrote on 12/11/2015
> 06:37:02 AM:
>
> > From: Prathyusha Guduri 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 12/11/2015 06:39 AM
> > Subject: [openstack-dev] [nova][serial-console-proxy]
> >
> > Hi All,
>
> > I have set up OpenStack on an Arm64 machine and all the OpenStack-related
> > services are running fine. I am also able to launch an instance
> > successfully. Now I need to get a console for my instance. The noVNC
> > console is not supported on the machine I am using, so I have to use a
> > serial-proxy console or spice-proxy console.
>
> > After rejoining the stack, I have stopped the noVNC service and
> > started the serial proxy service in  /usr/local/bin  as
> >
> > ubuntu@ubuntu:~/devstack$ /usr/local/bin/nova-serialproxy --config-
> > file /etc/nova/nova.conf
> > 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]
> > WebSocket server settings:
> > 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]   -
> > Listen on 0.0.0.0:6083
> > 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
> > Flash security policy server
> > 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
> > No SSL/TLS support (no cert file)
> > 2015-12-10 19:07:13.790 21979 INFO nova.console.websocketproxy [-]   -
> > proxying from 0.0.0.0:6083 to None:None
>
> > But
> > ubuntu@ubuntu:~/devstack$ nova get-serial-console vm20
> > ERROR (ClientException): The server has either erred or is incapable
> > of performing the requested operation. (HTTP 500) (Request-ID: req-
> > cfe7d69d-3653-4d62-ad0b-50c68f1ebd5e)
>
> >
> > The problem seems to be that nova-compute is not able to
> > communicate with nova-serial-proxy. The IP and port for the serial proxy
> > that I have given in nova.conf are correct.
>
> > I really don't understand where I am going wrong. Any help would be
> > greatly appreciated.
> >
>
> > My nova.conf -
> >
> >
> > [DEFAULT]
> > vif_plugging_timeout = 300
> > vif_plugging_is_fatal = True
> > linuxnet_interface_driver =
> > security_group_api = neutron
> > network_api_class = nova.network.neutronv2.api.API
> > firewall_driver = nova.virt.firewall.NoopFirewallDriver
> > compute_driver = libvirt.LibvirtDriver
> > default_ephemeral_format = ext4
> > metadata_workers = 24
> > ec2_workers = 24
> > osapi_compute_workers = 24
> > rpc_backend = rabbit
> > keystone_ec2_url = http://10.167.103.101:5000/v2.0/ec2tokens
> > ec2_dmz_host = 10.167.103.101
> > vncserver_proxyclient_address = 127.0.0.1
> > vncserver_listen = 127.0.0.1
> > vnc_enabled = false
> > xvpvncproxy_base_url = http://10.167.103.101:6081/console
> > novncproxy_base_url = http://10.167.103.101:6080/vnc_auto.html
> > logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s
> > %(name)s [%(request_id)s %(user_name)s %(project_name)s]
> %(instance)s%(message)s
> > force_config_drive = True
> > instances_path = /opt/stack/data/nova/instances
> > state_path = /opt/stack/data/nova
> > enabled_apis = ec2,osapi_compute,metadata
> > instance_name_template = instance-%08x
> > my_ip = 10.167.103.101
> > s3_port = 
> > s3_host = 10.167.103.101
> > default_floating_pool = public
> > force_dhcp_release = True
> > dhcpbridge_flagfile = /etc/nova/nova.conf
> > scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
> > rootwrap_config = /etc/nova/rootwrap.conf
> > api_paste_config = /etc/nova/api-paste.ini
> > allow_migrate_to_same_host = True
> > allow_resize_to_same_host = True
> > debug = True
> > verbose = True
> >
> > [database]
> > connection = mysql://root:open@127.0.0.1/nova?charset=utf8
> >
> > [osapi_v3]
> > enabled = True
> >
> > [keystone_authtoken]
> > signing_dir = /var/cache/nova
> > cafile = /opt/stack/data/ca-bundle.pem
> > auth_uri = http://10.167.103.101:5000
> > project_domain_id = default
> > project_name = service
> > user_domain_id = default
> > password = open
> > username = nova
> > auth_url = http://10.167.103.101:35357
> > auth_plugin = password
> >
> > [oslo_concurrency]
> > lock_path = /opt/stack/data/nova
> >
> > [spice]
> > #agent_enabled = True
> > enabled = false
> > html5proxy_base_url = http://10.167.103.101:6082/spice_auto.html
> > #server_listen = 127.0.0.1
> > #server_proxyclient_address = 127.0.0.1
> >
> > [oslo_messaging_rabbit]
> > rabbit_userid = stackrabbit
> > rabbit_password = open
> > rabbit_hosts = 10.167.103.101
> >
> > [glance]
> > api_servers = http://10.167.103.101:9292
> >
> > [cinder]
> > os_region_name = RegionOne
> >
> > [libvirt]
> > vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
> > 

Re: [openstack-dev] [Group Based Policy] [Policy] [GBP]

2015-12-14 Thread Sumit Naiksatam
Hi, Thanks for your question, but we haven’t explored this option. We
will be happy to discuss this and provide any help/pointers you may
need. Please feel free to join our weekly IRC meeting and/or drop into
the #openstack-gbp channel to discuss further.

~Sumit.

On Sun, Dec 13, 2015 at 9:10 AM, Ernesto Valentino
 wrote:
> Hello,
> how can i write an application with gbp using the libcloud? Thanks in
> advance. Best regards,
>
> ernesto
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Separate master node provisioning and deployment

2015-12-14 Thread Igor Kalnitsky
> One of potential disadvantages is that it is harder to track package 
> dependencies, but I think
> a deployment script should be a root of the package dependency tree.

That's something I'd try to avoid. Let's stay close to distro upstream
practice. I have never seen something like a "fuel-deploy" that runs Puppet,
which in turn runs package installation. You usually install the packages
you want to use. If you want something beyond the default distribution, you
install some additional optional packages, and that's it.

On Mon, Dec 14, 2015 at 3:21 PM, Vladimir Kozhukalov
 wrote:
>> Meantime we can provide fuel-menu which will become a configuration
>> gate for different subprojects. Perhaps we could consider to use
>> pluggable approach, so each component will export plugin for fuel-menu
>> with own settings.
>
> fuel-menu could be a configuration gate for fuel deployment script
>
>> The wrong thing is that with such approach it would be impossible to
>> setup Fuel with just something like
>
>>$ yum install fuel
>
> I see nothing wrong here. 'yum install fuel' would be an appropriate
> approach if fuel were a single service, not a bunch of services, some of
> which are not even limited to being installed on the master node.
>
> when you run
>
> # yum install fuel
> # fuel-menu
>
> it is the same as running
>
> # yum install fuel
> # fuel_deploy_script (which runs fuel-menu and then runs puppet which
> installs everything else)
>
> I like the idea of the fuel package (let's rename it fuel-deploy)
> providing just a deployment script. It does not require a lot of changes,
> and it corresponds to what we really do. Besides, it is more flexible
> because deployment can be modular (several stages).
>
> One of potential disadvantages is that it is harder to track package
> dependencies, but I think
> a deployment script should be a root of the package dependency tree.
>
>
>
> Vladimir Kozhukalov
>
> On Mon, Dec 14, 2015 at 12:53 PM, Igor Kalnitsky 
> wrote:
>>
>> Vladimir,
>>
>> Thanks for raising this question. I totally support idea of separating
>> provisioning and deployment steps. I believe it'll simplify a lot of
>> things.
>>
>> However I have some comments regarding this topic, see them inline. :)
>>
>> > For a package it is absolutely normal to throw a user dialog.
>>
>> It kills the idea of fuel-menu, since each package would need to
>> implement a configuration menu of its own. Moreover, having such a
>> configuration menu both in fuel-menu and in each package is too
>> expensive; it would require more effort than I'd like.
>>
>> > Fuel package could install default astute.yaml (I'd like to rename it
>> > into /etc/fuel.yaml or /etc/fuel/config.yaml) and use values from the
>> > file by default not running fuelmenu
>>
>> I don't like the idea of having one common configuration file for Fuel
>> components. I think it'd be better if each component (subproject) had
>> its own configuration file and knew nothing about external ones.
>>
>> Meanwhile we can provide fuel-menu, which will become a configuration
>> gate for different subprojects. Perhaps we could consider using a
>> pluggable approach, so each component would export a plugin for
>> fuel-menu with its own settings.
>>
>> > What is wrong with 'deployment script' approach?
>>
>> The wrong thing is that with such an approach it would be impossible to
>> set up Fuel with just something like
>>
>> $ yum install fuel
>>
>> In my opinion we should go into the following approach:
>>
>> * yum install fuel
>> * fuel-menu
>>
>> The first command should install a basic Fuel setup, and everything
>> should work when it's done.
>>
>> While the second one prompts a configuration menu where one might
>> change default settings (reconfigure default installation).
>>
>> Thanks,
>> Igor
>>
>> On Mon, Dec 14, 2015 at 9:30 AM, Vladimir Kozhukalov
>>  wrote:
>> > Oleg,
>> >
>> > Thanks a lot for your opinion. Here are some more thoughts on this
>> > topic.
>> >
>> > 1) For a package it is absolutely normal to throw a user dialog. But
>> > probably there is a kind of standard for the dialog that does not allow
>> > using fuelmenu. AFAIK, for DEB packages it is debconf, and there is a
>> > tutorial [0] on how to get user input during post-install. I don't know
>> > if there is such a standard for RPM packages. In some MLs it is written
>> > that any command line program could be run in the %post section,
>> > including ones like fuel-menu.
>> >
>> > 2) The Fuel package could install a default astute.yaml (I'd like to
>> > rename it /etc/fuel.yaml or /etc/fuel/config.yaml) and use values from
>> > the file by default without running fuelmenu. A user is then supposed
>> > to run fuelmenu if he/she needs to reconfigure the Fuel installation.
>> > However, it is going to be quite intrusive. What if a user installs
>> > Fuel and uses it for a while with the default configuration? What if
>> > some clusters are already in use and then
>> 

Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog name for Glance Artifact Repository API

2015-12-14 Thread Jay Pipes

On 12/11/2015 08:25 PM, Alexander Tivelkov wrote:

Hi folks!

As it was decided during the Mitaka design summit, we are separating the
experimental Artifact Repository API from the main Glance API. This API
will have a versioning sequence independent from the main Glance API and
will be run as a standalone optional service, listening on the port
different from the standard glance-api port (currently the proposed
default is 9393). Meanwhile, it will remain an integral part of the
larger Glance project, sharing the database, implementation roadmap,
development and review teams etc.

Since this API will be consumed by both end users and other OpenStack
services, its endpoint should be discoverable via the regular service
catalog API. This raises the question: what should be the service name
and service type for the appropriate entry in the service catalog?
We came up with the idea of calling the service "glare" (this is our
internal codename for the artifacts initiative, an acronym for
"GLance Artifact REpository") and setting its type to "artifacts". Other
alternatives for the name may be "arti" or "glance_artifacts", and for
the type, "assets" or "objects" (the latter may be confusing since
swift's type is object-store, so I personally don't like it).


I don't care about the name. In fact, I don't think the "name" should 
even exist in the service catalog at all.


I think the type should be "artifact".

Best,
-jay
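For concreteness, the catalog entry under discussion might look like the following. The name, type, and port are only the proposals from this thread, and the URL host is a placeholder, so treat every value as illustrative:

```python
# Illustrative service-catalog entry built from the thread's proposals.
entry = {
    "name": "glare",        # proposed name; Jay argues "name" may not matter
    "type": "artifact",     # Jay's suggestion; the original proposal was "artifacts"
    "endpoints": [
        {"interface": "public",
         "url": "http://controller:9393/"}  # 9393 is the proposed default port
    ],
}
print(entry["type"])  # artifact
```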

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog name for Glance Artifact Repository API

2015-12-14 Thread Brant Knudson
On Mon, Dec 14, 2015 at 9:28 AM, Ian Cordasco 
wrote:

>
>
> On 12/14/15, 02:18, "Kuvaja, Erno"  wrote:
>
> >> -Original Message-
> >> From: McLellan, Steven
> >> Sent: Friday, December 11, 2015 6:37 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [glance][keystone][artifacts] Service
> >>Catalog
> >> name for Glance Artifact Repository API
> >>
> >> Hi Alex,
> >>
> >> Searchlight uses port 9393 (it also made sense to us when we spun out of
> >> Glance!), so we would prefer it if there's another one that makes sense.
> >> Regarding the three hardest things in computer science, searchlight's
> >>already
> >> dealing with cache invalidation so I'll stay out of the naming
> >>discussion.
> >>
> >> Thanks!
> >>
> >> Steve
> >
> >Thanks for the heads up Steve,
> >
> >Mind to make sure that it gets registered for Searchlight as well. It's
> >not listed in config-reference [0] nor iana [1] (seems that at least
> >glance ports are not registered in iana either fwiw):
>
> Are any of the OpenStack projects actually listed on IANA?
>
> Searching through
> https://www.iana.org/assignments/service-names-port-numbers/service-names-p
> ort-numbers.txt for service names (nova, swift, etc.) I don't see *any*
> openstack services.
>
>

We've got Identity's port 35357 reserved:

openstack-id   35357   tcpOpenStack ID Service
[Rackspace_Hosting]   [Ziad_Sawalha]
 2011-08-15

Despite the reservation it still caused issues since this port is in
Linux's default range for ephemeral ports. So some other process might get
this port before keystone did and then keystone would fail to start.

Also note that this is the "admin" port, which keystone only needs for v2
(to be deprecated) since we've got policy support in v3.

Just goes to show that we shouldn't be using distinct ports for web
services; put them on a path on :443 or :80, like the web was designed for.
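For illustration, the hazard check is trivial (assuming the common Linux default `net.ipv4.ip_local_port_range` of 32768-60999; the range is configurable, so this is a sketch, not a guarantee):

```python
# Sketch: 35357 falls inside the common Linux default ephemeral port
# range, so a client socket could grab it before keystone binds.
def in_default_ephemeral_range(port, low=32768, high=60999):
    return low <= port <= high

print(in_default_ephemeral_range(35357))  # True  (keystone admin port)
print(in_default_ephemeral_range(5000))   # False (keystone main port)
```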

- Brant



> >
> >[0]
> >
> http://docs.openstack.org/liberty/config-reference/content/firewalls-defau
> >lt-ports.html
> >[1]
> >
> http://www.iana.org/assignments/service-names-port-numbers/service-names-p
> >ort-numbers.xhtml
> >
> >- Erno
> >>
> >> From: Alexander Tivelkov
> >> >
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >>  >> d...@lists.openstack.org>>
> >> Date: Friday, December 11, 2015 at 11:25 AM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >>  >> d...@lists.openstack.org>>
> >> Subject: [openstack-dev] [glance][keystone][artifacts] Service Catalog
> >>name
> >> for Glance Artifact Repository API
> >>
> >> Hi folks!
> >>
> >> As it was decided during the Mitaka design summit, we are separating the
> >> experimental Artifact Repository API from the main Glance API. This API
> >>will
> >> have a versioning sequence independent from the main Glance API and will
> >> be run as a standalone optional service, listening on the port
> >>different from
> >> the standard glance-api port (currently the proposed default is 9393).
> >> Meanwhile, it will remain an integral part of the larger Glance
> >>project, sharing
> >> the database, implementation roadmap, development and review teams
> >> etc.
> >>
> >> Since this API will be consumed by both end-users and other Openstack
> >> services, its endpoint should be discoverable via regular service
> >>catalog API.
> >> This rises the question: what should be the service name and service
> >>type for
> >> the appropriate entree in the service catalog?
> >>
> >> We've came out with the idea to call the service "glare" (this is our
> >>internal
> >> codename for the artifacts initiative, being an acronym for "GLance
> >>Artifact
> >> REpository") and set its type to "artifacts". Other alternatives for
> >>the name
> >> may be "arti" or "glance_artifacts" and for the type - "assets" or
> >>"objects"
> >> (the latter may be confusing since swift's type is object-store, so I
> >>personally
> >> don't like it).
> >>
> >> Well... we all know, naming is complicated... anyway, I'll appreciate
> >>any
> >> feedback on this. Thanks!
> >>
> >> --
> >> Regards,
> >> Alexander Tivelkov
> >>
> >> __
> >> 
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-
> >> requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Sandro Mathys
On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto  wrote:
> On Mon, Dec 14, 2015 at 6:34 PM, Sandro Mathys  wrote:
>> On Thu, Dec 10, 2015 at 4:46 PM, Galo Navarro  wrote:
>>>
>> Honestly, I don't think this discussion is leading anywhere.
>> Therefore, I'd like to request a decision by the MidoNet PTL as per
>> [1].
>
> I apologize for jumping in a bit late.  Clearly, we have those feeling
> passionate about both solutions, with good points made on both sides,
> so let me try to do the impossible task of making everyone happy (or
> at least, not completely ticked off!).
>
> I believe what we need is to come up with a solution that minimizes
> inconvenience to the developers and packagers alike, and not a
> one-sided solution.  MidoNet client is currently a mix of the low
> level MidoNet API client and the high level (Neutron) API client; they
> are mutually exclusive, and the code can be cleanly and easily separated.
> I propose that we extract the high level API client code and make it
> its own project, and leave the low level MidoNet API client code as
> is.
>
> My reasons are as follows:
>
> * We should try our best to follow the conventions of the OSt model as
> much as possible.  Without embracing their model, we are distancing
> ourselves further from becoming part of the Big Tent.   So let's move
> the client code that the Neutron plugin depends on to a separate
> project (python-os-midonetclient?) so that it follows the convention,
> and will simplify things for OSt packagers.  From OSt's point of view,
> python-midonetclient should be completely forgotten.
> * We should not cause inconvenience to the current developers of the
> low level MidoNet API, who develop python-midonetclient and
> midonet-cli for testing purposes (MDTS, for example).  Because the
> testing framework is part of midonet, moving python-midonetclient out
> of midonet will cause awkward development process for the midonet
> developers who will need to go back and forth between the projects.
> Also, by keeping them inside midonet, no change is required for
> packaging of python-midonetclient.  There are still users of the low
> level midonet API, so we will have to keep releasing the
> python-midonetclient package as we do now, but it does not necessarily
> have to be published for the OSt distributors.
>
> We have a clear separation of preferences among those that are from
> the OpenStack background and those that are not.  Thus, it makes the
> most sense to separate the projects the same way so that each party is
> responsible for the part that they consume.
>
> I hope this achieves the right balance.  Let me know if there are
> strong objections against this proposal.

So if I understand you correctly, you suggest:
1) the (midonet/internal) low level API stays where it is and will
still be called python-midonetclient.
2) the (neutron/external) high level API is moved into its own
project and will be called something like python-os-midonetclient.

Sounds like a good compromise that addresses the most important
points, thanks Ryu! I wasn't aware that these parts of
python-midonetclient were so clearly distinguishable/separable, but if
so, this makes perfect sense. I'm not perfectly happy with the naming,
but I figure it's the way to go.

-- Sandro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Volume attachment APIs CRUD inconsistencies

2015-12-14 Thread Sam Matzek
Thanks.  So it looks like os-volume_attachments has now been linked
into the 2.1 doc.  The lack of documentation for the PUT / update
attachment operation was also noted on the etherpad.  So my only remaining
questions revolve around the dual and differing API implementations of
the read volume attachments operation.  More info is placed inline.


On Fri, Dec 11, 2015 at 8:41 PM, Anne Gentle
 wrote:
>
>
> On Fri, Dec 11, 2015 at 8:38 PM, Anne Gentle 
> wrote:
>>
>>
>>
>> On Fri, Dec 11, 2015 at 7:33 PM, Matt Riedemann
>>  wrote:
>>>
>>>
>>>
>>> On 12/11/2015 1:48 PM, Sam Matzek wrote:

 The CRUD operations for Nova volume attachments have inconsistencies
 between documentation and implementation.  Additionally, the read/get
 operation is implemented twice under different URIs.  What is Nova's
 direction for volume attachment APIs and how should the following
 discrepancies be resolved?

 The current state of affairs is:
 CREATE (volume attach) is documented twice under two different URIs: [1]
 and [2], but only os-volume_attachments [1] is implemented [3].
>>
>>
>> Matt, can you look a little deeper into what happened to
>> os-volume_attachments? I'm worried we've missed one of the extensions.
>>
>> As for the docs, I thought we put in redirects from v2 to v2.1 but I need
>> to investigate.
>>
>
> And to answer my own question, yes, line 226 of the Etherpad indicates that
> doc is missing. Easy enough to add if anyone wants to grab a low-hanging
> (read:easy) patch.
>
> I'm going to hold off on the redirects until much more of that Etherpad
> indicates cleanup.
>
> Anne
>
>>
>> Anne
>>

 Attach volume as an action on the servers URI appears to have been part
 of the Nova V3 API, but its implementation no longer exists.
 Is it the future direction to have volume attach and detach done as
 server actions?

 READ is implemented twice and documented twice under two different URIs:
 os-volume_attachments [5] and server details [6]
 The two implementations do not return the same information and the only
 bit of information that is common between them is the volume ID.
 Why do we have two implementations and is one preferred over the other?
 Should one be deprecated and eventually removed with all enhancements
 going into the other?

What, if anything, should we do about the competing read
implementations? I think they should be made to have some amount of
common source and return the same information for volume attachments.
GET /v2.1/{tenant_id}/servers/{server_id} returns this, which includes
the delete_on_termination flag with microversion 2.3:
...
    "os-extended-volumes:volumes_attached": [
        {
            "id": "f350528f-408d-4ac6-8fe3-981c0aef3dc8",
            "delete_on_termination": true
        }
    ],
...

GET /v2.1/{tenant_id}/servers/{server_id}/os-volume_attachments
{
    "volumeAttachments": [
        {
            "device": "/dev/sda",
            "bootIndex": 0,
            "serverId": "15f2acd0-e254-4ce6-b490-f70154fbd481",
            "id": "f350528f-408d-4ac6-8fe3-981c0aef3dc8",
            "volumeId": "f350528f-408d-4ac6-8fe3-981c0aef3dc8"
        }
    ]
}
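Since the volume ID is the only field the two responses share, a client today has to join them on it to get a complete picture. A hypothetical client-side sketch:

```python
# Hypothetical client-side join of the two GET responses on volume id,
# using abbreviated copies of the payloads shown above.
volumes_attached = [
    {"id": "f350528f-408d-4ac6-8fe3-981c0aef3dc8",
     "delete_on_termination": True},
]
volume_attachments = [
    {"device": "/dev/sda",
     "volumeId": "f350528f-408d-4ac6-8fe3-981c0aef3dc8"},
]
by_id = {v["id"]: v for v in volumes_attached}
merged = [dict(a, **by_id.get(a["volumeId"], {})) for a in volume_attachments]
print(merged[0]["device"], merged[0]["delete_on_termination"])
# /dev/sda True
```

Having one API return all of this in a single response would remove the need for this kind of stitching.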


 UPDATE is implemented [4] but not documented.

 DELETE (detach) only appears to be implemented and documented once: [7]

 A blueprint proposal exists [8] to enhance the attach and update APIs to
 set and modify the delete_on_termination flag.  The discrepancies in the
 create and read operations calls into question whether the update change
 should be on the PUT /servers API to match the server's read [6] or if
 the os-volume_attachments update API should be modified to line up with
 os-volume_attachments read.


 [1]
 http://developer.openstack.org/api-ref-compute-v2-ext.html#attachVolume
 [2] http://developer.openstack.org/api-ref-compute-v2.1.html#attach
 [3]

 https://ask.openstack.org/en/question/85242/the-api-action-attach-is-missing/
 [4]

 https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/volumes.py#L318
 [5]
 http://developer.openstack.org/api-ref-compute-v2-ext.html#attachVolume
 [6]

 http://developer.openstack.org/api-ref-compute-v2.1.html#listDetailServers
 [7]

 http://developer.openstack.org/api-ref-compute-v2-ext.html#deleteVolumeAttachment
 [8]

 https://blueprints.launchpad.net/nova/+spec/delete-on-termination-modification




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> Several of the different paths you're pointing out are v2 legacy (e.g.
>>> anything with *-v2-ext.html). Anything with v2.1 is, well, the v2.1 API and
>>> is current.

Re: [openstack-dev] [ceilometer] status of distil?

2015-12-14 Thread Andreas Jaeger

On 12/14/2015 10:01 AM, Steve Martinelli wrote:

While I was trying to submit patches for projects that had old
keystoneclient references (distil was one of the projects), I noticed
that there hasn't been much action on this project [0]. It's been a year
since a commit [1], no releases [2], and I can't submit a patch since
the .gitreview file doesn't point to review.openstack.org [3].

Is distil alive?

[0] https://github.com/openstack/distil
[1] https://github.com/openstack/distil/commits/master
[2] https://github.com/openstack/distil/releases
[3] https://github.com/openstack/distil/blob/master/.gitreview



There has not been a single commit since the project was imported into our 
CI over a year ago; I suggest it's time to declare it orphaned and 
retire it...


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [nova][serial-console-proxy]

2015-12-14 Thread Prathyusha Guduri
Hi Tony,

Your reply gave me the hint that the nova-consoleauth service must be running. I
had not started that service before, so no token authentication was done. Now,
after starting the service:
$nova get-serial-console vm-check-6
+--------+------------------------------------------------------------------+
| Type   | Url                                                              |
+--------+------------------------------------------------------------------+
| serial | ws://127.0.0.1:6083/?token=c2fb4073-79a0-44b9-a977-cc7a1fa074f6  |
+--------+------------------------------------------------------------------+

Following
http://docs.openstack.org/developer/nova/testing/serial-console.html

$ python client.py ws://127.0.0.1:6083/?token=c2fb4073-79a0-44b9-a977-cc7a1fa074f6

...the command hangs.

Log at consoleauth shows that the token authentication is done.

Received Token: c2fb4073-79a0-44b9-a977-cc7a1fa074f6, {'instance_uuid':
u'5a725707-440e-4cf1-b262-fdb6492ac4d7', 'access_url': u'ws://
127.0.0.1:6083/?token=c2fb4073-79a0-44b9-a977-cc7a1fa074f6', 'token':
u'c2fb4073-79a0-44b9-a977-cc7a1fa074f6', 'last_activity_at':
1450111951.325766, 'internal_access_path': None, 'console_type': u'serial',
'host': u'127.0.0.1', 'port': 10005}
2015-12-14 16:53:23.083 INFO nova.consoleauth.manager
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None] Checking Token:
c2fb4073-79a0-44b9-a977-cc7a1fa074f6, True

Log at serial-proxy shows that it waits forever connecting to 127.0.0.1

127.0.0.1 - - [14/Dec/2015 16:53:22] "GET
/?token=c2fb4073-79a0-44b9-a977-cc7a1fa074f6 HTTP/1.1" 101 -
127.0.0.1 - - [14/Dec/2015 16:53:22] 127.0.0.1: Plain non-SSL (ws://)
WebSocket connection
127.0.0.1 - - [14/Dec/2015 16:53:22] 127.0.0.1: Version hybi-13, base64:
'False'
127.0.0.1 - - [14/Dec/2015 16:53:22] 127.0.0.1: Path:
'/?token=c2fb4073-79a0-44b9-a977-cc7a1fa074f6'
2015-12-14 16:53:23.011 INFO oslo_messaging._drivers.impl_rabbit
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None] Connecting to AMQP
server on 10.167.103.101:5672
2015-12-14 16:53:23.040 INFO oslo_messaging._drivers.impl_rabbit
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None] Connected to AMQP
server on 10.167.103.101:5672
2015-12-14 16:53:23.048 INFO oslo_messaging._drivers.impl_rabbit
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None] Connecting to AMQP
server on 10.167.103.101:5672
2015-12-14 16:53:23.076 INFO oslo_messaging._drivers.impl_rabbit
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None] Connected to AMQP
server on 10.167.103.101:5672
2015-12-14 16:53:23.220 INFO nova.console.websocketproxy
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None]   2: connect info:
{u'instance_uuid': u'5a725707-440e-4cf1-b262-fdb6492ac4d7',
u'internal_access_path': None, u'last_activity_at': 1450111951.325766,
u'console_type': u'serial', u'host': u'127.0.0.1', u'token':
u'c2fb4073-79a0-44b9-a977-cc7a1fa074f6', u'access_url': u'ws://
127.0.0.1:6083/?token=c2fb4073-79a0-44b9-a977-cc7a1fa074f6', u'port': 10005}
2015-12-14 16:53:23.221 INFO nova.console.websocketproxy
[req-2996a251-7fea-4b72-8fd4-30c8505224a3 None None]   2: connecting to:
127.0.0.1:10005
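
The proxy log above ends while connecting to 127.0.0.1:10005. One quick way to
narrow down where it hangs is to check whether anything is actually listening
on that port; this is an illustrative standalone probe, not a nova tool (host
and port are taken from the log above):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The proxy log shows it connecting to 127.0.0.1:10005. If this prints
# False, nothing is accepting connections there (i.e. the hang is on the
# hypervisor's serial listener side, not in the websocket layer).
print(port_open("127.0.0.1", 10005))
```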


In nova.conf I have only set enabled=true under the [serial-proxy] section.
Other values are not specified, so nova takes the default values. I have read
that with the default values a single-node setup should not have any issues.

I thought the issue might be because of websocket, but I was not able to
install the python websocket module on my system.

system-specifications :

uname -m = aarch64
uname -r = 4.2.0-03373-gdded870-dirty
uname -s = Linux
uname -v = #4 SMP Tue Dec 8 14:47:17 IST 2015

Error while installing :
libev/ev.c:45:22: fatal error: config.h: No such file or directory
 #  include "config.h"
  ^
compilation terminated.
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1


Please advise me on how to proceed further.

Thanks,
Prathyusha







On Mon, Dec 14, 2015 at 12:41 PM, Prathyusha Guduri <
prathyushaconne...@gmail.com> wrote:

> Hi Tony,
>
>
> Thanks a lot for your response.
> I actually did a rejoin-stack.sh which will also restart n-api and all
> other services. But still the same issue.
>
> Anyway now that I've to run all over again, will change my local.conf
> according to the guide and run stack.
>
> Will keep you updated.
>
> Thanks,
> Prathyusha
>
>
>
> On Mon, Dec 14, 2015 at 4:09 AM, Tony Breeds 
> wrote:
>
>> On Fri, Dec 11, 2015 at 11:07:02AM +0530, Prathyusha Guduri wrote:
>> > Hi All,
>> >
>> > I have set up open stack on an Arm64 machine and all the open stack
>> related
>> > services are running fine. Also am able to launch an instance
>> successfully.
>> > Now that I need to get a console for my instance. The noVNC console is
>> not
>> > supported in the machine am using. So I have to use a serial-proxy
>> console
>> > or spice-proxy console.
>> >
>> > After rejoining the stack, I have stopped the noVNC service and started
>> the
>> > serial proxy service in  /usr/local/bin  as
>> >
>> 

Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race conditions)?

2015-12-14 Thread Nikola Đipanov
On 12/14/2015 08:20 AM, Cheng, Yingxin wrote:
> Hi All,
> 
>  
> 
> When I was looking at bugs related to race conditions of scheduler
> [1-3], it feels like nova scheduler lacks sanity checks of schedule
> decisions according to different situations. We cannot even make sure
> that some fixes successfully mitigate race conditions to an acceptable
> scale. For example, there is no easy way to test whether server-group
> race conditions still exists after a fix for bug[1], or to make sure
> that after scheduling there will be no violations of allocation ratios
> reported by bug[2], or to test that the retry rate is acceptable in
> various corner cases proposed by bug[3]. And there will be much more in
> this list.
> 
>  
> 
> So I'm asking whether there is a plan to add those tests in the future,
> or is there a design exist to simplify writing and executing those kinds
> of tests? I'm thinking of using fake databases and fake interfaces to
> isolate the entire scheduler service, so that we can easily build up a
> disposable environment with all kinds of fake resources and fake compute
> nodes to test scheduler behaviors. It is even a good way to test whether
> scheduler is capable to scale to 10k nodes without setting up 10k real
> compute nodes.
>

This would be a useful effort - however do not assume that this is going
to be an easy task. Even in the paragraph above, you fail to take into
account that in order to test the scheduling you also need to run all
compute services since claims work like a kind of 2 phase commit where a
scheduling decision gets checked on the destination compute host
(through Claims logic), which involves locking in each compute process.
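
As a rough illustration of the fake-environment idea discussed above, the
scheduling decision plus the destination-side claim check can be exercised
entirely in memory. All class and function names here are invented for the
sketch; this is not nova's real scheduler or Claims code:

```python
import random

class FakeHost:
    """Disposable in-memory stand-in for a compute node's host state."""
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb

    def claim(self, ram_mb):
        # Mimics the second phase of the "2-phase commit": the destination
        # re-checks the scheduling decision before consuming resources.
        if ram_mb > self.free_ram_mb:
            raise ValueError("claim exceeds free RAM on %s" % self.name)
        self.free_ram_mb -= ram_mb

def schedule(hosts, ram_mb):
    """Filter hosts that fit the request, then pick the most-free one."""
    fitting = [h for h in hosts if h.free_ram_mb >= ram_mb]
    if not fitting:
        raise ValueError("no valid host")
    return max(fitting, key=lambda h: h.free_ram_mb)

# Build a disposable "cloud" of fake nodes -- no real compute services.
random.seed(42)
hosts = [FakeHost("node%d" % i, random.randrange(2048, 16384))
         for i in range(1000)]
for _ in range(500):
    schedule(hosts, 1024).claim(1024)  # claim() raises on a capacity violation

assert all(h.free_ram_mb >= 0 for h in hosts)
```

Scaling the host list to tens of thousands of entries is cheap, which is the
appeal of the approach, but note the caveat above: real claims involve
per-compute locking that a single-process fake does not reproduce.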

>  
> 
> I'm also interested in the bp[4] to reduce scheduler race conditions in
> green-thread level. I think it is a good start point in solving the huge
> racing problem of nova scheduler, and I really wish I could help on that.
> 

I proposed said blueprint but am very unlikely to have any time to work
on it this cycle, so feel free to take a stab at it. I'd be more than
happy to prioritize any reviews related to the above BP.

Thanks for your interest in this

N.

>  
> 
>  
> 
> [1] https://bugs.launchpad.net/nova/+bug/1423648
> 
> [2] https://bugs.launchpad.net/nova/+bug/1370207
> 
> [3] https://bugs.launchpad.net/nova/+bug/1341420
> 
> [4] https://blueprints.launchpad.net/nova/+spec/host-state-level-locking
> 
>  
> 
>  
> 
> Regards,
> 
> -Yingxin
> 
>  
> 
> 
> 
> 




Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog name for Glance Artifact Repository API

2015-12-14 Thread Jay Pipes

On 12/11/2015 08:51 PM, Alexander Tivelkov wrote:

Hi Steve,

Thanks for the note on port. Any objections on glare using 9494 then?
Anyone?


Yes, I object to 9494. There should be no need to use any custom port 
for Glare's API service. Just use 80/443, which are HTTP(S)'s standard 
ports. It was a silly thing that we ever used custom port numbers to 
begin with, IMHO.


Best,
-jay


On Fri, 11 Dec 2015 at 21:39, McLellan, Steven wrote:

Hi Alex,

Searchlight uses port 9393 (it also made sense to us when we spun
out of Glance!), so we would prefer it if there's another one that
makes sense. Regarding the three hardest things in computer science,
searchlight's already dealing with cache invalidation so I'll stay
out of the naming discussion.

Thanks!

Steve

From: Alexander Tivelkov
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, December 11, 2015 at 11:25 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [glance][keystone][artifacts] Service
Catalog name for Glance Artifact Repository API

Hi folks!

As it was decided during the Mitaka design summit, we are separating
the experimental Artifact Repository API from the main Glance API.
This API will have a versioning sequence independent from the main
Glance API and will be run as a standalone optional service,
listening on the port different from the standard glance-api port
(currently the proposed default is 9393). Meanwhile, it will remain
an integral part of the larger Glance project, sharing the database,
implementation roadmap, development and review teams etc.

Since this API will be consumed by both end-users and other
OpenStack services, its endpoint should be discoverable via the
regular service catalog API. This raises the question: what should be
the service name and service type for the appropriate entry in the
service catalog?

We've come up with the idea to call the service "glare" (this is
our internal codename for the artifacts initiative, being an acronym
for "GLance Artifact REpository") and set its type to "artifacts".
Other alternatives for the name may be "arti" or "glance_artifacts"
and for the type - "assets" or "objects" (the latter may be
confusing since swift's type is object-store, so I personally don't
like it).

Well... we all know, naming is complicated... anyway, I'll
appreciate any feedback on this. Thanks!

--
Regards,
Alexander Tivelkov


--
Regards,
Alexander Tivelkov







Re: [openstack-dev] [glance][keystone][artifacts] Service Catalog name for Glance Artifact Repository API

2015-12-14 Thread Ian Cordasco


On 12/14/15, 02:18, "Kuvaja, Erno"  wrote:

>> -Original Message-
>> From: McLellan, Steven
>> Sent: Friday, December 11, 2015 6:37 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [glance][keystone][artifacts] Service
>>Catalog
>> name for Glance Artifact Repository API
>> 
>> Hi Alex,
>> 
>> Searchlight uses port 9393 (it also made sense to us when we spun out of
>> Glance!), so we would prefer it if there's another one that makes sense.
>> Regarding the three hardest things in computer science, searchlight's
>>already
>> dealing with cache invalidation so I'll stay out of the naming
>>discussion.
>> 
>> Thanks!
>> 
>> Steve
>
>Thanks for the heads up Steve,
>
>Would you mind making sure that it gets registered for Searchlight as
>well? It's not listed in the config-reference [0] nor at IANA [1] (it
>seems that at least the glance ports are not registered with IANA either,
>FWIW):

Are any of the OpenStack projects actually listed on IANA?

Searching through 
https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
for service names (nova, swift, etc.) I don't see *any* OpenStack services.

>
>[0] http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html
>[1] http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml
>
>- Erno
>> 
>> From: Alexander Tivelkov
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: Friday, December 11, 2015 at 11:25 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: [openstack-dev] [glance][keystone][artifacts] Service Catalog
>> name for Glance Artifact Repository API
>> 
>> Hi folks!
>> 
>> As it was decided during the Mitaka design summit, we are separating the
>> experimental Artifact Repository API from the main Glance API. This API
>>will
>> have a versioning sequence independent from the main Glance API and will
>> be run as a standalone optional service, listening on the port
>>different from
>> the standard glance-api port (currently the proposed default is 9393).
>> Meanwhile, it will remain an integral part of the larger Glance
>>project, sharing
>> the database, implementation roadmap, development and review teams
>> etc.
>> 
>> Since this API will be consumed by both end-users and other OpenStack
>> services, its endpoint should be discoverable via the regular service
>> catalog API. This raises the question: what should be the service name
>> and service type for the appropriate entry in the service catalog?
>> 
>> We've come up with the idea to call the service "glare" (this is our
>>internal
>> codename for the artifacts initiative, being an acronym for "GLance
>>Artifact
>> REpository") and set its type to "artifacts". Other alternatives for
>>the name
>> may be "arti" or "glance_artifacts" and for the type - "assets" or
>>"objects"
>> (the latter may be confusing since swift's type is object-store, so I
>>personally
>> don't like it).
>> 
>> Well... we all know, naming is complicated... anyway, I'll appreciate
>>any
>> feedback on this. Thanks!
>> 
>> --
>> Regards,
>> Alexander Tivelkov
>> 
>



[openstack-dev] [cinder] Custom fields for versioned objects

2015-12-14 Thread Ryan Rossiter
Hi everyone,

I have a change submitted that lays the groundwork for using custom enums and 
fields that are used by versioned objects [1]. These custom fields allow for 
verification on a set of valid values, which prevents the field from being 
mistakenly set to something invalid. These custom fields are best suited for 
StringFields that are only assigned certain exact strings (such as a status, 
format, or type). Some examples for Nova: PciDevice.status, 
ImageMetaProps.hw_scsi_model, and BlockDeviceMapping.source_type.

These new enums (that are consumed by the fields) are also great for 
centralizing constants for hard-coded strings throughout the code. For example 
(using [1]):

Instead of
if backup.status == ‘creating’:


We now have
if backup.status == fields.BackupStatus.CREATING:


Granted, this causes a lot of brainless line changes that make for a lot of 
+/-, but it centralizes a lot. In changes like this, I hope I found all of the 
occurrences of the different backup statuses, but GitHub search and grep can 
only do so much. If it turns out this gets in and I missed a string or two, 
it’s not the end of the world, just push up a follow-up patch to fix up the 
missed strings. That part of the review is not affected in any way by the 
RPC/object versioning.

Speaking of object versioning, notice that in cinder/objects/backup.py the 
version was updated to accommodate the new field type. The underlying data 
passed over RPC has not changed, but this is done for compatibility with older 
versions that may not have obeyed the set of valid values.

[1] https://review.openstack.org/#/c/256737/


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Ryu Ishimoto
On Mon, Dec 14, 2015 at 6:34 PM, Sandro Mathys  wrote:
> On Thu, Dec 10, 2015 at 4:46 PM, Galo Navarro  wrote:
>>
> Honestly, I don't think this discussion is leading anywhere.
> Therefore, I'd like to request a decision by the MidoNet PTL as per
> [1].

I apologize for jumping in a bit late.  Clearly, we have those feeling
passionate about both solutions, with good points made on both sides,
so let me try to do the impossible task of making everyone happy (or
at least, not completely ticked off!).

I believe what we need is to come up with a solution that minimizes
inconvenience to the developers and packagers alike, not a one-sided
solution.  The MidoNet client is currently a mix of the low-level
MidoNet API client and the high-level (Neutron) API client; they are
mutually exclusive, and the code can be cleanly separated.  I propose
that we extract the high-level API client code and make it its own
project, and leave the low-level MidoNet API client code as is.

My reasons are as follows:

* We should try our best to follow the conventions of the OSt model as
much as possible.  Without embracing their model, we are distancing
ourselves further from becoming part of the Big Tent.   So let's move
the client code that the Neutron plugin depends on to a separate
project (python-os-midonetclient?) so that it follows the convention,
and will simplify things for OSt packagers.  From OSt's point of view,
python-midonetclient should be completely forgotten.
* We should not cause inconvenience to the current developers of the
low level MidoNet API, who develop python-midonetclient and
midonet-cli for testing purposes (MDTS, for example).  Because the
testing framework is part of midonet, moving python-midonetclient out
of midonet will cause awkward development process for the midonet
developers who will need to go back and forth between the projects.
Also, by keeping them inside midonet, no change is required for
packaging of python-midonetclient.  There are still users of the low
level midonet API, so we will have to keep releasing the
python-midonetclient package as we do now, but it does not necessarily
have to be published for the OSt distributors.

We have a clear separation of preferences among those that are from
the OpenStack background and those that are not.  Thus, it makes the
most sense to separate the projects the same way so that each party is
responsible for the part that they consume.

I hope this achieves the right balance.  Let me know if there are
strong objections against this proposal.

Best,
Ryu



Re: [openstack-dev] [MidoNet] Fwd: [MidoNet-Dev] Sync virtual topology data from neutron DB to Zookeeper?

2015-12-14 Thread Ryu Ishimoto
Hi Nick,

We have already designed the data sync feature[1], but this
development was suspended temporarily in favor of completing the v5.0
development of MidoNet.

We will be resuming development work on this project soon (with high priority).

It sounds to me like you need a completed, mature tool immediately to
achieve what you want, which we cannot provide right now.  There is a
networking-midonet meeting tomorrow on IRC at 07:00UTC [2] if you want
to discuss this further.  We could try to brainstorm possible
solutions together.

Ryu

[1] 
https://github.com/openstack/networking-midonet/blob/master/specs/kilo/data_sync.rst
[2] http://eavesdrop.openstack.org/#Networking_Midonet_meeting

On Mon, Dec 14, 2015 at 11:08 PM, Galo Navarro  wrote:
> Hi Li,
>
> Sorry for the late reply. Unrelated point: please note that we've
> moved the mailing lists to Openstack infra
> (openstack-dev@lists.openstack.org  - I'm ccing the list here).
>
> At the moment we don't support syncing the full Neutron DB, there has
> been work done for this that would allow this use case, but it's still
> not complete or released.
>
> @Ryu may be able to provide recommendations to do this following a
> manual process.
>
> Cheers,
> g
>
>
>
> On 4 December 2015 at 09:27, Li Ma  wrote:
>> Hi midoers,
>>
>> I have an OpenStack cloud with neutron ML2+OVS. I'd like to switch
>> from OVS to MidoNet in that cloud.
>>
>> Actually the neutron DB stores all the existing virtual topology. I
>> wonder if there's some guides or ops tools for MidoNet to sync data
>> from the neutron DB to Zookeeper.
>>
>> Thanks a lot,
>> --
>>
>> Li Ma (Nick)
>> Email: skywalker.n...@gmail.com
>> ___
>> MidoNet mailing list
>> mido...@lists.midonet.org
>> http://lists.midonet.org/listinfo/midonet
>



Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-14 Thread Kevin Carter
The port binding issues are usually related to a neutron physical interface 
mapping issue; however, based on your previous config I don't think that was the 
problem. If you're deploying Liberty/Master (Mitaka), there was a fix that 
went in that resolved an issue within neutron and the use of L2/multicast 
groups [0]. If you're on the stable tag the fix has not been released yet; it 
will be there for the 12.0.3 tag, coming soon. To resolve the issue, the fix is 
simply to add the following to your `user_variables.yml` file:

== If you don't want to use l2 population add the following ==
neutron_l2_population: "False"
neutron_vxlan_group: "239.1.1.1"
 
== If you want to use l2 population add the following ==
neutron_l2_population: "True"

As for the neutron services on your compute nodes, they should be running 
within the host namespace. In Liberty/Master the python bits will be within a 
venv using an upstart init script to control the service. If you're not seeing 
the neutron service running, it's likely due to this bug [2], which is resolved 
by dropping the previously mentioned user variable options. 

I hope this helps and let me know how it goes. 

[0] https://review.openstack.org/#/c/255624
[1] https://github.com/openstack/openstack-ansible/commits/liberty
[2] https://bugs.launchpad.net/neutron/+bug/1470584

--

Kevin Carter
IRC: cloudnull



From: Mark Korondi 
Sent: Sunday, December 13, 2015 9:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ansible] One or more undefined variables: 'dict 
object' has no attribute 'bridge'

Thanks cloudnull,

This solved the installation issue. I commented out all non-flat
related networks before, to investigate my main problem, which is

> PortBindingFailed: Binding failed for port 
> fe67a2d5-6d6a-4440-80d0-acbe2ff5c27f, please check neutron logs for more 
> information.

I still have this problem; I created the flat external network with no
errors, still I get this when trying to launch an instance. What's
really interesting to me, is that no neutron microservices are
deployed and running on the compute node.

Mark (kmARC)



Re: [openstack-dev] [openstack-ansible] Mid Cycle Sprint

2015-12-14 Thread Curtis
FYI it looks like Ansible Fest London is Feb 18th.

On Thu, Dec 10, 2015 at 1:29 PM, Amy Marrich  wrote:
> I'd be game to join in if it's in San Antonio. While I'd love to go to
> London, I don't think I'd make it.
>
> Like Major I'd like to see some doc work.
>
> Amy Marrich
> 
> From: Jesse Pretorius 
> Sent: Wednesday, December 9, 2015 6:45:56 AM
> To: openstack-dev@lists.openstack.org;
> openstack-operat...@lists.openstack.org
> Subject: [openstack-dev] [openstack-ansible] Mid Cycle Sprint
>
> Hi everyone,
>
> At the Mitaka design summit in Tokyo we had some corridor discussions about
> doing a mid-cycle meetup for the purpose of continuing some design
> discussions and doing some specific sprint work.
>
> ***
> I'd like indications of who would like to attend and what
> locations/dates/topics/sprints would be of interest to you.
> ***
>
> For guidance/background I've put some notes together below:
>
> Location
> 
> We have contributors, deployers and downstream consumers across the globe so
> picking a venue is difficult. Rackspace have facilities in the UK (Hayes,
> West London) and in the US (San Antonio) and are happy for us to make use of
> them.
>
> Dates
> -
> Most of the mid-cycles for upstream OpenStack projects are being held in
> January. The Operators mid-cycle is on February 15-16.
>
> As I feel that it's important that we're all as involved as possible in
> these events, I would suggest that we schedule ours after the Operators
> mid-cycle.
>
> It strikes me that it may be useful to do our mid-cycle immediately after
> the Ops mid-cycle, and do it in the UK. This may help to optimise travel for
> many of us.
>
> Format
> --
> The format of the summit is really for us to choose, but typically they're
> formatted along the lines of something like this:
>
> Day 1: Big group discussions similar in format to sessions at the design
> summit.
>
> Day 2: Collaborative code reviews, usually performed on a projector, where
> the goal is to merge things that day (if a review needs more than a single
> iteration, we skip it. If a review needs small revisions, we do them on the
> spot).
>
> Day 3: Small group / pair programming.
>
> Topics
> --
> Some topics/sprints that come to mind that we could explore/do are:
>  - Install Guide Documentation Improvement [1]
>  - Development Documentation Improvement (best practises, testing, how to
> develop a new role, etc)
>  - Upgrade Framework [2]
>  - Multi-OS Support [3]
>
> [1] https://etherpad.openstack.org/p/oa-install-docs
> [2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
> [3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
>



-- 
Blog: serverascode.com



Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race conditions)?

2015-12-14 Thread Cheng, Yingxin

> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: Monday, December 14, 2015 11:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race
> conditions)?
> 
> On 12/14/2015 08:20 AM, Cheng, Yingxin wrote:
> > Hi All,
> >
> >
> >
> > When I was looking at bugs related to race conditions of scheduler
> > [1-3], it feels like nova scheduler lacks sanity checks of schedule
> > decisions according to different situations. We cannot even make sure
> > that some fixes successfully mitigate race conditions to an acceptable
> > scale. For example, there is no easy way to test whether server-group
> > race conditions still exists after a fix for bug[1], or to make sure
> > that after scheduling there will be no violations of allocation ratios
> > reported by bug[2], or to test that the retry rate is acceptable in
> > various corner cases proposed by bug[3]. And there will be much more
> > in this list.
> >
> >
> >
> > So I'm asking whether there is a plan to add those tests in the
> > future, or is there a design exist to simplify writing and executing
> > those kinds of tests? I'm thinking of using fake databases and fake
> > interfaces to isolate the entire scheduler service, so that we can
> > easily build up a disposable environment with all kinds of fake
> > resources and fake compute nodes to test scheduler behaviors. It is
> > even a good way to test whether scheduler is capable to scale to 10k
> > nodes without setting up 10k real compute nodes.
> >
> 
> This would be a useful effort - however do not assume that this is going to 
> be an
> easy task. Even in the paragraph above, you fail to take into account that in
> order to test the scheduling you also need to run all compute services since
> claims work like a kind of 2 phase commit where a scheduling decision gets
> checked on the destination compute host (through Claims logic), which involves
> locking in each compute process.
> 

Yes, the final goal is to test the entire scheduling process including the 2PC. 
As the scheduler is still in the process of being decoupled, some parts such as 
the RT and the retry mechanism are highly coupled with nova, so IMO it is not a 
good idea to include them at this stage. I'll therefore try to isolate the 
filter scheduler as a first step, and hope this will be supported by the 
community.


> >
> >
> > I'm also interested in the bp[4] to reduce scheduler race conditions
> > in green-thread level. I think it is a good start point in solving the
> > huge racing problem of nova scheduler, and I really wish I could help on 
> > that.
> >
> 
> I proposed said blueprint but am very unlikely to have any time to work on
> it this cycle, so feel free to take a stab at it. I'd be more than happy to
> prioritize any reviews related to the above BP.
> 
> Thanks for your interest in this
> 
> N.
> 

Many thanks Nikola! I'm still looking at the claim logic and trying to find a
way to merge it with the scheduler host state; I will upload patches as soon
as I figure it out.


> >
> >
> >
> >
> > [1] https://bugs.launchpad.net/nova/+bug/1423648
> >
> > [2] https://bugs.launchpad.net/nova/+bug/1370207
> >
> > [3] https://bugs.launchpad.net/nova/+bug/1341420
> >
> > [4]
> > https://blueprints.launchpad.net/nova/+spec/host-state-level-locking
> >
> >
> >
> >
> >
> > Regards,
> >
> > -Yingxin
> >



Regards,
-Yingxin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Regarding v2 LoadBalancer's status(es)

2015-12-14 Thread Brandon Logan
Looks like that is only for v1 though.

On Tue, 2015-12-15 at 05:38 +, Phillip Toohill wrote:
> >Yeah this needs to be better documented.  I would say all of those
> >statuses in the docs pertain to provisioning_status, except for
> >INACTIVE, which I'm actually not sure where that is being used. ...
> 
> There is this patch to utilize the INACTIVE status: 
> https://review.openstack.org/#/c/255875/ 
> 
> 
> 
> From: Brandon Logan 
> Sent: Monday, December 14, 2015 6:25 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Regarding v2 LoadBalancer's 
> status(es)
> 
> Hi Bryan,
> 
> On Mon, 2015-12-14 at 15:19 -0600, Bryan Jones wrote:
> > Hi All,
> >
> > I had a few issues/questions regarding the statuses
> > (provisioning_status and operating_status) of a v2 LoadBalancer. To
> > preface these, I am working on the LBaaS v2 support in Heat.
> >
> > The first question regards the allowed values for each of
> > provisioning_status and operating status. Here it seems the
> > documentation is ambiguous. [1] provides a list of possible statuses,
> > but does not mention if they are options for provisioning_status or
> >  operating_status. [2] provides much clearer options for each status,
> > but does not show the INACTIVE status mention in [1]. Should INACTIVE
> > be included in the possible options for one of the statuses, or should
> > it be removed from [1] altogether?
> 
> Yeah this needs to be better documented.  I would say all of those
> statuses in the docs pertain to provisioning_status, except for
> INACTIVE, which I'm actually not sure where that is being used.  I have
> to plead ignorance on this.  I was initially thinking operating_status
> but I don't see it being used.  So that probably needs to just be pulled
> out of the docs entirely.  The operating_status statuses are listed in
> code here [1].  They are pretty self explanatory, except for maybe
> DEGRADED.  DEGRADED basically means that one or more of its descendants
> are in an OFFLINE operating_status.  NO_MONITOR means no health monitor
> so operating_status can't be evaluated.  DISABLED means admin_state_up
> on that entity is set to False.
> 
> >
> > Second, [1] also mentions that an error_details attribute will be
> > provided if the status is ERROR. I do not see any error_details
> > attribute in the LoadBalancer code [3], so I am wondering where that
> > attribute comes from?
> 
> This is actually something that was in v1 (status_description) that we
> have not added to v2.  It would be nice to have but its not there yet.
> The docs should be updated to remove this.
> >
> > Finally, I'm curious what operations can be performed on the
> > LoadBalancer if the operating_status is OFFLINE and the
> > provisioning_status is ACTIVE. First is this state possible? And
> > second, can the LoadBalancer be manipulated (i.e. add a Listener to
> > the LoadBalancer) if it is in this state?
> 
> Operations on a load balancer are only restricted based on the
> provisioning_status.  operating_status is purely for information.  If
> the load balancer's provisioning status is ACTIVE then you can do any
> operation on it, regardless of operating_status.
> 
> I don't know of a current scenario where ACTIVE/OFFLINE status is
> actually possible for a load balancer, but a driver could decide to do
> that, though I'd like to understand that use case first.
> 
> >
> > [1]
> > http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0
> > [2]
> > http://developer.openstack.org/api-ref-networking-v2-ext.html#showLoadBalancerv2
> > [3]
> > https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/data_models.py#L503
> >
> > Thanks,
> >
> > BRYAN JONES
> > Software Engineer - OpenStack Development
> >
> > ___
> > Phone: 1-507-253-2620
> > E-mail: jone...@us.ibm.com
> > Find me on: LinkedIn:
> > http://www.linkedin.com/in/bjones17/
> > IBM
> >
> >   3605 Hwy 52 N
> >Rochester, MN 55901-1407
> >   United States
> >
> >
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> [1]
> https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/constants.py#L100
> 
> Thanks,
> Brandon
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Re: [openstack-dev] [Neutron][LBaaS] Regarding v2 LoadBalancer's status(es)

2015-12-14 Thread Phillip Toohill
>Yeah this needs to be better documented.  I would say all of those
>statuses in the docs pertain to provisioning_status, except for
>INACTIVE, which I'm actually not sure where that is being used. ...

There is this patch to utilize the INACTIVE status: 
https://review.openstack.org/#/c/255875/ 



From: Brandon Logan 
Sent: Monday, December 14, 2015 6:25 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Regarding v2 LoadBalancer's 
status(es)

Hi Bryan,

On Mon, 2015-12-14 at 15:19 -0600, Bryan Jones wrote:
> Hi All,
>
> I had a few issues/questions regarding the statuses
> (provisioning_status and operating_status) of a v2 LoadBalancer. To
> preface these, I am working on the LBaaS v2 support in Heat.
>
> The first question regards the allowed values for each of
> provisioning_status and operating status. Here it seems the
> documentation is ambiguous. [1] provides a list of possible statuses,
> but does not mention if they are options for provisioning_status or
>  operating_status. [2] provides much clearer options for each status,
> but does not show the INACTIVE status mention in [1]. Should INACTIVE
> be included in the possible options for one of the statuses, or should
> it be removed from [1] altogether?

Yeah this needs to be better documented.  I would say all of those
statuses in the docs pertain to provisioning_status, except for
INACTIVE, which I'm actually not sure where that is being used.  I have
to plead ignorance on this.  I was initially thinking operating_status
but I don't see it being used.  So that probably needs to just be pulled
out of the docs entirely.  The operating_status statuses are listed in
code here [1].  They are pretty self explanatory, except for maybe
DEGRADED.  DEGRADED basically means that one or more of its descendants
are in an OFFLINE operating_status.  NO_MONITOR means no health monitor
so operating_status can't be evaluated.  DISABLED means admin_state_up
on that entity is set to False.
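
As a rough sketch of that DEGRADED rule (illustrative only, not the actual
neutron-lbaas code; the real statuses live in the constants module at [1]):

```python
# Illustrative roll-up of a load balancer's operating_status from its
# descendants, per the DEGRADED rule described above. Not actual
# neutron-lbaas code.

ONLINE = 'ONLINE'
OFFLINE = 'OFFLINE'
DEGRADED = 'DEGRADED'


def roll_up(own_status, descendant_statuses):
    """One or more OFFLINE descendants puts the parent into DEGRADED."""
    if any(s == OFFLINE for s in descendant_statuses):
        return DEGRADED
    return own_status
```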

>
> Second, [1] also mentions that an error_details attribute will be
> provided if the status is ERROR. I do not see any error_details
> attribute in the LoadBalancer code [3], so I am wondering where that
> attribute comes from?

This is actually something that was in v1 (status_description) that we
have not added to v2.  It would be nice to have but its not there yet.
The docs should be updated to remove this.
>
> Finally, I'm curious what operations can be performed on the
> LoadBalancer if the operating_status is OFFLINE and the
> provisioning_status is ACTIVE. First is this state possible? And
> second, can the LoadBalancer be manipulated (i.e. add a Listener to
> the LoadBalancer) if it is in this state?

Operations on a load balancer are only restricted based on the
provisioning_status.  operating_status is purely for information.  If
the load balancer's provisioning status is ACTIVE then you can do any
operation on it, regardless of operating_status.

I don't know of a current scenario where ACTIVE/OFFLINE status is
actually possible for a load balancer, but a driver could decide to do
that, though I'd like to understand that use case first.

>
> [1]
> http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0
> [2]
> http://developer.openstack.org/api-ref-networking-v2-ext.html#showLoadBalancerv2
> [3]
> https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/data_models.py#L503
>
> Thanks,
>
> BRYAN JONES
> Software Engineer - OpenStack Development
>
> ___
> Phone: 1-507-253-2620
> E-mail: jone...@us.ibm.com
> Find me on: LinkedIn:
> http://www.linkedin.com/in/bjones17/
> IBM
>
>   3605 Hwy 52 N
>Rochester, MN 55901-1407
>   United States
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[1]
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/constants.py#L100

Thanks,
Brandon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-12-14 Thread Carl Baldwin
On Mon, Dec 14, 2015 at 12:41 PM, Rossella Sblendido
 wrote:
> On 11/25/2015 11:05 PM, Assaf Muller wrote:
>> We could then consider running the script automatically on a daily
>> basis and publishing the
>> resulting URL in a nice bookmarkable place.
>
> An update on this. The easiest bookmarkable place that I found is my blog
> [1]. I have a script that updates the url every day; I can do that more
> often. I'd love to have the url on the wiki, but I think that requires
> creating a patch every day and approving it... not nice at all. Any
> suggestions?
>
> [1] http://rossella-sblendido.net/2015/12/14/gerrit-url-neutron-reviews/

I bet we could find a permanent url to give us a nice "307 Temporary
Redirect" so that we could bookmark the page and still have the URL
updated daily from the script.  With such a redirect, the bookmark
should always go to the fresh dashboard through it.  I wonder if infra
has any ideas on where this could be hosted.
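
As a toy sketch of that idea (nothing infra-specific here; just a tiny
handler that re-reads a file the daily script would rewrite):

```python
# Minimal sketch: a stable URL that issues a 307 to whatever dashboard URL
# the daily script last wrote to a file. Purely illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

URL_FILE = 'latest_dashboard_url.txt'  # rewritten daily by the script


def redirect_target(file_contents):
    """The Location header value is just the file's contents, stripped."""
    return file_contents.strip()


class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open(URL_FILE) as f:
            target = redirect_target(f.read())
        self.send_response(307)  # Temporary Redirect: bookmarks stay fresh
        self.send_header('Location', target)
        self.end_headers()

# HTTPServer(('', 8080), RedirectHandler).serve_forever()  # to actually run it
```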

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-14 Thread Tim Bell
Can we have nested project quotas in from the beginning? Nested projects are 
in Keystone V3 from Kilo onwards, and retrofitting this is hard work.



For details, see the Nova functions at 
https://review.openstack.org/#/c/242626/. Cinder now also has similar 
functions.



Tim



From: Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com]
Sent: 15 December 2015 01:59
To: OpenStack Development Mailing List (not for usage questions) 
; OpenStack Mailing List (not for usage 
questions) 
Subject: [openstack-dev] [openstack][magnum] Quota for Magnum Resources



Hi All,



Currently, it is possible to create an unlimited number of resources like 
bay/pod/service/. In Magnum, there should be a limit on how many Magnum 
resources a user or project can create, and the limit should be 
configurable [1].



I proposed following design :-



1. Introduce new table magnum.quotas

+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+

resource can be Bay, Pod, Containers, etc.



2. API controller for quota will be created to make sure basic CLI commands 
work.

quota-show, quota-delete, quota-create, quota-update

3. When the admin specifies a quota of X resources, the code should abide by 
it. For example, if the hard limit for Bay is 5 (i.e. a project can have a 
maximum of 5 Bays) and a user in that project tries to exceed it, the request 
won't be allowed. The same goes for the other resources.

4. Please note that quota validation only works for resources created via 
Magnum. I could not think of a way for Magnum to know whether COE-specific 
utilities created a resource in the background. One way could be to compare 
what is stored in magnum.quotas with the actual resources created for a 
particular bay in k8s/the COE.

5. Introduce a config variable to set quotas values.
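
The hard-limit check in point 3 could be sketched roughly like this (the
names, such as QuotaExceeded, and the in-memory dicts are illustrative, not
proposed Magnum code):

```python
# Sketch of the hard-limit enforcement from point 3; quotas and usage are
# plain dicts here purely for illustration (the real data would live in
# magnum.quotas and the resource tables).

class QuotaExceeded(Exception):
    pass


def check_quota(quotas, usage, project_id, resource, requested=1):
    """Raise if creating `requested` more resources would exceed the limit."""
    hard_limit = quotas.get((project_id, resource))
    if hard_limit is None:
        return  # no quota row: unlimited (or fall back to a config default)
    in_use = usage.get((project_id, resource), 0)
    if in_use + requested > hard_limit:
        raise QuotaExceeded('%s quota exceeded for project %s (limit %d)'
                            % (resource, project_id, hard_limit))
```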

If everyone agrees, I will start the changes by introducing quota restrictions 
on Bay creation.

Thoughts ??



-Vilobh

[1] https://blueprints.launchpad.net/magnum/+spec/resource-quota



smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Next vitrage meeting

2015-12-14 Thread AFEK, Ifat (Ifat)
Hi,

Vitrage next weekly meeting will be tomorrow, Wednesday at 9:00 UTC, on 
#openstack-meeting-3 channel.

Agenda:

* Current status and progress from last week
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Thanks, 
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Tempest] Asking for reviews from Tempest cores.

2015-12-14 Thread Sheng Bo Hou
Hi Tempest folks,

https://review.openstack.org/#/c/195443/

I am asking you to review this patch, which is the integration test for 
volume retype with migration in Cinder. It has taken quite a while and quite 
a few cycles to get mature. It is a very important test for the volume 
migration feature in Cinder.

Thank you for your attention.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN
E-mail: sb...@cn.ibm.com
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [vitrage] Gerrit Upgrade 12/16

2015-12-14 Thread AFEK, Ifat (Ifat)
Hi,

Reminder: Gerrit upgrade is scheduled for tomorrow at 17:00 UTC.

Ifat.


-Original Message-
From: Spencer Krum [mailto:n...@spencerkrum.com] 
Sent: Monday, December 14, 2015 9:53 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Gerrit Upgrade 12/16

This is a gentle reminder that the downtime will be this Wednesday starting at 
17:00 UTC.

Thank you for your patience,
Spencer

--
  Spencer Krum
  n...@spencerkrum.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] status of distil?

2015-12-14 Thread gord chung

wasn't aware this project existed. i've contacted flwang.

just for reference, for those that extend Telemetry projects, it's good 
to promote it here: 
https://wiki.openstack.org/wiki/Telemetry#Externally_Managed


On 14/12/2015 4:09 AM, Andreas Jaeger wrote:

On 12/14/2015 10:01 AM, Steve Martinelli wrote:

While I was trying to submit patches for projects that had old
keystoneclient references (distil was one of the projects), I noticed
that there hasn't been much action on this project [0]. It's been a year
since a commit [1], no releases [2], and I can't submit a patch since
the .gitreview file doesn't point to review.openstack.org [3].

Is distil alive?

[0] https://github.com/openstack/distil
[1] https://github.com/openstack/distil/commits/master
[2] https://github.com/openstack/distil/releases
[3] https://github.com/openstack/distil/blob/master/.gitreview



There has not been a single commit since the project was imported into our 
CI over a year ago; I suggest it's time to declare it orphaned and 
retire it...


Andreas


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-14 Thread Jeremy Stanley
On 2015-12-14 12:18:07 +0100 (+0100), Jordan Pittier wrote:
> Tox 2.3.1 was released on pypi a few minutes ago, and it fixes
> this issue.

Thanks for testing it--I've gone ahead and unblocked our image
update automation.

If anyone notices new abnormalities from tox, please let the Infra
team know as soon as possible and we can roll back to pre-2.3.x
images again.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Dropping Python 2.6

2015-12-14 Thread Roman Prykhodchenko
Fuelers,

Since Mitaka, OpenStack Infra has no resources to test Python 2.6, so the 
corresponding jobs are not running anymore. Since the Fuel master node is on 
CentOS 7 now, let's drop Python 2.6 support in Fuel.


- romcheg


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Microversions support for extensions without Controller

2015-12-14 Thread Alex Xu
Hi, Alexandre,

Yes, I think we need pass the version into `server_update` extension point.
My irc nick is alex_xu, let me know if you have any trouble with this.
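
A rough sketch of what the hook could look like once the version is threaded
through — the class and the version number here are placeholders, not actual
Nova code:

```python
# Placeholder version object and extension hook; illustrative only.

class APIVersionRequest(object):
    def __init__(self, version_string):
        self.ver = tuple(int(p) for p in version_string.split('.'))

    def matches(self, min_version):
        return self.ver >= tuple(int(p) for p in min_version.split('.'))


def server_update(update_dict, body, req_version):
    # '2.19' is an arbitrary placeholder for whichever microversion
    # would introduce user_data updates.
    if 'user_data' in body and req_version.matches('2.19'):
        update_dict['user_data'] = body['user_data']
    return update_dict
```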

Thanks
Alex

2015-12-13 2:34 GMT+08:00 Alexandre Levine :

> Hi all,
>
> os-user-data extension implements server_create method to add user_data
> for server creation. No Controller is used for this, only "class
> UserData(extensions.V21APIExtensionBase)".
>
> I want to add server_update method allowing to update the user_data.
> Obviously I have to add it as a microversioned functionality.
>
> And here is the problem: there is no information about the incoming
> request version in this code. It is available for Controllers only. But
> checking the version in controller would be too late, because the instance
> is already updated (non-generator extensions are post-processed).
>
> Can anybody guide me how to resolve this collision?
>
> Would it be possible to just retroactively add the user_data modification
> for the whole 2.1 version skipping the microversioning? Or we need to
> change nova so that request version is passed through to extension?
>
> Best regards,
>   Alex Levine
>
> P.S. Sorry for the second attempt - previous letter went with [openstack]
> instead of [openstack-dev] in the Subject.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MidoNet] Fwd: [MidoNet-Dev] Sync virtual topology data from neutron DB to Zookeeper?

2015-12-14 Thread Galo Navarro
Hi Li,

Sorry for the late reply. Unrelated point: please note that we've
moved the mailing lists to Openstack infra
(openstack-dev@lists.openstack.org  - I'm ccing the list here).

At the moment we don't support syncing the full Neutron DB. There has
been work done that would allow this use case, but it's still not
complete or released.

@Ryu may be able to provide recommendations to do this following a
manual process.

Cheers,
g



On 4 December 2015 at 09:27, Li Ma  wrote:
> Hi midoers,
>
> I have an OpenStack cloud with neutron ML2+OVS. I'd like to switch
> from OVS to MidoNet in that cloud.
>
> Actually the neutron DB stores all the existing virtual topology. I
> wonder if there's some guides or ops tools for MidoNet to sync data
> from the neutron DB to Zookeeper.
>
> Thanks a lot,
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
> ___
> MidoNet mailing list
> mido...@lists.midonet.org
> http://lists.midonet.org/listinfo/midonet

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python 2.6

2015-12-14 Thread Roman Prykhodchenko
Sorry for duplicating discussions. The os-dev subscription was broken for me 
for a while, so I missed a lot :(

> On 14 Dec 2015, at 15:23, Evgeniy L  wrote:
> 
> Hi Roman,
> 
> We've discussed it [1], so +1
> 
> [1] 
> https://openstack.nimeyo.com/67521/openstack-dev-fuel-dropping-python2-6-compatibility
>  
> 
> 
> On Mon, Dec 14, 2015 at 5:05 PM, Roman Prykhodchenko  > wrote:
> Fuelers,
> 
> Since Mitaka OpenStack Infra has no resources to test python 2.6 support so 
> the corresponding jobs are not running anymore. Since Fuel master node is on 
> CentOS 7 now, let’s drop Python 2.6 support in Fuel.
> 
> 
> - romcheg
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-14 Thread Jeremy Stanley
On 2015-12-14 12:55:45 +1300 (+1300), Robert Collins wrote:
[...]
> my suggestion would be that we either make tox pip installed
> during jobs (across the board), so that we can in fact control it
> with upper-constraints,


That's a lot of added complication to deal with possible regressions
in just one tool. Why not any of the myriad of other
non-requirements-listed things which have also impacted us in the
past?

> or we work on functional tests of new images before they go-live

This seems like a more holistic approach, though running random
projects' jobs is not the way to accomplish it (as we discovered
back when we used to run DevStack smoke tests to validate our images
and frequently rejected perfectly good images due to
nondeterministic errors in the tests).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python 2.6

2015-12-14 Thread Maciej Kwiek
+1

On Mon, Dec 14, 2015 at 3:05 PM, Roman Prykhodchenko  wrote:

> Fuelers,
>
> Since Mitaka OpenStack Infra has no resources to test python 2.6 support
> so the corresponding jobs are not running anymore. Since Fuel master node
> is on CentOS 7 now, let’s drop Python 2.6 support in Fuel.
>
>
> - romcheg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Dropping Python 2.6

2015-12-14 Thread Evgeniy L
Hi Roman,

We've discussed it [1], so +1

[1]
https://openstack.nimeyo.com/67521/openstack-dev-fuel-dropping-python2-6-compatibility

On Mon, Dec 14, 2015 at 5:05 PM, Roman Prykhodchenko  wrote:

> Fuelers,
>
> Since Mitaka OpenStack Infra has no resources to test python 2.6 support
> so the corresponding jobs are not running anymore. Since Fuel master node
> is on CentOS 7 now, let’s drop Python 2.6 support in Fuel.
>
>
> - romcheg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][QA] New version of fuel-devops (2.9.15)

2015-12-14 Thread Dennis Dmitriev
+openstack-dev

On 12/14/2015 02:40 PM, Dennis Dmitriev wrote:
> Hi All,
>
> We have updated the version of 'fuel-devops' framework to the 2.9.15.
>
> This is mainly bugfix update.
>
> Version 2.9.15 will be updated on our product CI during next several days.
>
> Changes since 2.9.13:
>
> - Process boolean environment variables like in fuel-qa, new method
> get_var_as_bool();
>
> - Use CPU mode 'host-passthrough' to get fuel-devops working on newest
> CPUs [1];
>
> - Fix role names in Node model [2]:
> 'admin' => 'fuel_master'
> 'slave' => 'fuel-slave'
>
> - Fix 'dos.py create' CLI command [3];
>
> - Fix time synchronization for nodes with systemd [4];
>
> - Partial support for external snapshots [5] (Disabled by default)
>
> - Minor fixes and documentation update
>
> List of all changes can be found on github [6].
>
> [1] - https://bugs.launchpad.net/fuel/+bug/1485047
> [2] - https://bugs.launchpad.net/fuel/+bug/1521271
> [3] - https://bugs.launchpad.net/fuel/+bug/1521520
> [4] - https://bugs.launchpad.net/fuel/+bug/1523523
> [5] - https://review.openstack.org/#/c/235566/
> [6] - https://github.com/openstack/fuel-devops/compare/2.9.13...master
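
For reference, the get_var_as_bool() mentioned above is typically handled
along these lines (a sketch of the behaviour, not the actual fuel-devops
implementation):

```python
import os

TRUTHY = ('1', 'true', 'yes', 'on')  # assumed spellings; illustrative only


def get_var_as_bool(name, default=False):
    """Read an environment variable and coerce common spellings to bool."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in TRUTHY
```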
>

-- 
Regards,
Dennis Dmitriev
QA Engineer,
Mirantis Inc. http://www.mirantis.com
e-mail/jabber: dis.x...@gmail.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2015-12-14 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] notification subteam meeting

2015-12-14 Thread Balázs Gibizer
Hi, 

The next meeting of the nova notification subteam will happen 2015-12-15 
Tuesday 20:00 UTC [1] on #openstack-meeting-alt on freenode 

Agenda:
- Status of the outstanding specs and code reviews
- Subteam meeting during the vacation period
- AOB

See you there.

Cheers,
Gibi

 [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20151215T20 
 [2] https://wiki.openstack.org/wiki/Meetings/NovaNotification


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Separate master node provisioning and deployment

2015-12-14 Thread Vladimir Kozhukalov
> Meantime we can provide fuel-menu which will become a configuration
> gate for different subprojects. Perhaps we could consider to use
> pluggable approach, so each component will export plugin for fuel-menu
> with own settings.

fuel-menu could be a configuration gate for fuel deployment script

> The wrong thing is that with such approach it would be impossible to
> setup Fuel with just something like

>$ yum install fuel

I see nothing wrong here. 'yum install fuel' would be an appropriate approach
if fuel were a single service, not a bunch of services, some of which are not
even limited to being installed on the master node.

when you run

# yum install fuel
# fuel-menu

it is the same as running

# yum install fuel
# fuel_deploy_script (which runs fuel-menu and then runs puppet which
installs everything else)

I like the idea of the fuel package (let's rename it to fuel-deploy)
providing just a deployment script. It does not require a lot of changes,
and it corresponds to what we really do. Besides, it is more flexible
because deployment can be modular (several stages).

One potential disadvantage is that it is harder to track package
dependencies, but I think a deployment script should be the root of the
package dependency tree.



Vladimir Kozhukalov

On Mon, Dec 14, 2015 at 12:53 PM, Igor Kalnitsky 
wrote:

> Vladimir,
>
> Thanks for raising this question. I totally support idea of separating
> provisioning and deployment steps. I believe it'll simplify a lot of
> things.
>
> However I have some comments regarding this topic, see them inline. :)
>
> > For a package it is absolutely normal to throw a user dialog.
>
> It kills the idea of fuel-menu, since each package will need to implement
> a configuration menu of its own. Moreover, having such configuration
> menus both in fuel-menu and in each package is too expensive; it will
> require more effort than I'd like.
>
> > Fuel package could install default astute.yaml (I'd like to rename it
> > into /etc/fuel.yaml or /etc/fuel/config.yaml) and use values from the
> > file by default not running fuelmenu
>
> I don't like idea of having one common configuration file for Fuel
> components. I think it'd be better when each component (subproject)
> has its own configuration file, and knows nothing about external ones.
>
> Meantime we can provide fuel-menu, which will become a configuration
> gate for different subprojects. Perhaps we could consider using a
> pluggable approach, so each component would export a plugin for
> fuel-menu with its own settings.
>
> > What is wrong with 'deployment script' approach?
>
> The wrong thing is that with such approach it would be impossible to
> setup Fuel with just something like
>
> $ yum install fuel
>
> In my opinion we should go into the following approach:
>
> * yum install fuel
> * fuel-menu
>
> The first command should install a basic Fuel setup, and everything
> should work when it's done.
>
> While the second one prompts a configuration menu where one might
> change default settings (reconfigure default installation).
>
> Thanks,
> Igor
>
> On Mon, Dec 14, 2015 at 9:30 AM, Vladimir Kozhukalov
>  wrote:
> > Oleg,
> >
> > Thanks a lot for your opinion. Here are some more thoughts on this topic.
> >
> > 1) For a package it is absolutely normal to throw a user dialog. But
> > probably there is a kind of standard for the dialog that does not allow
> > using fuelmenu. AFAIK, for DEB packages it is debconf and there is a
> > tutorial [0] on how to get user input during post install. I don't know
> > if there is such a standard for RPM packages. In some MLs it is written
> > that any command line program could be run in the %post section,
> > including those like fuel-menu.
> >
> > 2) The Fuel package could install a default astute.yaml (I'd like to
> > rename it to /etc/fuel.yaml or /etc/fuel/config.yaml) and use values from
> > the file by default without running fuelmenu. A user is then supposed to
> > run fuelmenu if he/she needs to re-configure the Fuel installation.
> > However, that is going to be quite intrusive. What if a user installs
> > Fuel and uses it for a while with the default configuration? What if some
> > clusters are already in use and then the user decides to re-configure the
> > master node? Will it be ok?
> >
> > 3) What is wrong with the 'deployment script' approach? Why can't fuel
> > just install a kind of deployment script? Fuel is not a service; it
> > consists of many components. Moreover, some of these components could be
> > optional (not currently, but who knows?), and some of these components
> > could be run on an external node (after all, Fuel components use REST,
> > AMQP and XMLRPC to interact with each other).
> > Imagine you want to install OpenStack. It also consists of many
> > components. Some components like the database or AMQP service could be
> > deployed using an HA architecture. What if one needs Fuel to be run with
> > an external HA database and AMQP? From this perspective I'd 

Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Aleksandr Didenko
Hi,

> Downgrading for no reason could get us into big trouble and a bad user
> experience

+1 to this. Let's keep PostgreSQL 9.3.

Regards,
Alex

On Mon, Dec 14, 2015 at 2:04 PM, Artem Silenkov 
wrote:

> Hello!
>
> Vote for update.
>
> 1. We have already shipped 9.3 in fuel-7.0. Downgrading such a complicated
> package without any reason is not a good thing at all. User experience could
> suffer a lot.
> 2. The next reason is tests. We have tested only 9.3; 9.2 was not tested
> at all. I'm sure we could introduce serious regressions by downgrading.
> 3. Postgres-9.3 is not custom. It was taken from KOJI packages and
> backported without any modification. It means that this package is
> officially tested and supported by Fedora, which is good.
> 4. One more shipped package is not a huge burden for us. It was officially
> backported from official sources, tested, and suits our needs perfectly. Why
> do we need to play such dangerous games, downgrading for no reason?
>
> Let me note that all packages are maintained by the mos-packaging team now,
> and we are perfectly OK with postgres-9.3.
>
> Downgrading for no reason could get us into big trouble and a bad user
> experience.
>
> Regards,
> Artem Silenkov
> ---
> MOs-Packaging
>
> On Mon, Dec 14, 2015 at 3:41 PM, Bartłomiej Piotrowski <
> bpiotrow...@mirantis.com> wrote:
>
>> On 2015-12-14 13:12, Igor Kalnitsky wrote:
>> > My opinion here is that I don't like that we're going to build and
>> > maintain one more custom package (just take a look at this patch [4]
>> > if you don't believe me), but I'd like to hear more opinion here.
>> >
>> > Thanks,
>> > Igor
>> >
>> > [1] https://bugs.launchpad.net/fuel/+bug/1523544
>> > [2] https://review.openstack.org/#/c/249656/
>> > [3] http://goo.gl/forms/Hk1xolKVP0
>> > [4] https://review.fuel-infra.org/#/c/14623/
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> I also think we should stay with what CentOS provides. Increasing
>> maintenance burden for something that can be implemented without bells
>> and whistles sounds like a no-go.
>>
>> Bartłomiej
>>
>
>


[openstack-dev] [puppet] weekly meeting #63

2015-12-14 Thread Emilien Macchi
Hello!

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151215

See you there!
-- 
Emilien Macchi





Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Artem Silenkov
Hello!

Vote for update.

1. We have already shipped 9.3 in fuel-7.0. Downgrading such a complicated
package without any reason is not a good thing at all. User experience could
suffer a lot.
2. The next reason is tests. We have tested only 9.3; 9.2 was not tested at
all. I'm sure we could introduce serious regressions by downgrading.
3. Postgres-9.3 is not custom. It was taken from KOJI packages and
backported without any modification. It means that this package is
officially tested and supported by Fedora, which is good.
4. One more shipped package is not a huge burden for us. It was officially
backported from official sources, tested, and suits our needs perfectly. Why
do we need to play such dangerous games, downgrading for no reason?

Let me note that all packages are maintained by the mos-packaging team now,
and we are perfectly OK with postgres-9.3.

Downgrading for no reason could get us into big trouble and a bad user
experience.

Regards,
Artem Silenkov
---
MOs-Packaging

On Mon, Dec 14, 2015 at 3:41 PM, Bartłomiej Piotrowski <
bpiotrow...@mirantis.com> wrote:

> On 2015-12-14 13:12, Igor Kalnitsky wrote:
> > My opinion here is that I don't like that we're going to build and
> > maintain one more custom package (just take a look at this patch [4]
> > if you don't believe me), but I'd like to hear more opinion here.
> >
> > Thanks,
> > Igor
> >
> > [1] https://bugs.launchpad.net/fuel/+bug/1523544
> > [2] https://review.openstack.org/#/c/249656/
> > [3] http://goo.gl/forms/Hk1xolKVP0
> > [4] https://review.fuel-infra.org/#/c/14623/
> >
> >
>
> I also think we should stay with what CentOS provides. Increasing
> maintenance burden for something that can be implemented without bells
> and whistles sounds like a no-go.
>
> Bartłomiej
>


[openstack-dev] [docs][stable][ironic] Stable branch docs

2015-12-14 Thread Jim Rollenhagen
Hi all,

In the big tent, project teams are expected to maintain their own
install guides within their projects' source tree. There's a
conversation going on over in the docs list[1] about changing this, but
in the meantime...

Ironic (and presumably other projects) publish versioned documentation,
which includes the install guide. For example, our kilo install guide is
here[2]. However, there's no way to update those, as stable branch
policy[3] only allows for important bug fixes to be backported. For
example, this patch[4] was blocked for this reason (among others).

So, I'd like to propose that in the new world, where projects maintain
their own deployer/operator docs, that we allow documentation backports
(or even changes that are not part of a backport, for changes that only
make sense on the stable branch and not master). They're extremely low
risk, and can be very useful for operators. The alternative is making
sure people are always reading the most up-to-date docs, and in places
that have changed, having "in kilo [...], in liberty [...]", etc, which
is a bit of a maintenance burden.

What do folks think? I'm happy to write up a patch for the project team
guide if there's support for this.

// jim

[1] 
http://lists.openstack.org/pipermail/openstack-docs/2015-December/008051.html
[2] http://docs.openstack.org/developer/ironic/kilo/deploy/install-guide.html
[3] http://docs.openstack.org/project-team-guide/stable-branches.html
[4] https://review.openstack.org/#/c/219603/



Re: [openstack-dev] [docs][stable][ironic] Stable branch docs

2015-12-14 Thread Dmitry Tantsur

On 12/14/2015 03:42 PM, Jim Rollenhagen wrote:

Hi all,

In the big tent, project teams are expected to maintain their own
install guides within their projects' source tree. There's a
conversation going on over in the docs list[1] about changing this, but
in the meantime...

Ironic (and presumably other projects) publish versioned documentation,
which includes the install guide. For example, our kilo install guide is
here[2]. However, there's no way to update those, as stable branch
policy[3] only allows for important bug fixes to be backported. For
example, this patch[4] was blocked for this reason (among others).

So, I'd like to propose that in the new world, where projects maintain
their own deployer/operator docs, that we allow documentation backports
(or even changes that are not part of a backport, for changes that only
make sense on the stable branch and not master). They're extremely low
risk, and can be very useful for operators. The alternative is making
sure people are always reading the most up-to-date docs, and in places
that have changed, having "in kilo [...], in liberty [...]", etc, which
is a bit of a maintenance burden.


+1. I would prefer that we land important documentation patches on stable
branches.




What do folks think? I'm happy to write up a patch for the project team
guide if there's support for this.

// jim

[1] 
http://lists.openstack.org/pipermail/openstack-docs/2015-December/008051.html
[2] http://docs.openstack.org/developer/ironic/kilo/deploy/install-guide.html
[3] http://docs.openstack.org/project-team-guide/stable-branches.html
[4] https://review.openstack.org/#/c/219603/







Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Ryu Ishimoto
On Tue, Dec 15, 2015 at 1:00 AM, Sandro Mathys  wrote:
> On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto  wrote:
>
> So if I understand you correctly, you suggest:
> 1) the (midonet/internal) low level API stays where it is and will
> still be called python-midonetclient.
> 2) the (neutron/external) high level API is moved into it's own
> project and will be called something like python-os-midonetclient.
>
> Sounds like a good compromise which addresses the most important
> points, thanks Ryu! I wasn't aware that these parts of the
> python-midonetclient are so clearly distinguishable/separable but if
> so, this makes perfect sense. Not perfectly happy with the naming, but
> I figure it's the way to go.

Thanks for the endorsement.  Yes, it is trivial to separate them (less
than a day of work) because they are pretty much already separated.

As for the naming, I think it's better to take a non-disruptive
approach so that it's transparent to those currently developing the
low level midonet client.  To your question, however, I have another
suggestion: for the high level client code, it may also make sense to
just include it as part of the plugin.  It's such a small amount of
code that it might not make sense to separate it, and it is likely
to be used only by the plugin in the future.  That basically means
the plugin need not depend on any python client library at all.
I think this will simplify things even further.  It should also be ok
to be tied to the plugin release cycle, assuming that's the only
place the client is needed.

Cheers,
Ryu



>
> -- Sandro



Re: [openstack-dev] OpenStack-Announce List

2015-12-14 Thread Dean Troyer
On Mon, Dec 14, 2015 at 6:28 AM, Tom Fifield  wrote:

> On 14/12/15 19:33, Thierry Carrez wrote:
>
>> Tom Fifield wrote:
>
> * Do SDK releases fit on -announce?
>>>
>>
>> I guess they could -- how many of those are we expecting ?
>>
>>
> So far it looks close to zero emails :) PythonSDK is the only one that's
> in the OpenStack namespace I can see at a quick search.


I think they do; the Python SDK is intended for application and user-facing
use, which fits the -announce audience.

dt

-- 

Dean Troyer
dtro...@gmail.com


[openstack-dev] [puppet] [Backport Request] GPFS Cinder module to Kilo branch

2015-12-14 Thread Christopher Brown
Hello,

Please could you backport the following commit:

2d983e20b0015ed685a874bb6117261dc1af5661

to the puppet-cinder Kilo branch?

Thank you

-- 
Christopher Brown


[openstack-dev] [Neutron] Gate failure with grenade

2015-12-14 Thread Armando M.
Hi folks,

Something snuck in past the gate last night [1]. Please stop rechecking and
pushing in the merge queue until the matter is resolved.

I will follow up with details, if someone knows more, please find me on IRC.

Thanks,
Armando

[1]
http://logs.openstack.org/00/254900/4/gate/gate-grenade-dsvm-neutron/a9216c9/logs/grenade.sh.txt.gz#_2015-12-14_12_24_12_561


Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-14 Thread Andrea Rosa


On 10/12/15 14:42, Sean Dague wrote:
> On 12/02/2015 12:37 PM, Rosa, Andrea (HP Cloud Services) wrote:
>> Hi
>>
>> thanks Sean for bringing this point, I have been working on the change and 
>> on the (abandoned) spec.
>> I'll try here to summarize all the discussions we had and what we decided.
>>
>>> From: Sean Dague [mailto:s...@dague.net]
>>> Sent: 02 December 2015 13:31
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: [openstack-dev] [nova] what are the key errors with volume detach
>>>
>>> This patch to add a bunch of logic to nova-manage for forcing volume detach
>>> raised a bunch of questions
>>> https://review.openstack.org/#/c/184537/24/nova/cmd/manage.py,cm
>>
>> On this specific review there are some valid concerns that I am happy to 
>> address, but first we need to understand if we want this change.
>> FWIW I think it is still a valid change, please see below.
>>
>>> In thinking about this for the last day, I think the real concern is that 
>>> we have
>>> so many safety checks on volume delete, that if we failed with a partially
>>> setup volume, we have too many safety latches to tear it down again.
>>>
>>> Do we have some detailed bugs about how that happens? Is it possible to
>>> just fix DELETE to work correctly even when we're in these odd states?
>>
>> In a simplified view of a volume detach we can say that the nova code does:
>> 1 detach the volume from the instance
>> 2 inform cinder about the detach and call terminate_connection on the
>> cinder API
>> 3 delete the bdm record in the nova DB
>>
>> If 2 fails the volumes get stuck in a detaching status and any further 
>> attempt to delete or detach the volume will fail:
>> "Delete for volume  failed: Volume  is still attached, 
>> detach volume first. (HTTP 400)"
> 
> So why isn't this handled in a "finally" pattern?
> 
> Ensure that you always do 2 (a) & (b) and 3, collect errors that happen
> during 2 (a) & (b), report them back to the user.
> What state does that leave things in? Both from the server and the volume.
> 

The volume detach in Cinder (2.a, 2.b) is an async call: if Nova can talk
to the Cinder API it sends the request, and if the detach on the Cinder
side fails, Nova doesn't know about it.
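
The "finally" pattern Sean describes could look roughly like the sketch
below: run every teardown step, collect failures instead of aborting on the
first one, and report them back to the user. All names here are
hypothetical; this is not the actual nova code.

```python
def teardown_volume(instance, volume, steps):
    """Run each (name, step) teardown callable, collecting failures
    instead of stopping at the first one (hypothetical sketch)."""
    errors = []
    for name, step in steps:
        try:
            step(instance, volume)
        except Exception as exc:
            # Keep going: later cleanup (e.g. BDM deletion) still runs.
            errors.append((name, str(exc)))
    return errors  # report these back to the user
```

With something like this, a failed terminate_connection would no longer
leave the BDM record behind as the only blocker to deleting the instance.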
--
Andrea Rosa





Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-14 Thread Armando M.
On 13 December 2015 at 23:01, Kevin Benton  wrote:

> Yes, as I'm starting to understand the use case, I think it would actually
> make more sense to add an AZ-network mapping table. Then whatever
> implementation can populate them based on the criteria it is using
> (reference would just do it on agent updates).
>

I guess this would lead to making AZ first class (i.e. putting it in a
table of its own) and associating it 1-N with agents and N-M with networks.
It might not be worth going down this path just to kill the performance
penalty introduced by this feature, though the model change might be worth
considering to accommodate other features where we could extend the
grouping to other resources, like L2.
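
To illustrate the performance point being discussed: fetching the AZ set
row-by-row issues one lookup per network (the N+1 pattern that slowed
get_networks), while a single pass over the agent-binding rows builds the
whole mapping at once. A plain-Python sketch of the two access patterns,
not actual neutron code:

```python
def az_per_network_n_plus_1(networks, lookup_azs):
    # One lookup (query) per network -- the pattern that hurt get_networks.
    return {net: lookup_azs(net) for net in networks}

def az_per_network_prefetched(bindings):
    # One pass over (network_id, az) agent-binding rows, grouping AZs per
    # network -- analogous to joining the data ahead of time in the model.
    grouped = {}
    for network_id, az in bindings:
        grouped.setdefault(network_id, set()).add(az)
    return grouped
```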


>
> On Sun, Dec 13, 2015 at 9:53 PM, Hong Hui Xiao 
> wrote:
>
>> Hi,
>>
>> Can we just add "availability_zones" as a column in Network, and
>> update it when "NetworkDhcpAgentBinding" updates? The code will be a bit
>> more complex, but it can save time when retrieving the Network resource.
>>
>>
>>
>>
>> From: Hirofumi Ichihara 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: 12/14/2015 13:33
>> Subject: Re: [openstack-dev] [neutron] - availability zone performance
>> regression and discussion about added network field
>> --
>>
>>
>>
>> Hi Kevin,
>>
>> On 2015/12/14 11:10, Kevin Benton wrote:
>>
>>Hi all,
>>
>>   The availability zone code added a new field to the network API
>>   that shows the availability zones of a network. This caused a pretty 
>> big
>>   performance impact to get_networks calls because it resulted in a 
>> database
>>   lookup for every network.[1]
>>
>>   I already put a patch up to join the information ahead of time in
>>   the network model.[2]
>>
>> I agree with your suggestion. I believe that the patch can solve the
>> performance issue.
>>
>>However, before we go forward with that, I think we should consider
>>   the removal of that field from the API.
>>
>>   Having to always join to the DHCP agents table to lookup which
>>   zones a network has DHCP agents on is expensive and is duplicating
>>   information available with other API calls.
>>
>>   Additionally, the field is just called 'availability_zones' but
>>   it's being derived solely from AZ definitions in DHCP agent bindings 
>> for
>>   that network. To me that doesn't represent where the network is 
>> available,
>>   it just says which zones its scheduled DHCP instances live in. If 
>> that's
>>   the purpose, then we should just be using the DHCP agent API for this 
>> info
>>   and not impact the network API.
>>
>> I don't think so. I have three points.
>>
>> 1. Availability zones are implemented only for the agent case now, but
>> that is the reference implementation. For example, we should expect that
>> availability zones will be used by plugins without agents.
>>
>> 2. In the user's view, an availability zone is related to the network
>> resource. On the other hand, users don't need to consider agents, and
>> operators don't want to let users do so in the first place. So I don't
>> agree with using the agent API.
>>
>> 3. We should consider whether users want to know the field. Originally,
>> the field didn't exist in the spec[3], but I added it in accordance with a
>> reviewer's opinion (maybe Akihiro's?). This is a discussion about the use
>> case. After users create resources via the API with availability_zone_hints
>> so that they achieve HA for their service, they want to know which zones
>> their resources are hosted on, because their resources might not be
>> distributed across multiple availability zones for some reason. In that
>> case, they need to know "availability_zones" for the resources via the
>> network API.
>>
>> Thanks,
>> Hirofumi
>>
>> [3]: https://review.openstack.org/#/c/169612/31
>>
>>
>>   Thoughts?
>>
>>   1. https://bugs.launchpad.net/neutron/+bug/1525740
>>   2. https://review.openstack.org/#/c/257086/
>>
>>   --
>>   Kevin Benton
>>
>>
>>

Re: [openstack-dev] neutron metadata-agent HA

2015-12-14 Thread Fox, Kevin M
What about the case where you're not running HA routers? Should you still run
more than one?

Thanks,
Kevin

From: Assaf Muller [amul...@redhat.com]
Sent: Saturday, December 12, 2015 12:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] neutron metadata-agent HA

The neutron metadata agent is stateless. It takes requests from the
metadata proxies running in the router namespaces and moves the
requests on to the nova server. If you're using HA routers, start the
neutron-metadata-agent on every machine the L3 agent runs, and just
make sure that the metadata-agent is restarted in case it crashes and
you're done. Nothing else you need to do.
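
In practice the "restarted in case it crashes" part is handled by the init
system (upstart/systemd) or by Pacemaker; a toy respawn loop just to
illustrate the idea, bounded so it terminates — the command used is a
stand-in, not the real agent:

```python
import subprocess
import sys
import time

def supervise(cmd, max_launches, backoff=0.0):
    """Launch cmd, and relaunch it whenever it exits, up to
    max_launches times. Returns the number of launches."""
    launches = 0
    while launches < max_launches:
        proc = subprocess.Popen(cmd)
        launches += 1
        proc.wait()          # blocks until the process dies
        time.sleep(backoff)  # avoid a tight respawn loop
    return launches

# Stand-in for the agent process: a python one-liner that exits at once.
agent_cmd = [sys.executable, "-c", "pass"]
```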

On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
 wrote:
>
> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
> wrote:
>
> So my question is: is there any progress on this topic ? is there a way
> (something like a cronjob script) to make the metadata-agent redundant
> without involving the clustering software Pacemaker/Corosync ?
>
>
> Reason for such a dirty solution instead of rely onto pacemaker?
>
> I’m not aware of such initiatives - just checked the blueprints in Neutron
> and I found no relevant. I can suggest to file a proposal to the
> correspondent launchpad page, by elaborating your idea.
>
> F.
>


Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-14 Thread Mark Korondi
I am using the liberty branch. Unfortunately this did not help, I get
the same error.

I also don't understand where the neutron service should run. This is
the output on my compute node:

root@os-compute-1:~# ps aux | grep neutron
root 18782  0.0  0.0  11748  2232 pts/0S+   17:56   0:00 grep
--color=auto neutron
root@os-compute-1:~# ip netns list
root@os-compute-1:~#

Is there a step-by-step guide that shows how to set up simple flat
networking with OSA? I guess this whole thing is optimized around VLAN
provider networking, which I don't have on my playground environment.



On Mon, Dec 14, 2015 at 4:46 PM, Kevin Carter
 wrote:
> The port binding issues are usually related to a neutron physical interface
> mapping issue; however, based on your previous config I don't think that was
> the problem. If you're deploying Liberty/Master (Mitaka), there was a fix
> that went in that resolved an issue within neutron and the use of
> L2/multicast groups [0], and if you're on the stable tag the fix has not been
> released yet and will be there for the 12.0.3 tag, coming soon. To resolve
> the issue, simply add the following to your
> `user_variables.yml` file:
>
> == If you don't want to use l2 population add the following ==
> neutron_l2_population: "False"
> neutron_vxlan_group: "239.1.1.1"
>
> == If you want to use l2 population add the following ==
> neutron_l2_population: "True"
>
> As for the neutron services on your compute nodes, they should be running
> within the host namespace. In Liberty/Master the python bits will be within a
> venv using an upstart init script to control the service. If you're not seeing
> the neutron service running, it's likely due to this bug [2], which is resolved
> by dropping the previously mentioned user variable options.
>
> I hope this helps and let me know how it goes.
>
> [0] https://review.openstack.org/#/c/255624
> [1] https://github.com/openstack/openstack-ansible/commits/liberty
> [2] https://bugs.launchpad.net/neutron/+bug/1470584
>
> --
>
> Kevin Carter
> IRC: cloudnull
>
>
> 
> From: Mark Korondi 
> Sent: Sunday, December 13, 2015 9:10 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [ansible] One or more undefined variables: 'dict 
> object' has no attribute 'bridge'
>
> Thanks cloudnull,
>
> This solved the installation issue. I commented out all non-flat
> related networks before, to investigate my main problem, which is
>
>> PortBindingFailed: Binding failed for port 
>> fe67a2d5-6d6a-4440-80d0-acbe2ff5c27f, please check neutron logs for more 
>> information.
>
> I still have this problem; I created the flat external network with no
> errors, but I still get this when trying to launch an instance. What's
> really interesting to me is that no neutron microservices are
> deployed and running on the compute node.
>
> Mark (kmARC)
>


Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-14 Thread Andrea Rosa


On 10/12/15 15:29, Matt Riedemann wrote:

>> In a simplified view of a volume detach we can say that the nova code
>> does:
>> 1 detach the volume from the instance
>> 2 inform cinder about the detach and call terminate_connection on
>> the cinder API
>> 3 delete the bdm record in the nova DB
> 
> We actually:
> 
> 1. terminate the connection in cinder:
> 
> https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2312
> 
> 
> 2. detach the volume
> 
> https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2315
> 
> 
> 3. delete the volume (if marked for delete_on_termination):
> 
> https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2348
> 
> 
> 4. delete the bdm in the nova db:
> 
> https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L908
> 
> 

I am confused here: why are you referring to the _shutdown_instance
code?


> So if terminate_connection fails, we shouldn't get to detach. And if
> detach fails, we shouldn't get to delete.
> 
>>
>> If 2 fails the volumes get stuck in a detaching status and any further
>> attempt to delete or detach the volume will fail:
>> "Delete for volume  failed: Volume  is still
>> attached, detach volume first. (HTTP 400)"
>>
>> And if you try to detach:
>> "ERROR (BadRequest): Invalid input received: Invalid volume: Unable to
>> detach volume. Volume status must be 'in-use' and attach_status must
>> be 'attached' to detach. Currently: status: 'detaching',
>> attach_status: 'attached.' (HTTP 400)"
>>
>> at the moment the only way to clean up the situation is to hack the
>> nova DB for deleting the bdm record and do some hack on the cinder
>> side as well.
>> We wanted a way to clean up the situation avoiding the manual hack to
>> the nova DB.
> 
> Can't cinder rollback state somehow if it's bogus or failed an
> operation? For example, if detach failed, shouldn't we not be in
> 'detaching' state? This is like auto-reverting task_state on server
> instances when an operation fails so that we can reset or delete those
> servers if needed.

I think that is an option, but it is probably part of the redesign of the
Cinder API (see solution proposed #3). It would be nice to get the Cinder
guys commenting here.

>> Solution proposed #3
>> Ok, so the solution is to fix the Cinder API and makes the interaction
>> between Nova volume manager and that API robust.
>> This time I was right (YAY) but as you can imagine this fix is not
>> going to be an easy one and after talking with Cinder guys they
>> clearly told me that that is going to be a massive change in the
>> Cinder API and it is unlikely to land in the N(utella) or O(melette) 
>> release.

> As Sean pointed out in another reply, I feel like what we're really
> missing here is some rollback code in the case that delete fails so we
> don't get in this stuck state and have to rely on deleting the BDMs
> manually in the database just to delete the instance.
> 
> We should rollback on delete fail 1 so that delete request 2 can pass
> the 'check attach' checks again.

The communication with Cinder is async: Nova doesn't wait or check
whether the detach on the Cinder side has been executed correctly.
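The rollback Matt suggests above can be illustrated with a minimal, purely hypothetical sketch (invented names, not actual Nova/Cinder code): wrap the state transition so a failed detach restores the previous status instead of leaving the volume stuck in 'detaching':

```python
class FakeVolumeState:
    """Hypothetical stand-in for a Cinder volume's status fields."""
    def __init__(self):
        self.status = "in-use"
        self.attach_status = "attached"

def detach(state, fail=False):
    """Fire-and-forget style: mutate state, raise on backend failure."""
    state.status = "detaching"
    if fail:
        raise RuntimeError("backend detach failed")
    state.status = "available"
    state.attach_status = "detached"

def detach_with_rollback(state, fail=False):
    """Restore the previous status on failure so a later detach/delete
    request is not rejected with "Currently: status: 'detaching'"."""
    previous = state.status
    try:
        detach(state, fail=fail)
    except Exception:
        state.status = previous  # roll back instead of staying stuck
        raise

vol = FakeVolumeState()
try:
    detach(vol, fail=True)
except RuntimeError:
    pass
print(vol.status)  # detaching  (stuck, as described in the thread)

vol = FakeVolumeState()
try:
    detach_with_rollback(vol, fail=True)
except RuntimeError:
    pass
print(vol.status)  # in-use  (a retry is still possible)
```

A real fix would of course live on the Cinder side and also cover attach_status and races; this only shows the rollback shape.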

Thanks
--
Andrea Rosa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral team meeting minutes 12/14/2015

2015-12-14 Thread Nikolay Makhotkin
Hi,


Thank you guys for joining us today!

Meeting minutes -
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-12-14-16.01.html

Meeting full log -
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-12-14-16.01.log.html



Best Regards,
Nikolay


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Jaume Devesa
+1

I think it is a good compromise. Thanks, Ryu!

I understand the CLI will belong to the external part. I much prefer to have
it in a separate project rather than in the plugin, even if the code is
tiny.

If you just want to make MidoNet calls for debugging, or to check the MidoNet
virtual infrastructure, it will be cleaner to install it without
dependencies than to drag in the whole neutron project (networking-midonet
depends on neutron).

Regards,

On 14 December 2015 at 17:32, Ryu Ishimoto  wrote:

> On Tue, Dec 15, 2015 at 1:00 AM, Sandro Mathys 
> wrote:
> > On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto  wrote:
> >
> > So if I understand you correctly, you suggest:
> > 1) the (midonet/internal) low level API stays where it is and will
> > still be called python-midonetclient.
> > 2) the (neutron/external) high level API is moved into its own
> > project and will be called something like python-os-midonetclient.
> >
> > Sounds like a good compromise which addresses the most important
> > points, thanks Ryu! I wasn't aware that these parts of the
> > python-midonetclient are so clearly distinguishable/separable but if
> > so, this makes perfect sense. Not perfectly happy with the naming, but
> > I figure it's the way to go.
>
> Thanks for the endorsement.  Yes, it is trivial to separate them (less
> than a day of work) because they are pretty much already separated.
>
> As for the naming, I think it's better to take a non-disruptive
> approach so that it's transparent to those currently developing the
> low level midonet client.  To your question, however, I have another
> suggestion, which is that for the high level client code, it may also
> make sense to just include that as part of the plugin.  It's such
> small code that it might not make sense to separate, and also likely
> to be used only by the plugin in the future.  Which basically means
> that the plugin need not depend on any python client library at all.
> I think this will simplify even further.  It should also be ok to be
> tied to the plugin release cycles as well assuming that's the only
> place the client is needed.
>
> Cheers,
> Ryu
>
>
>
> >
> > -- Sandro
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Jaume Devesa
Software Engineer at Midokura


Re: [openstack-dev] [Cinder] Is there anyone truly working on this issue https://bugs.launchpad.net/cinder/+bug/1520102?

2015-12-14 Thread mtanino

Thank you for the explanation, Gorka!

Mitsuhiro

On 12/14/2015 05:00 AM, Gorka Eguileor wrote:

On 11/12, mtanino wrote:

Hi Thang, Vincent,

I guess the root cause is that finish_volume_migration() still
handles a volume as a dictionary instead of a volume object, and
the method returns a dict volume.

And then, 'rpcapi.delete_volume()' in migrate_volume_completion()
tries to delete dict volume but it fails due to the following error.



I believe that is not entirely correct. The issue is that
'finish_volume_migration' returns an ORM volume that is then passed to
'rpcapi.delete_volume' in place of a Versioned Object (VO) Volume (the
recently added optional argument). It is therefore serialized and
deserialized as a normal dictionary (instead of as a VO dictionary), and
when the manager at the other end sees that it has received something in
the place of the VO Volume argument, it tries to access the 'id'
attribute.

But since the ORM volume was not a VO it was passed as a normal
dictionary and therefore has no 'id' attribute.
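The failure shape described here can be reproduced in a few lines (hypothetical stand-ins, not the real Cinder classes): a plain dict deserialized in place of the expected object simply has no 'id' attribute, exactly as in the 'volume_id = volume.id' line of the traceback quoted below:

```python
class VolumeVO:
    """Hypothetical stand-in for the versioned object the manager expects."""
    def __init__(self, id):
        self.id = id

def delete_volume(volume):
    # Mirrors the failing manager line: it assumes 'volume' exposes
    # an 'id' attribute.
    return volume.id

print(delete_volume(VolumeVO("abc-123")))  # abc-123

try:
    # An ORM row serialized/deserialized as a plain dict arrives instead:
    delete_volume({"id": "abc-123"})
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute 'id'
```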

For reference, Vincent has proposed a patch [1].

Cheers,
Gorka.

[1]: https://review.openstack.org/250216/


As far as you know, is there someone working on this issue? If not, I am gonna 
fix it.


Not yet. You can go ahead.

- Result of 'cinder migrate --force-host-copy True '

2015-12-11 20:36:33.395 ERROR oslo_messaging.rpc.dispatcher 
[req-2c271a5e-7e6a-4b38-97d1-22ef245c7892 f95ea885e1a34a81975c50be63444a0b 
56d8eb5cc90242178cf05aedab3c1612] Exception during message handling: 'dict' 
object has no attribute 'id'
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 129, 
in _do_dispatch
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return f(*args, 
**kwargs)
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 152, in lvo_inner1
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return 
lvo_inner2(inst, context, volume_id, **kwargs)
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return f(*args, 
**kwargs)
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 151, in lvo_inner2
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher return 
f(*_args, **_kwargs)
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/cinder/cinder/volume/manager.py", line 603, in delete_volume
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher volume_id = 
volume.id
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
'dict' object has no attribute 'id'
2015-12-11 20:36:33.395 TRACE oslo_messaging.rpc.dispatcher

Thanks,
Mitsuhiro Tanino

On 12/10/2015 11:24 PM, Thang Pham wrote:

I have to try it again myself.  What errors are you seeing?  Is it the same?  
Feel free to post a patch if you already have one that would solve it.

Regards,
Thang

On Thu, Dec 10, 2015 at 10:51 PM, Sheng Bo Hou > wrote:

Hi Mitsuhiro, Thang

The patch https://review.openstack.org/#/c/228916 is merged, but sadly it 
does not cover the issue https://bugs.launchpad.net/cinder/+bug/1520102. This 
bug is still valid.
As far as you know, is there someone working on this issue? If not, I am 
gonna fix it.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN   E-mail: sb...@cn.ibm.com 

Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West 
Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193





Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Sergii Golovatiuk
Hi,

If we can stick with upstream PostgreSQL, that would be really nice.
Otherwise security updates and regular package updates will be a burden for
package maintainers. Ideally we should have as few forked packages as
possible.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Dec 14, 2015 at 5:47 AM, Aleksandr Didenko 
wrote:

> Hi,
>
> > Downgrading for no reason could bring us to big trouble and bad user
> experience
>
> +1 to this. Let's keep PostgreSQL 9.3.
>
> Regards,
> Alex
>
> On Mon, Dec 14, 2015 at 2:04 PM, Artem Silenkov 
> wrote:
>
>> Hello!
>>
>> Vote for update.
>>
>> 1. We have already shipped 9.3 in fuel-7.0. Downgrading such complicated
>> package without any reason is not good thing at all. User experience could
>> suffer a lot.
>> 2. The next reason is tests. We have tested only 9.3, 9.2 was not tested
>> at all. I'm sure we could bring serious regressions by downgrading,
>> 3. Postgres-9.3 is not custom. It was taken from KOJI packages and
>> backported without any modification. It means that this package is
>> officially tested and supported by Fedora, which is good.
>> 4. One shipped package more is not a huge burden for us. It was
>> officially backported from official sources, tested and suits our need
>> perfectly. Why do we need to play such dangerous games downgrading for no
>> reasons?
>>
>> Let me notice that all packages are maintained by mos-packaging team now
>> And we are perfectly ok with postgres-9.3.
>>
>> Downgrading for no reason could bring us to big trouble and bad user
>> experience.
>>
>> Regards,
>> Artem Silenkov
>> ---
>> MOs-Packaging
>>
>> On Mon, Dec 14, 2015 at 3:41 PM, Bartłomiej Piotrowski <
>> bpiotrow...@mirantis.com> wrote:
>>
>>> On 2015-12-14 13:12, Igor Kalnitsky wrote:
>>> > My opinion here is that I don't like that we're going to build and
>>> > maintain one more custom package (just take a look at this patch [4]
>>> > if you don't believe me), but I'd like to hear more opinion here.
>>> >
>>> > Thanks,
>>> > Igor
>>> >
>>> > [1] https://bugs.launchpad.net/fuel/+bug/1523544
>>> > [2] https://review.openstack.org/#/c/249656/
>>> > [3] http://goo.gl/forms/Hk1xolKVP0
>>> > [4] https://review.fuel-infra.org/#/c/14623/
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> I also think we should stay with what CentOS provides. Increasing
>>> maintenance burden for something that can be implemented without bells
>>> and whistles sounds like a no-go.
>>>
>>> Bartłomiej
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Fuel] PostgreSQL 9.3 and JSON operations

2015-12-14 Thread Oleg Gelbukh
It's important to note that given the change in the upgrade method, there
will be no actual downgrade of the package, since Fuel 8.0 Admin Node will
be installed on a clean system. So, from the upgrade standpoint I see no
obstacles to having 9.2 in Fuel 8.0. I also welcome any chance to reduce the
number of packages maintained in-house.

Depending on native packages is also important in light of the
initiative to separate the deployment of Fuel from the installation of the
operating system [1].

[1]
https://blueprints.launchpad.net/fuel/+spec/separate-fuel-node-provisioning

--
Best regards,
Oleg Gelbukh

On Mon, Dec 14, 2015 at 10:50 PM, Sergii Golovatiuk <
sgolovat...@mirantis.com> wrote:

> Hi,
>
> If we can stick with upstream PostgreSQL, that would be really nice.
> Otherwise security updates and regular package updates will be a burden for
> package maintainers. Ideally we should have as few forked packages as
> possible.
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Mon, Dec 14, 2015 at 5:47 AM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> > Downgrading for no reason could bring us to big trouble and bad user
>> experience
>>
>> +1 to this. Let's keep PostgreSQL 9.3.
>>
>> Regards,
>> Alex
>>
>> On Mon, Dec 14, 2015 at 2:04 PM, Artem Silenkov 
>> wrote:
>>
>>> Hello!
>>>
>>> Vote for update.
>>>
>>> 1. We have already shipped 9.3 in fuel-7.0. Downgrading such complicated
>>> package without any reason is not good thing at all. User experience could
>>> suffer a lot.
>>> 2. The next reason is tests. We have tested only 9.3, 9.2 was not tested
>>> at all. I'm sure we could bring serious regressions by downgrading,
>>> 3. Postgres-9.3 is not custom. It was taken from KOJI packages and
>>> backported without any modification. It means that this package is
>>> officially tested and supported by Fedora, which is good.
>>> 4. One shipped package more is not a huge burden for us. It was
>>> officially backported from official sources, tested and suits our need
>>> perfectly. Why do we need to play such dangerous games downgrading for no
>>> reasons?
>>>
>>> Let me notice that all packages are maintained by mos-packaging team now
>>> And we are perfectly ok with postgres-9.3.
>>>
>>> Downgrading for no reason could bring us to big trouble and bad user
>>> experience.
>>>
>>> Regards,
>>> Artem Silenkov
>>> ---
>>> MOs-Packaging
>>>
>>> On Mon, Dec 14, 2015 at 3:41 PM, Bartłomiej Piotrowski <
>>> bpiotrow...@mirantis.com> wrote:
>>>
 On 2015-12-14 13:12, Igor Kalnitsky wrote:
 > My opinion here is that I don't like that we're going to build and
 > maintain one more custom package (just take a look at this patch [4]
 > if you don't believe me), but I'd like to hear more opinion here.
 >
 > Thanks,
 > Igor
 >
 > [1] https://bugs.launchpad.net/fuel/+bug/1523544
 > [2] https://review.openstack.org/#/c/249656/
 > [3] http://goo.gl/forms/Hk1xolKVP0
 > [4] https://review.fuel-infra.org/#/c/14623/
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >

 I also think we should stay with what CentOS provides. Increasing
 maintenance burden for something that can be implemented without bells
 and whistles sounds like a no-go.

 Bartłomiej


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Re: [openstack-dev] Gerrit Upgrade 12/16

2015-12-14 Thread Spencer Krum
This is a gentle reminder that the downtime will be this Wednesday
starting at 17:00 UTC.

Thank you for your patience,
Spencer

-- 
  Spencer Krum
  n...@spencerkrum.com

On Tue, Dec 1, 2015, at 10:19 PM, Stefano Maffulli wrote:
> On 12/01/2015 06:38 PM, Spencer Krum wrote:
> > There is a thread beginning here:
> > http://lists.openstack.org/pipermail/openstack-dev/2015-October/076962.html
> > which covers what to expect from the new software.
> 
> Nice! This is awesome: the new review panel lets you edit files on the
> web interface. No more `git review -d` and subsequent commit to fix a
> typo. I think this is huge for documentation and all sort of nitpicking
> :)
> 
> And while I'm at it, I respectfully bow to the infra team: keeping pace
> with frequent software upgrades at this size is no small feat and a rare
> accomplishment. Good job.
> 
> /stef
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ceilometer] status of distil?

2015-12-14 Thread Fei Long Wang
Hi Steve,

Thanks for the heads up. I just worked this out with AJaejer. The
.gitreview file has been updated and we will disable the CI job
temporarily. Please let me know if there are any questions. Cheers.


On 14/12/15 22:01, Steve Martinelli wrote:
>
> While I was trying to submit patches for projects that had old
> keystoneclient references (distil was one of the projects), I noticed
> that there hasn't been much action on this project [0]. It's been a
> year since a commit [1], no releases [2], and I can't submit a patch
> since the .gitreview file doesn't point to review.openstack.org [3].
>
> Is distil alive?
>
> [0] https://github.com/openstack/distil
> [1] https://github.com/openstack/distil/commits/master
> [2] https://github.com/openstack/distil/releases
> [3] https://github.com/openstack/distil/blob/master/.gitreview
>
> thanks,
> stevemar
>

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 



Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Sandro Mathys
On Thu, Dec 10, 2015 at 4:46 PM, Galo Navarro  wrote:
>
>
> On 10 December 2015 at 04:35, Sandro Mathys  wrote:
>>
>> On Thu, Dec 10, 2015 at 12:48 AM, Galo Navarro  wrote:
>> > Hi,
>> >
>> >> I think the goal of this split is well explained by Sandro in the first
>> >> mails of the chain:
>> >>
>> >> 1. Downstream packaging
>> >> 2. Tagging the delivery properly as a library
>> >> 3. Adding as a project on pypi
>> >
>> > Not really, because (1) and (2) are *a consequence* of the repo split.
>> > Not a
>> > cause. Please correct me if I'm reading wrong but he's saying:
>> >
>> > - I want tarballs
>> > - To produce tarballs, I want a separate repo, and separate repos have
>> > (1),
>> > (2) as requirements.
>>
>> No, they're all goals, no consequences. Sorry, I didn't notice it
>> could be interpreted differently
>
>
> I beg to disagree. The location of code is not a goal in itself. Producing
> artifacts such as tarballs is.

Really not sure what you're trying to say. You're right, the location
of the code is not a goal in itself and I don't think anyone said
otherwise.

(1), (2) and (3), as well as Takashi's additional point if it applies
to us, all make separate repositories necessary. They're the goals,
and splitting repositories is a "consequence" (I'd rather call it a
requirement or necessity, but I'm not here to discuss the
terminology).

>> > This looks more accurate: you're actually not asking for a tarball.
>> > You're
>> > asking for being compatible with a system that produces tarballs off a
>> > repo.
>> > This is very different :)
>> >
>> > So questions: we have a standalone mirror of the repo, that could be
>> > used
>> > for this purpose. Say we move the mirror to OSt infra, would things
>> > work?
>>
>> Good point. Actually, no. The mirror can't go into OSt infra as they
>> don't allow direct pushes to repos - they need to go through reviews.
>> Of course, we could still have a mirror on GitHub in midonet/ but that
>> might cause us a lot of trouble.
>
>
> I don't follow. Where a repo is hosted is orthogonal to how commits are
> added. If commits to the mirror must go via gerrit, this is perfectly
> doable.

Are you serious? You called it cheap in the paragraph just below, and
now you want all python-midonetclient code to be reviewed twice?

>> > But create a lot of other problems in development. With a very important
>> > difference: the pain created by the mirror solution is solved cheaply
>> > with
>> > software (e.g.: as you know, with a script). OTOH, the pain created by
>> > splitting the repo is paid in very costly human resources.
>>
>> Adding the PMC as a submodule should reduce these costs significantly,
>> no? Of course, when working on the PMC, sometimes (or often, even)
>>
>> there will be the need for two instead of one review requests but the
>> content and discussion of those should be nearly identical, so the
>> actual overhead is fairly small. Figure I'm missing a few things here
>> - what other pains would this add?
>
>
> No, it doesn't make things easier. We already tried.
>
> Guillermo explained a few reasons already in his email.
>
>>
>> > I do get this point and it's a major concern, IMO we should split to a
>> > different conversation as it's not related to where PYC lives, but to a
>> > more
>> > general question: do we really need a repo per package?
>>
>> No, we don't. Not per package as you outlined them earlier: agent,
>> cluster, etc.
>>
>> Like Jaume, I know the RPM side much better than the DEB side. So for
>> RPM, one source package (srpm) can create several binary packages
>> (rpm). Therefore, one repo/tarball (there's an expected 1:1 relation
>> between these two) can be used for several packages.
>>
>> But there's different policies for services and clients, e.g. the
>> services are only packaged for servers but the clients both for
>> servers and workstations. Therefore, they are kept in separate srpms.
>>
>> Additionally, it's much easier to maintain java and python code in
>> separate srpms/rpms - mostly due to (build) dependencies.
>
>
> What's your rationale for saying this? Could you point at specific
> maintenance points that are made easier by having different languages in
> separate repos?

Again, it's about packaging, not repos. Packaging gets complicated
easily, and there are a lot of complex things to take care of with every
single language; having both in the same srpm doesn't make this
easier at all. Also, if the Java and Python code are kept in separate
srpms, only the specific srpm has to be rebuilt if e.g. a Java
vulnerability makes it necessary.

Honestly, I don't think this discussion is leading anywhere.
Therefore, I'd like to request a decision by the MidoNet PTL as per
[1].

-- Sandro

[1] http://governance.openstack.org/reference/charter.html#project-team-leads

__
OpenStack Development Mailing List (not for 

[openstack-dev] [Fuel] Nominate Bulat Gaifulin for fuel-web & fuel-mirror cores

2015-12-14 Thread Igor Kalnitsky
Hi Fuelers,

I'd like to nominate Bulat Gaifulin [1] for

* fuel-web-core [2]
* fuel-mirror-core [3]

Bulat does really good reviews with detailed feedback and he's a
regular participant in IRC. He's a co-author of the packetary and
fuel-mirror projects, and he has made valuable contributions to fuel-web
(e.g. the task-based deployment engine).

Fuel Cores, please reply back with +1/-1.

- Igor

[1] http://stackalytics.com/?module=fuel-web_id=bgaifullin
[2] http://stackalytics.com/report/contribution/fuel-web/90
[3] http://stackalytics.com/report/contribution/fuel-mirror/90



Re: [openstack-dev] OpenStack-Announce List

2015-12-14 Thread Thierry Carrez
Tom Fifield wrote:
> ... and back to this thread after a few weeks :)
> 
> The conclusions I saw were:
> * Audience for openstack-announce should be "users/non-dev"
> * Service project releases announcements are good
> * Client library release announcements good
> * Security announcements are good
> * Internal library (particularly oslo) release announcements don't fit
> 
> Open Questions:
> * Where do Internal library release announcements go? [-dev or new
> -release list or batched inside the weekly newsletter]

I'd say -dev + batched inside the weekly -dev digest from thingee (and
crosspost that one to -announce). Even if the audience is "users" I
think getting a weekly digest from the -dev ML can't hurt?

> * Do SDK releases fit on -announce?

I guess they could -- how many of those are we expecting?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova][serial-console-proxy]

2015-12-14 Thread Markus Zoeller
Prathyusha Guduri  wrote on 12/11/2015 
06:37:02 AM:

> From: Prathyusha Guduri 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 12/11/2015 06:39 AM
> Subject: [openstack-dev] [nova][serial-console-proxy]
> 
> Hi All,

> I have set up OpenStack on an Arm64 machine and all the OpenStack-
> related services are running fine. I am also able to launch an instance 
> successfully. Now I need to get a console for my instance. The 
> noVNC console is not supported on the machine I am using, so I have to 
> use a serial-proxy console or a spice-proxy console. 

> After rejoining the stack, I have stopped the noVNC service and 
> started the serial proxy service in  /usr/local/bin  as
> 
> ubuntu@ubuntu:~/devstack$ /usr/local/bin/nova-serialproxy --config-
> file /etc/nova/nova.conf
> 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-] 
> WebSocket server settings:
> 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]   
-Listen on 
> 0.0.0.0:6083
> 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
> Flash security policy server
> 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
> No SSL/TLS support (no cert file)
> 2015-12-10 19:07:13.790 21979 INFO nova.console.websocketproxy [-]   -
> proxying from 0.0.0.0:6083 to None:None

> But 
> ubuntu@ubuntu:~/devstack$ nova get-serial-console vm20
> ERROR (ClientException): The server has either erred or is incapable 
> of performing the requested operation. (HTTP 500) (Request-ID: req-
> cfe7d69d-3653-4d62-ad0b-50c68f1ebd5e)

> 
> The problem seems to be that nova-compute is not able to 
> communicate with nova-serialproxy. The IP and port for the serial proxy 
> that I have given in nova.conf are correct.

> I really don't understand where I am going wrong. Some help would be
> greatly appreciated.
> 

> My nova.conf - 
> 
> 
> [DEFAULT]
> vif_plugging_timeout = 300
> vif_plugging_is_fatal = True
> linuxnet_interface_driver =
> security_group_api = neutron
> network_api_class = nova.network.neutronv2.api.API
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> compute_driver = libvirt.LibvirtDriver
> default_ephemeral_format = ext4
> metadata_workers = 24
> ec2_workers = 24
> osapi_compute_workers = 24
> rpc_backend = rabbit
> keystone_ec2_url = http://10.167.103.101:5000/v2.0/ec2tokens
> ec2_dmz_host = 10.167.103.101
> vncserver_proxyclient_address = 127.0.0.1
> vncserver_listen = 127.0.0.1
> vnc_enabled = false
> xvpvncproxy_base_url = http://10.167.103.101:6081/console
> novncproxy_base_url = http://10.167.103.101:6080/vnc_auto.html
> logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s 
> %(name)s [%(request_id)s %(user_name)s %(project_name)s] 
%(instance)s%(message)s
> force_config_drive = True
> instances_path = /opt/stack/data/nova/instances
> state_path = /opt/stack/data/nova
> enabled_apis = ec2,osapi_compute,metadata
> instance_name_template = instance-%08x
> my_ip = 10.167.103.101
> s3_port = 
> s3_host = 10.167.103.101
> default_floating_pool = public
> force_dhcp_release = True
> dhcpbridge_flagfile = /etc/nova/nova.conf
> scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
> rootwrap_config = /etc/nova/rootwrap.conf
> api_paste_config = /etc/nova/api-paste.ini
> allow_migrate_to_same_host = True
> allow_resize_to_same_host = True
> debug = True
> verbose = True
> 
> [database]
> connection = mysql://root:open@127.0.0.1/nova?charset=utf8
> 
> [osapi_v3]
> enabled = True
> 
> [keystone_authtoken]
> signing_dir = /var/cache/nova
> cafile = /opt/stack/data/ca-bundle.pem
> auth_uri = http://10.167.103.101:5000
> project_domain_id = default
> project_name = service
> user_domain_id = default
> password = open
> username = nova
> auth_url = http://10.167.103.101:35357
> auth_plugin = password
> 
> [oslo_concurrency]
> lock_path = /opt/stack/data/nova
> 
> [spice]
> #agent_enabled = True
> enabled = false
> html5proxy_base_url = http://10.167.103.101:6082/spice_auto.html
> #server_listen = 127.0.0.1
> #server_proxyclient_address = 127.0.0.1
> 
> [oslo_messaging_rabbit]
> rabbit_userid = stackrabbit
> rabbit_password = open
> rabbit_hosts = 10.167.103.101
> 
> [glance]
> api_servers = http://10.167.103.101:9292
> 
> [cinder]
> os_region_name = RegionOne
> 
> [libvirt]
> vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
> inject_partition = -2
> live_migration_uri = qemu+ssh://ubuntu@%s/system
> use_usb_tablet = False
> cpu_mode = host-model
> virt_type = kvm
> 
> [neutron]
> service_metadata_proxy = True
> url = http://10.167.103.101:9696
> region_name = RegionOne
> admin_tenant_name = service
> auth_strategy = keystone
> admin_auth_url = http://10.167.103.101:35357/v2.0
> admin_password = open
> admin_username = neutron
> 
> [keymgr]
> fixed_key = c5861a510cda58d367a44fc0aee6405e8e03a70f58c03fdc263af8405cf9a0c6
> 
> 

Re: [openstack-dev] [TripleO] Stable/Liberty Backports & Reviews

2015-12-14 Thread Dan Prince
Nice job on this. Looking forward to reaping the benefits of the stable
branch stuff for our upgrades testing too.

Dan

On Fri, 2015-12-11 at 12:35 +, Steven Hardy wrote:
> Hi all,
> 
> So, after the painful process of getting CI working for
> stable/liberty,
> everything is now working pretty well, and I have a few small
> requests to
> hopefully help improve velocity for backports landing:
> 
> 1. Please use "git cherry-pick -x" when backporting from master - this
> is a small detail, but it makes it easier to spot when someone has
> accidentally picked the wrong version of a patch from master, because
> the Change-Id will still match in this case.
> 
> 2. Please either wait until a patch lands on master before proposing
> to
> stable, or mark the stable patch WIP until it does.  It's confusing
> to see
> a "ready for review" patch for stable which hasn't landed on master,
> and it
> will be very easy to accidentally land a patch too soon, or with the
> wrong/stale version (which is one reason why I care about (1) ;)
> 
> 3. Please review the stable branches!  I've created a new
> gerrit-dash-creator to help identify the reviews:
> https://review.openstack.org/#/c/256379/ - this can probably be
> improved, e.g. it'd be nice to distinguish those patches which have
> passed CI, but it's a start.
> 
> 4. When merge conflicts happen on the cherry-pick, please leave the
> Conflicts: line in the commit message (move it to the body above the
> Change-Id) - this helps reviewers pay special attention to files
> where
> manual fixup was needed.
> 
> Also, please remember we agreed that if a tripleo-core is proposing a
> fix, and it's a clean backport (no conflicts), we can consider their
> proposing it as an implicit +2, thus only one reviewer is needed to
> approve (same as normal stable-maint process).  This should help
> improve review velocity and minimise the review burden for
> simple/clean backports.
> 
> Thanks!
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
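For reference, the "cherry-pick -x" behaviour requested in point (1) above can be demonstrated in a throwaway repo. Everything here (file names, branch layout, commit messages) is made up for illustration; only the `-x` provenance line is the point:

```shell
# Minimal demo of "git cherry-pick -x": the backported commit records
# which master commit it came from, so reviewers can match versions.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo base > f.txt && git add f.txt && git commit -qm "base"
git branch stable/liberty                 # stable branch forks here
echo fix >> f.txt && git add f.txt && git commit -qm "fix on master"
sha=$(git rev-parse HEAD)
git checkout -q stable/liberty
git cherry-pick -x "$sha"                 # -x appends provenance
# The line reviewers rely on to spot a stale/wrong pick:
git log -1 --format=%B | grep "cherry picked from commit"
```

If conflicts occur during the pick, resolve them and keep the Conflicts: note in the commit body, per point (4).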



Re: [openstack-dev] [Neutron] Gate failure with grenade

2015-12-14 Thread Armando M.
On 14 December 2015 at 10:18, Paul Michali  wrote:

> Thanks Sean!
>
> On Mon, Dec 14, 2015 at 12:58 PM Armando M.  wrote:
>
>> On 14 December 2015 at 09:51, Sean Dague  wrote:
>>
>>> On 12/14/2015 12:32 PM, Armando M. wrote:
>>> > Hi folks,
>>> >
>>> > Something snuck in past the gate last night [1]. Please stop rechecking
>>> > and pushing in the merge queue until the matter is resolved.
>>> >
>>> > I will follow up with details, if someone knows more, please find me
>>> on IRC.
>>> >
>>> > Thanks,
>>> > Armando
>>> >
>>> > [1]
>>> >
>>> http://logs.openstack.org/00/254900/4/gate/gate-grenade-dsvm-neutron/a9216c9/logs/grenade.sh.txt.gz#_2015-12-14_12_24_12_561
>>>
>>> https://review.openstack.org/#/c/257303/ is the fix, it's top of gate
>>> right now. Apparently it wasn't noticed that those were deprecated
>>> during the liberty cycle, and fixed accordingly.
>>>
>>
This has merged.


>
>> Thanks Sean!
>>
>> Cheers,
>> Armando
>>
>>
>>>
>>> -Sean
>>>
>>> --
>>> Sean Dague
>>> http://dague.net
>>>
>>>
>>>
>>
>
>
>


Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-14 Thread Dan Prince
On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:
> Hi all,
> 
> Today TripleO CI jobs failed because of a new commit introduced in
> puppetlabs-mysql[1]. Jiri Stransky solved it with a temporary fix by
> pinning the puppet module clone to a previous commit in the
> tripleo-common project[2].
> 
> The source-repositories puppet element[3] allows you to pin the puppet
> module clone as well, by adding a reference commit in the
> source-repository- file. In this case, I am talking about the
> source-repository-puppet-modules[4].
> 
> I know you TripleO guys are brave people who live dangerously on the
> cutting edge, but I think the dependencies on puppet modules not
> managed by the OpenStack community should be pinned to the last repo
> tag for the sake of stability.
> 
> What do you think?

I've previously considered adding a stable puppet modules element for
just this case:

https://review.openstack.org/#/c/184844/

Using stable branches of things like MySQL, Rabbit, etc. might make
sense. However, I would want to consider following what the upstream
Puppet community does, specifically because we do want to continue
using the upstream openstack/puppet-* modules, at least for our
upstream CI.

We also want to make sure our stable TripleO jobs use the stable
branches of openstack/puppet-* so we might need to be careful about
pinning those things too.

Dan


>  I can take care of this.
> 
> [1]: https://github.com/puppetlabs/puppetlabs-mysql/commit/bdf4d0f52dfc244d10bbd5b67efb791a39520ed2
> [2]: https://review.openstack.org/#/c/256572/
> [3]: https://github.com/openstack/diskimage-builder/tree/master/elements/source-repositories
> [4]: https://github.com/openstack/tripleo-puppet-elements/blob/master/elements/puppet-modules/source-repository-puppet-modules
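For reference, entries in a source-repository-<name> file use the
diskimage-builder format `<name> <type> <destination> <location> [<ref>]`,
where the optional trailing ref pins the clone. A pinned entry might look
like this (the destination path and tag here are hypothetical):

```
# Hypothetical pinned entry for source-repository-puppet-modules:
# <name> <type> <destination> <location> <ref>
puppetlabs-mysql git /opt/stack/puppet-modules/mysql https://github.com/puppetlabs/puppetlabs-mysql.git 3.6.2
```

Omitting the ref leaves the clone tracking the default branch, which is
what allowed the breaking commit through.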
> 
> --
> Jaume Devesa
> Software Engineer at Midokura



Re: [openstack-dev] [Neutron] Call for review focus

2015-12-14 Thread Rossella Sblendido



On 11/25/2015 11:05 PM, Assaf Muller wrote:

We could then consider running the script automatically on a daily
basis and publishing the
resulting URL in a nice bookmarkable place.


An update on this. The easiest bookmarkable place that I found is my 
blog [1]. I have a script that updates the URL every day; I can do that 
more often. I'd love to have the URL on the wiki, but I think that requires 
creating a patch every day and approving it... not nice at all. Any 
suggestion?


cheers,

Rossella

[1] http://rossella-sblendido.net/2015/12/14/gerrit-url-neutron-reviews/



Re: [openstack-dev] [Neutron] Gate failure with grenade

2015-12-14 Thread Armando M.
On 14 December 2015 at 09:51, Sean Dague  wrote:

> On 12/14/2015 12:32 PM, Armando M. wrote:
> > Hi folks,
> >
> > Something snuck in past the gate last night [1]. Please stop rechecking
> > and pushing in the merge queue until the matter is resolved.
> >
> > I will follow up with details, if someone knows more, please find me on
> IRC.
> >
> > Thanks,
> > Armando
> >
> > [1]
> >
> http://logs.openstack.org/00/254900/4/gate/gate-grenade-dsvm-neutron/a9216c9/logs/grenade.sh.txt.gz#_2015-12-14_12_24_12_561
>
> https://review.openstack.org/#/c/257303/ is the fix, it's top of gate
> right now. Apparently it wasn't noticed that those were deprecated
> during the liberty cycle, and fixed accordingly.
>

Thanks Sean!

Cheers,
Armando


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>


Re: [openstack-dev] [Neutron] Gate failure with grenade

2015-12-14 Thread Sean Dague
On 12/14/2015 12:32 PM, Armando M. wrote:
> Hi folks,
> 
> Something snuck in past the gate last night [1]. Please stop rechecking
> and pushing in the merge queue until the matter is resolved.
> 
> I will follow up with details, if someone knows more, please find me on IRC.
> 
> Thanks,
> Armando
> 
> [1]
> http://logs.openstack.org/00/254900/4/gate/gate-grenade-dsvm-neutron/a9216c9/logs/grenade.sh.txt.gz#_2015-12-14_12_24_12_561

https://review.openstack.org/#/c/257303/ is the fix, it's top of gate
right now. Apparently it wasn't noticed that those were deprecated
during the liberty cycle, and fixed accordingly.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [kuryr] Release Notes for Kuryr

2015-12-14 Thread Kyle Mestery
Howdy Kuryr developers!

Like other OpenStack projects, I've added the functionality to use release
notes with Kuryr. Once [1] merges, we can add release notes in the
"releasenotes/notes" directory. I encourage everyone who has added a
feature item to Kuryr to please add a release note for that feature. Reno
has some nice documentation on how to add a release note here [2], if you
have further questions let me know.

Thanks!
Kyle

[1] https://review.openstack.org/#/c/257450/
[2]
http://docs.openstack.org/developer/reno/usage.html#creating-new-release-notes
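For reference, a reno release note is just a small YAML file dropped into
releasenotes/notes/ (normally generated with `reno new <slug>`, which
appends a random suffix to the file name). A sketch of what one might look
like — the file name and note text here are hypothetical:

```yaml
# Hypothetical releasenotes/notes/add-feature-x-1234abcd5678ef90.yaml
features:
  - |
    Added feature X to Kuryr. Describe the user-visible change here;
    reno also supports sections such as upgrade, deprecations, and fixes.
```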


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Antoni Segura Puimedon
On Mon, Dec 14, 2015 at 6:07 PM, Jaume Devesa  wrote:

> +1
>
> I think it is good compromise. Thanks Ryu!
>
> I understand the CLI will belong to the external part. I much prefer to
> have
> it in a separate project rather than into the plugin. Even if the code is
> tiny.
>

Let me summarize it:

python-midonetclient: low-level API that lives and breathes in
midonet/midonet. Has the current CLI.
python-os-midonetclient: high-level API that is in
openstack/python-midonetclient (can be packaged with a different name).

Are you asking for python-os-midonetclient not to include the cli tool?

I would prefer to stick with the OpenStack practice [1] of having them
together. I don't think developing a python CLI client for the new
python-os-midonetclient that is on par with the neutron CLI tool would
be that big of a task, and I think it would make operations nicer. It
could even find the midonet-api from the zookeeper registry like the
other tools do.

[1] https://github.com/openstack/python-neutronclient/blob/master/setup.cfg

>
> If you just want to do midonet calls for debugging or to check the
> MidoNet virtual infrastructure, it will be cleaner to install it
> without dependencies than to drag in the whole neutron project
> (networking-midonet depends on neutron).
>
> Regards,
>
> On 14 December 2015 at 17:32, Ryu Ishimoto  wrote:
>
>> On Tue, Dec 15, 2015 at 1:00 AM, Sandro Mathys 
>> wrote:
>> > On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto 
>> wrote:
>> >
>> > So if I understand you correctly, you suggest:
>> > 1) the (midonet/internal) low level API stays where it is and will
>> > still be called python-midonetclient.
>> > 2) the (neutron/external) high level API is moved into it's own
>> > project and will be called something like python-os-midonetclient.
>> >
>> > Sounds like a good compromise which addresses the most important
>> > points, thanks Ryu! I wasn't aware that these parts of the
>> > python-midonetclient are so clearly distinguishable/separable but if
>> > so, this makes perfect sense. Not perfectly happy with the naming, but
>> > I figure it's the way to go.
>>
>> Thanks for the endorsement.  Yes, it is trivial to separate them (less
>> than a day of work) because they are pretty much already separated.
>>
>> As for the naming, I think it's better to take a non-disruptive
>> approach so that it's transparent to those currently developing the
>> low level midonet client.  To your question, however, I have another
>> suggestion, which is that for the high level client code, it may also
>> make sense to just include that as part of the plugin.  It's such
>> small code that it might not make sense to separate, and also likely
>> to be used only by the plugin in the future.  Which basically means
>> that the plugin need not depend on any python client library at all.
>> I think this will simplify even further.  It should also be ok to be
>> tied to the plugin release cycles as well assuming that's the only
>> place the client is needed.
>>
>> Cheers,
>> Ryu
>>
>>
>>
>> >
>> > -- Sandro
>>
>>
>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
>
>


  1   2   >