Re: [Openstack] [ironic]ironic-python-agent fails to lookup node with 401 status code

2017-01-11 Thread Pavlo Shchelokovskyy
Hi,

You shouldn't use the latest master IPA version with ironic from the Mitaka
release.
The ironic API endpoint it tries to contact (v1/lookup...) was introduced
during Newton development and is thus present in the ironic API from the Newton
release onwards. The fallback to the old lookup endpoint (implemented as a
vendor driver passthru) was recently removed from IPA on the master branch
(after the Newton release). That means your IPA version tries to contact the
ironic API via an endpoint that does not exist in this ironic version. Use a
ramdisk with IPA built from the stable/mitaka or stable/newton branch.

As for the "without any authentication" point - yes, that's how it currently
works: communication between IPA and the ironic API does not use Keystone
tokens, as we still have to figure out a reliable and secure way to pass
tokens (or the credentials to obtain them) into the ramdisk.
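
For illustration, this is roughly the lookup call a master-branch IPA issues
against the ironic API (a sketch only; the API URL and MAC addresses are
placeholders, and IPA normally gets the URL from its boot parameters):

```python
# Hedged sketch of the token-less lookup request made by a master-branch IPA.
import requests

IRONIC_API = 'http://201.0.0.120:6385'             # placeholder API URL
macs = ['52:54:00:aa:bb:cc', '52:54:00:dd:ee:ff']  # placeholder port MACs

resp = requests.get('%s/v1/lookup' % IRONIC_API,
                    params={'addresses': ','.join(macs)},
                    headers={'Accept': 'application/json'})  # no X-Auth-Token
print(resp.status_code)  # 401 on Mitaka, as in the log above: keystonemiddleware
                         # rejects the token-less request, and /v1/lookup does
                         # not exist in that ironic version anyway
```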

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Jan 12, 2017 at 5:13 AM, int32bit  wrote:

> Hi, All,
>
> I'm a newcomer to OpenStack Ironic. Recently, I have been working on deploying
> ironic manually, and I found that the node is always *blocked in `callback
> wait` status* until it times out. The ironic-api log shows:
>
> 2017-01-12 10:21:00.626 158262 INFO keystonemiddleware.auth_token [-]
> Rejecting request
> 2017-01-12 10:21:00.627 158262 INFO ironic_api [-] 10.0.81.31 "GET
> /v1/lookup?addresses=xxx HTTP/1
>
> I guessed the problem was in IPA, so I dug into the IPA source, traced the
> request process, and found that the IPA client makes its request *without any
> authentication* [1].
>
> [1] https://github.com/openstack/ironic-python-agent/
> blob/master/ironic_python_agent/ironic_api_client.py#L109-L111
>
>
> My ironic version is *5.1.1-1 (mitaka)* and *IPA has been updated to the newest
> version from the master branch*.
>
> My config as follows:
>
> ```
> [keystone_authtoken]
> auth_uri=http://:5000/
> auth_version=v3.0
> identity_uri=http://:35357/
> admin_user=ironic
> admin_password=IRONIC_PASSWORD
> admin_tenant_name=service
>
> [conductor]
> api_url=http://201.0.0.120:6385 # ensure the node can reach this URL
> ```
>
> I'm really not sure if I have missed something or if something is wrong in my
> config.
>
> Thanks for any help!
> krystism
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Infra] Gerrit performance time

2017-01-11 Thread Andreas Jaeger
On 2017-01-12 08:09, Lenny Verkhovsky wrote:
> +1
> 
>
> 
> From: Gary Kotton [mailto:gkot...@vmware.com]
> Sent: Thursday, January 12, 2017 9:05 AM
> To: OpenStack List 
> Subject: [openstack-dev] [Infra] Gerrit performance time
> 
>  
> 
> Hi,
> 
> It takes forever to access gerrit. Anyone else hitting this issue?

Joshua just restarted gerrit and everything should be fine again.

Best to show up on #openstack-infra for such issues,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Openstack+KVM+overcommit, VM priority

2017-01-11 Thread Ivan Derbenev
No, that's not my question.

I'm already overcommitting, but now I need to prioritize one instance above the
others in terms of performance.


From: Eugene Nikanorov 
Sent: 12 January 2017 1:13:27
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: Re: [Openstack] Openstack+KVM+overcommit, VM priority

Ivan,

see if it provides an answer: 
https://ask.openstack.org/en/question/55307/overcommitting-value-in-novaconf/

Regards,
Eugene.

On Wed, Jan 11, 2017 at 1:55 PM, James Downs 
> wrote:
On Wed, Jan 11, 2017 at 09:34:32PM +, Ivan Derbenev wrote:

> If both VMs start using all 64 GB of memory, both of them start using swap

Don't overcommit RAM.

> So, the question is - is it possible to prioritize the 1st VM above the 2nd, so
> the second one will fail before the 1st, to leave the maximum possible
> performance to the most important one?

Do you mean CPU prioritization? There are facilities to allow one VM or
another to have CPU priority, but what if a high-priority VM wants RAM and
you want to OOM the other? That doesn't exist, AFAIK.

Cheers,
-j

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Infra] Gerrit performance time

2017-01-11 Thread Lenny Verkhovsky
+1

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Thursday, January 12, 2017 9:05 AM
To: OpenStack List 
Subject: [openstack-dev] [Infra] Gerrit performance time

Hi,
It takes forever to access gerrit. Anyone else hitting this issue?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Gerrit performance time

2017-01-11 Thread Gary Kotton
Hi,
It takes forever to access gerrit. Anyone else hitting this issue?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-11 Thread Saravanan KR
Thanks John, I would really appreciate it if you could tag me on the
reviews. I will do the same for mine.

Regards,
Saravanan KR

On Wed, Jan 11, 2017 at 8:03 PM, John Fulton  wrote:
> On 01/11/2017 12:56 AM, Saravanan KR wrote:
>>
>> Thanks Emilien and Giulio for your valuable feedback. I will start
>> working towards finalizing the workbook and the actions required.
>
>
> Saravanan,
>
> If you can add me to the review for your workbook, I'd appreciate it. I'm
> trying to solve a similar problem, of computing THT params for HCI
> deployments in order to isolate resources between CephOSDs and NovaComputes,
> and I was also looking to use a Mistral workflow. I'll add you to the review
> of any related work, if you don't mind. Your proposal to get NUMA info into
> Ironic [1] helps me there too. Hope to see you at the PTG.
>
> Thanks,
>   John
>
> [1] https://review.openstack.org/396147
>
>
>>> would you be able to join the PTG to help us with the session on the
>>> overcloud settings optimization?
>>
>> I will come back on this, as I have not planned for it yet. If it
>> works out, I will update the etherpad.
>>
>> Regards,
>> Saravanan KR
>>
>>
>> On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente 
>> wrote:
>>>
>>> On 01/04/2017 09:13 AM, Saravanan KR wrote:


 Hello,

 The aim of this mail is to ease the DPDK deployment with TripleO. I
 would like to see if the approach of deriving THT parameter based on
 introspection data, with a high level input would be feasible.

 Let me briefly describe the complexity of certain parameters, which are
 related to DPDK. The following parameters should be configured for a
 well-performing DPDK cluster:
 * NeutronDpdkCoreList (puppet-vswitch)
 * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
 review)
 * NovaVcpuPinset (puppet-nova)

 * NeutronDpdkSocketMemory (puppet-vswitch)
 * NeutronDpdkMemoryChannels (puppet-vswitch)
 * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
 * Interface to bind DPDK driver (network config templates)

 The complexity of deciding some of these parameters is explained in
 the blog [1], where the CPUs have to be chosen in accordance with the
 NUMA node associated with the interface. We are working on a spec [2] to
 collect the required details from the baremetal via introspection.
 The proposal is to create a Mistral workbook and actions
 (tripleo-common), which will take minimal inputs and derive the actual
 parameter values based on the introspection data. I have created a
 simple workbook [3] with what I have in mind (not final, only a
 wireframe). The expected output of this workflow is to return the list
 of inputs for "parameter_defaults", which will be used for the
 deployment. I would like to hear from the experts whether there are any
 drawbacks with this approach or any other better approach.
>>>
>>>
>>>
>>> Hi, I am not an expert (I think John, on CC, knows more), but this looks
>>> like a good initial step to me.
>>>
>>> Once we have the workbook in good shape, we could probably integrate it
>>> into the tripleo client/common to (optionally) trigger it before every
>>> deployment.
>>>
>>> would you be able to join the PTG to help us with the session on the
>>> overcloud settings optimization?
>>>
>>> https://etherpad.openstack.org/p/tripleo-ptg-pike
>>> --
>>> Giulio Fidente
>>> GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature review sprint on Wednesday 1/11

2017-01-11 Thread Matt Riedemann

On 1/5/2017 7:05 PM, Matt Riedemann wrote:

We agreed in the nova meeting today to hold a feature review sprint next
Wednesday 1/11.



It's about the end of the day for me here so I wanted to post that we 
got 5 features approved today during the review sprint. There are also 
several other changes that are up for review which are very close to 
being approved or have otherwise made good progress since last week. As 
such I've been shuffling things in the etherpad [1] based on progress 
and status.


Great work everyone, let's keep this momentum going.

[1] https://etherpad.openstack.org/p/nova-ocata-feature-freeze

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][tempest][api] community images, tempest tests, and API stability

2017-01-11 Thread GHANSHYAM MANN
Sorry, I could not attend the meeting due to the time zone difference.

But from the meeting logs, it looks like the impression was (if I am not wrong)
that the Tempest test [1] is not doing the right thing and should be OK to
change. I do not think this is the case; the Tempest test is doing what the API
says. Below is what the test does:
 1. Tenant A creates an image with visibility explicitly set to private.
 2. Tenant A adds Tenant B as a member of the created image to allow Tenant B
to use it.
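
For reference, the two API calls that the workflow above corresponds to look
roughly like this (a sketch against the Image API v2; the endpoint, tokens and
project ID are placeholders):

```python
# Hedged sketch of the create-then-share workflow the Tempest test exercises.
import requests

GLANCE = 'http://controller:9292'  # placeholder image API endpoint
HEADERS = {'X-Auth-Token': 'TENANT_A_TOKEN',
           'Content-Type': 'application/json'}

# 1. Tenant A creates an image with visibility explicitly set to 'private'.
image = requests.post(GLANCE + '/v2/images', headers=HEADERS,
                      json={'name': 'demo', 'visibility': 'private'}).json()

# 2. Tenant A adds Tenant B as a member; under the proposed 4-value scheme this
#    returns 409 unless the image was created (or updated) as 'shared'.
resp = requests.post(GLANCE + '/v2/images/%s/members' % image['id'],
                     headers=HEADERS,
                     json={'member': 'TENANT_B_PROJECT_ID'})
print(resp.status_code)
```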

Neither the API reference [2] nor the image sharing workflow [3] says or
recommends anywhere that an image should not be created with visibility
explicitly set to private.

For me, this change breaks people who create an image with visibility
*explicitly* set to private and then add members to it, which will return 409
after the glance proposal.

Also, changing the Tempest test does not solve the problem; a backward
incompatibility will still be introduced in the API.

.. 1
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/image/v2/test_images.py#n338

.. 2 http://developer.openstack.org/api-ref/image/v2/#create-an-image
.. 3
http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html


Regards
Ghanshyam Mann
+818011120698

On Mon, Jan 9, 2017 at 10:30 PM, Brian Rosmaita 
wrote:

> On 1/5/17 10:19 AM, Brian Rosmaita wrote:
> > To anyone interested in this discussion: I put it on the agenda for
> > today's API-WG meeting (16:00 UTC):
> >
> > https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
>
> As you probably noticed in the API-WG newsletter [A], this issue was
> discussed at last week's API-WG meeting.  The complete discussion is in
> the meeting log [B], but the tldr; is that the proposed change is
> acceptable.  I'll address specific points inline for those who are
> interested, but the key request from the Glance team right now is that
> the QA team approve this patch:
>
> https://review.openstack.org/#/c/414261/
>
>
> [A]
> http://lists.openstack.org/pipermail/openstack-dev/2017-
> January/109698.html
> [B]
> http://eavesdrop.openstack.org/meetings/api_wg/2017/api_
> wg.2017-01-05-16.00.log.html
>
> > On 12/25/16 12:04 PM, GHANSHYAM MANN wrote:
> >> Thanks Brian for bringing this up; it is the same thing we have been
> >> discussing last week on the QA channel and on this patch [1], but I
> >> completely agree with Matthew's opinion here. There is no doubt that this
> >> change (4-valued) is much better and clearer than the old one. It gives a
> >> much more defined and clear way of specifying image visibility by/for the
> >> operator/user.
>
> Yes, we think that the change clarifies the visibility semantics of
> images and, in particular, fixes the problem of there being "private"
> images that aren't actually private.
>
> The current situation is easily misunderstood by end users, as evidenced
> by bugs that have been filed, for example,
> https://bugs.launchpad.net/glance/+bug/1394299
> https://bugs.launchpad.net/glance/+bug/1452443
>
> >> The upgrade procedure defined in all the referenced ML threads/specs looks
> >> fine for redefining the visibility of images with or without members to
> >> shared/private. Operator feedback/acceptance of this change makes it
> >> acceptable.
>
> Thanks, we discussed this thoroughly and solicited operator feedback.
>
> >> But for operators/users creating images with visibility *explicitly* set to
> >> "private", this change is going to break them:
> >>
> >> - Images with members already added would not work, as the Tempest test
> >> does [2].
> >>
> >> - They cannot add members as they used to.
>
> Yes, we recognize that this change will introduce an incompatibility in
> the workflow for users who are setting visibility explicitly to
> 'private' upon image creation.  The positive side to this change,
> however, is that when a user requests a private image, that's what
> they'll get.  It's important to note that the default visibility value
> will allow the current create-and-immediately-share workflow to continue
> exactly as it does now.
>
> One way to see how small this change is in context is to look at how it
> will appear in release notes:
>
> https://etherpad.openstack.org/p/glance-ocata-sharing-release-note-draft
>
> The incompatibility you're worried about is explained at line 8.
>
> >> The first one is something tested by Tempest, and it is a valid test as per
> >> the current behaviour of the API.
> >>
> >> There might be a lot of operators doing the same thing who are going to be
> >> broken after this. We really need to think about this change from an API
> >> backward-incompatibility point of view, where upgrading the cloud with the
> >> new visibility definition is all OK, but it breaks the way of usage (an
> >> image with private visibility set explicitly and with added members).
>
> It's possible that some scripts will break, but it's important to note
> that the default visibility upon image creation will allow the current
> workflow to succeed.  While that's small consolation to those whose
> scripts may break, the plus side is that image visibility changes will

Re: [openstack-dev] [nova] The py35 functional nova CI job failures

2017-01-11 Thread Davanum Srinivas
Matt,

Hoping this works - https://review.openstack.org/#/c/419250/

-- Dims

On Wed, Jan 11, 2017 at 10:04 PM, Matt Riedemann
 wrote:
> The gate-nova-tox-db-functional-py35-ubuntu-xenial job was recently added as
> non-voting to the nova check queue but I see that it's got a 100% failure
> rate. These are the tests that are failing:
>
> http://logs.openstack.org/07/282407/21/check/gate-nova-tox-db-functional-py35-ubuntu-xenial/c689c27/testr_results.html.gz
>
> Is there a patch up to fix or skip/blacklist these? Because if not, we're just
> burning resources on a job that's totally busted, which I'd rather not be
> doing as we head into feature freeze and the o-3 milestone.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [ironic]ironic-python-agent fails to lookup node with 401 status code

2017-01-11 Thread int32bit
Hi, All,

I'm a newcomer to OpenStack Ironic. Recently, I have been working on deploying
ironic manually, and I found that the node is always *blocked in `callback wait`
status* until it times out. The ironic-api log shows:

2017-01-12 10:21:00.626 158262 INFO keystonemiddleware.auth_token [-]
Rejecting request
2017-01-12 10:21:00.627 158262 INFO ironic_api [-] 10.0.81.31 "GET
/v1/lookup?addresses=xxx HTTP/1

I guessed the problem was in IPA, so I dug into the IPA source, traced the
request process, and found that the IPA client makes its request *without any
authentication* [1].

[1]
https://github.com/openstack/ironic-python-agent/blob/master/ironic_python_agent/ironic_api_client.py#L109-L111


My ironic version is *5.1.1-1 (mitaka)* and *IPA has been updated to the newest
version from the master branch*.

My config as follows:

```
[keystone_authtoken]
auth_uri=http://:5000/
auth_version=v3.0
identity_uri=http://:35357/
admin_user=ironic
admin_password=IRONIC_PASSWORD
admin_tenant_name=service

[conductor]
api_url=http://201.0.0.120:6385 # ensure the node can reach this URL
```

I'm really not sure if I have missed something or if something is wrong in my config.

Thanks for any help!
krystism
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-11 Thread Matt Riedemann

On 1/9/2017 8:11 AM, Armando M. wrote:

I would like to think that I was able to exercise
the influence on the goals I set out with my first self-nomination [2].



"just like nations are asked to keep their sovereign debt below a 
certain healthy threshold."


I'm not sure which Utopia you're living in, but it must be magical. Was 
that written before or after 2008, or Brexit, or Trump...?


Reading further into your Mitaka candidacy patch there is the mention of 
nova-network and migrating from that. Just looking back to the Vancouver 
summit (maybe even Paris) we were grappling with how to migrate to 
Neutron, or if people even wanted to migrate from nova-network at all 
(remember nova-network has been deprecated twice now). We're now at a 
point that nova-network is deprecated for the second time, Neutron is 
the default networking service in our CI jobs on master *except* for the 
wart that is cells v1, but we're handling that too. The point is, 
Neutron has come a LONG way in the past year and a half and I give a lot 
of credit to the persistent focused effort of you as PTL for Neutron and 
the way this group works, governs itself, and has fostered other leaders 
within the team. It doesn't even seem that long ago but time flies.


I'm glad you'll be staying on and I appreciate the work we've already 
done together between our two projects.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] The py35 functional nova CI job failures

2017-01-11 Thread Matt Riedemann
The gate-nova-tox-db-functional-py35-ubuntu-xenial job was recently 
added as non-voting to the nova check queue but I see that it's got a 
100% failure rate. These are the tests that are failing:


http://logs.openstack.org/07/282407/21/check/gate-nova-tox-db-functional-py35-ubuntu-xenial/c689c27/testr_results.html.gz

Is there a patch up to fix or skip/blacklist these? Because if not, we're
just burning resources on a job that's totally busted, which I'd rather
not be doing as we head into feature freeze and the o-3 milestone.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] placement job is busted in stable/newton (NO MORE HOSTS LEFT)

2017-01-11 Thread Matt Riedemann

On 1/11/2017 8:18 PM, Matt Riedemann wrote:

On 1/11/2017 9:19 AM, Jeremy Stanley wrote:


If you look in the _zuul_ansible/scripts directory you'll see that
shell script which exited nonzero is the one calling devstack-gate,
so we've got something broken near the end of the job as you
surmise. I think it might be the post_test_hook:

http://logs.openstack.org/57/416757/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/dfe0c38/logs/devstack-gate-post_test_hook.txt.gz


Looking in the nova repo, tools/hooks/post_test_hook.sh is a
relative symlink to gate/post_test_hook.sh but for some reason the
job doesn't seem to be following that. You might try recreating this
locally with the logs/reproduce.sh from that run and see if you get
the same behavior.



Hmm, I'm guessing this is somehow related to this:

https://review.openstack.org/#/c/378952/

But I'm not entirely sure how or why yet...I'll have to talk to Old Man
Dague in the morning.



Well, I guess it's less sinister than all that; it was just a matter of
when the nova change landed, which was meant for Newton but happened in
Ocata:


https://review.openstack.org/#/c/376567/

So the script isn't there in Newton. I'll push a change to the job in
project-config to make it only run the hook if it exists.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] networking-sfc stable/newton branch broken

2017-01-11 Thread Armando M.
Hi,

Please have a look at [1]. The branch has been broken for some time now.

Thanks,
Armando

[1]
https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc+branch:stable/newton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] placement job is busted in stable/newton (NO MORE HOSTS LEFT)

2017-01-11 Thread Matt Riedemann

On 1/11/2017 9:19 AM, Jeremy Stanley wrote:


If you look in the _zuul_ansible/scripts directory you'll see that
shell script which exited nonzero is the one calling devstack-gate,
so we've got something broken near the end of the job as you
surmise. I think it might be the post_test_hook:

http://logs.openstack.org/57/416757/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/dfe0c38/logs/devstack-gate-post_test_hook.txt.gz

Looking in the nova repo, tools/hooks/post_test_hook.sh is a
relative symlink to gate/post_test_hook.sh but for some reason the
job doesn't seem to be following that. You might try recreating this
locally with the logs/reproduce.sh from that run and see if you get
the same behavior.



Hmm, I'm guessing this is somehow related to this:

https://review.openstack.org/#/c/378952/

But I'm not entirely sure how or why yet...I'll have to talk to Old Man 
Dague in the morning.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] placement job is busted in stable/newton (NO MORE HOSTS LEFT)

2017-01-11 Thread Matt Riedemann

On 1/11/2017 7:17 AM, Sylvain Bauza wrote:


On a separate change, I also have the placement job being -1 because of
the ComputeFilter saying that the service is disabled because of
'connection of libvirt lost' :

http://logs.openstack.org/20/415520/5/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/19fcab4/logs/screen-n-sch.txt.gz#_2017-01-11_04_33_35_995




That's probably due to one of:

http://status.openstack.org//elastic-recheck/index.html#1646779
http://status.openstack.org//elastic-recheck/index.html#1643911
http://status.openstack.org//elastic-recheck/index.html#1638982

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Unable to add new metrics using meters.yaml

2017-01-11 Thread Srikanth Vavilapalli
Hi

I was following the instructions at
http://docs.openstack.org/admin-guide/telemetry-data-collection.html#meter-definitions
to add new meters to Ceilometer, but was not able to make it work.

I verified meters.yaml file in meter/data folder:

ubuntu@mysite-ceilometer-3:/usr/lib/python2.7/dist-packages/ceilometer/meter/data$
 ls
meters.yaml


I added the following new meter to the end of that file:

  - name: $.payload.name
event_type: 'cord.dns.cache.size'
type: 'gauge'
unit: 'entries'
volume: $.payload.cache_size
user_id: $.payload.user_id
project_id: $.payload.project_id
resource_id: '"cord-" + $.payload.base_id'

When I inject the 'cord.dns.cache.size' metric from a sample publisher to the
rabbitmq server (on the 'openstack' exchange) on which the ceilometer
notification agents are listening, I don't see the metric appearing in the
'ceilometer meter-list' output. Can anyone please let me know if I am missing
any config or change that prevents custom meter processing in Ceilometer?
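
For reference, the sample publisher emits the notification roughly like this (a
minimal sketch using oslo.messaging; the transport URL, publisher_id and
payload values are placeholders, not my actual setup):

```python
# Hedged sketch of a publisher emitting a notification that should match the
# meter definition above (event_type and payload fields mirror the YAML).
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://openstack:secret@controller:5672/')  # placeholder
notifier = oslo_messaging.Notifier(transport,
                                   publisher_id='cord-dns-probe',  # placeholder
                                   driver='messagingv2',
                                   topics=['notifications'])

payload = {
    'name': 'dns.cache.size',    # picked up as the meter name ($.payload.name)
    'cache_size': 4096,          # picked up as the sample volume
    'user_id': 'USER_ID',
    'project_id': 'PROJECT_ID',
    'base_id': 'RESOURCE_UUID',  # resource_id becomes "cord-" + base_id
}
notifier.info({}, 'cord.dns.cache.size', payload)
```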

I appreciate your input.

Thanks
Srikanth

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] glance issue at QA team meeting on 12 January

2017-01-11 Thread GHANSHYAM MANN
Thanks Brian.

An update on the meeting time: the 12th Jan meeting will be at 0900 UTC.
I just sent a mail with the meeting invite.

​-gmann

On Thu, Jan 12, 2017 at 2:10 AM, Brian Rosmaita 
wrote:

> Hello QA Team,
>
> There wasn't an agenda for 12 January on the Meetings/QATeamMeeting page
> on the wiki, so I took the liberty of creating one.  I added an item
> under the "Tempest" section to discuss a patch to modify a Glance test
> that is currently blocking a few Glance Ocata priorities (community
> images and rolling upgrades).
>
> I put the following link on the agenda to provide some background about
> the issue and the subsequent discussion:
> http://lists.openstack.org/pipermail/openstack-dev/2017-
> January/109817.html
>
> So far a few QA team members have weighed in on the discussion, either
> on the ML or on the patch, but I need to get an official decision from
> the QA team on whether the patch is acceptable or not so that we can get
> moving.  As you know, O-3 is fast approaching!
>
> https://review.openstack.org/#/c/414261/
>
> thanks,
> brian
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday Jan 12th at 9:00 UTC

2017-01-11 Thread GHANSHYAM MANN
Hello everyone,

This is a reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, Jan 12th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_January_12th_2017_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the next
meeting will be at:

04:00 EST

18:00 JST

18:30 ACST

11:00 CEST

04:00 CDT

02:00 PDT

-gmann
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-11 Thread Sridar Kandaswamy (skandasw)
Sad to see you step down; thanks so much, Armando, for taking a "How can I help
remove any roadblocks so we can move forward" approach to leadership. As part
of a smaller project, your support has certainly helped us make progress.

Thanks

Sridar
PS: I was wondering if there will be a farewell speech somewhere to match other 
recent events. :-)

From: joehuang >
Reply-To: OpenStack List 
>
Date: Tuesday, January 10, 2017 at 4:52 PM
To: OpenStack List 
>
Subject: Re: [openstack-dev] [neutron] PTL nominations deadline and 
non-candidacy

Sad to know that you will step down as Neutron PTL. I had several f2f talks with
you, and got lots of valuable feedback from you. Thanks a lot!

Best Regards
Chaoyi Huang (joehuang)

From: Armando M. [arma...@gmail.com]
Sent: 09 January 2017 22:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

Hi neutrinos,

The PTL nomination week is fast approaching [0], and as you might have guessed 
by the subject of this email, I am not planning to run for Pike. If I look back 
at [1], I would like to think that I was able to exercise the influence on the 
goals I set out with my first self-nomination [2].

That said, when it comes to a dynamic project like neutron one can never
claim to be *done done*, and for this reason I will continue to be part of the
neutron core team and help the future PTL drive the next stage of the
project's journey.

I must admit, I don't write this email lightly, however I feel that it is now 
the right moment for me to step down, and give someone else the opportunity to 
grow in the amazing role of neutron PTL! I have certainly loved every minute of 
it!

Cheers,
Armando

[0] https://releases.openstack.org/ocata/schedule.html
[1] 
https://review.openstack.org/#/q/project:openstack/election+owner:armando-migliaccio
[2] https://review.openstack.org/#/c/223764/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova] Do you use os-instance-usage-audit-log?

2017-01-11 Thread Matt Riedemann

On 1/11/2017 5:09 PM, Matt Riedemann wrote:


That table is populated in a periodic task from all computes that have
it enabled and by default it 'audits' instances created in the last
month (the time window is adjustable via the
'instance_get_active_by_window_joined' config option).



Oops, I meant the time window is adjustable via the 
'instance_usage_audit_period' config option.


--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] Do you use os-instance-usage-audit-log?

2017-01-11 Thread Matt Riedemann
Nova's got this REST API [1] which pulls task_log data from the nova 
database if the 'instance_usage_audit' config option value is True on 
any compute host.


That table is populated in a periodic task from all computes that have 
it enabled and by default it 'audits' instances created in the last 
month (the time window is adjustable via the 
'instance_get_active_by_window_joined' config option).
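
For reference, a minimal nova.conf sketch of what enabling this on a compute
host looks like (a sketch only; 'instance_usage_audit_period' is the period
option's actual name per the follow-up correction, and 'month' is the default):

```
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = month
```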


The periodic task also emits a 'compute.instance.exists' notification 
for each instance on that compute host which falls into the audit 
period. I'm fairly certain that notification is meant to be consumed by 
Ceilometer, which is going to store it in its own time-series database.


It just so happens that Nova is also storing this audit data in its own 
database, and never cleaning it up - the only way in-tree to move that 
data out of the nova.task_log table is to archive it into shadow tables, 
but that doesn't cut down on the bloat in your database. That 
os-instance-usage-audit-log REST API relies on the nova database, though.


So my question is, is anyone using this in any shape or form, either via 
the Nova REST API or Ceilometer? Or are you using it in one form but not 
the other (maybe only via Ceilometer)? If you're using it, how are you 
controlling the table growth, i.e. are you deleting records over a 
certain age from the nova database using a cron job?
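
For example, I mean roughly this kind of thing run periodically from cron (a
sketch only; the connection URL and the 90-day retention window are
placeholders, and task_log/period_ending are the nova table and column I'm
referring to):

```python
# Hedged sketch of a cron-driven cleanup of old task_log audit records.
from datetime import datetime, timedelta
from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://nova:secret@dbhost/nova')  # placeholder
cutoff = datetime.utcnow() - timedelta(days=90)                    # example age

with engine.begin() as conn:
    conn.execute(text('DELETE FROM task_log WHERE period_ending < :cutoff'),
                 {'cutoff': cutoff})
```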


Mike Bayer was going to try and find some large production data sets to 
see how many of these records are in a big and busy production DB that's 
using this feature, but I'm also simply interested in how people use 
this, if it's useful at all, and if there is interest in somehow putting 
a limit on the data, i.e. we could add a config option to nova to only 
store records in the task_log table under a certain max age.


[1] 
http://developer.openstack.org/api-ref/compute/#server-usage-audit-log-os-instance-usage-audit-log


--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [os-ansible-deployment] Periodic job in infra to test upgrades?

2017-01-11 Thread Sean M. Collins
OK - with https://review.openstack.org/#/c/418521/ we have at least a
working POC of what we can do.

The issue is that we're running into the Zuul timeout.

Depending on how quickly the AIO is built, we can get to the point where
we run the upgrade script [2].

However, in some runs we don't get to the end of the AIO build [3].

So, the question is, how do we proceed? I'm not a real LXC expert, but if
we could somehow cache stable builds of the LXC containers, so that
bootstrapping the AIO just means downloading and launching them, we could
use the majority of the Zuul runtime to execute the upgrade script; that'd
be great.

I know we have diskimage-builder, which does something sort of like this;
maybe we can do something similar for the LXC containers?


[1]: 
http://logs.openstack.org/21/418521/7/experimental/gate-openstack-ansible-openstack-ansible-upgrade-ubuntu-xenial-nv/6704087/console.html#_2017-01-11_05_13_16_114022
[2]: 
http://logs.openstack.org/21/418521/7/experimental/gate-openstack-ansible-openstack-ansible-upgrade-ubuntu-xenial-nv/6704087/console.html#_2017-01-11_05_13_24_895056
[3]: 
http://logs.openstack.org/21/418521/8/experimental/gate-openstack-ansible-openstack-ansible-upgrade-ubuntu-xenial-nv/ac09458/console.html#_2017-01-11_21_13_55_572404
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] diskimage-builder Package grub-pc is not available error running on Fedora 25 AArch64

2017-01-11 Thread dmarlin

On 01/11/2017 03:06 PM, Andre Florath wrote:

Hello!

It looks like you are testing a somewhat uncommon combination ;-)


Understood.  I am working primarily with 64-bit ARM, so much of what I 
try may be uncommon (being a relatively new architecture).



The error points in the direction that Ubuntu Xenial for arm does
not supply a 'grub-pc' package [1].

Would be good if you could file a ticket.


Done:
  https://bugs.launchpad.net/diskimage-builder/+bug/1655765


Thank you,

d.marlin
=



Kind regards

Andre


[1] 
http://packages.ubuntu.com/search?suite=xenial=arm64=names=grub-pc



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Openstack+KVM+overcommit, VM priority

2017-01-11 Thread Eugene Nikanorov
Ivan,

see if it provides an answer:
https://ask.openstack.org/en/question/55307/overcommitting-value-in-novaconf/

Regards,
Eugene.

On Wed, Jan 11, 2017 at 1:55 PM, James Downs  wrote:

> On Wed, Jan 11, 2017 at 09:34:32PM +, Ivan Derbenev wrote:
>
> > If both VMs start using all 64 GB of memory, both of them start using swap
>
> Don't overcommit RAM.
>
> > So, the question is - is it possible to prioritize the 1st VM above the 2nd,
> > so the second one will fail before the 1st, to leave the maximum possible
> > performance to the most important one?
>
> Do you mean CPU prioritization? There are facilities to allow one VM or
> another to have CPU priority, but what if a high-priority VM wants RAM and
> you want to OOM the other? That doesn't exist, AFAIK.
>
> Cheers,
> -j
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Openstack+KVM+overcommit, VM priority

2017-01-11 Thread James Downs
On Wed, Jan 11, 2017 at 09:34:32PM +, Ivan Derbenev wrote:

> If both VMs start using all 64 GB of memory, both of them start using swap

Don't overcommit RAM.

> So, the question is - is it possible to prioritize the 1st VM above the 2nd, so
> the second one will fail before the 1st, to leave the maximum possible
> performance to the most important one?

Do you mean CPU prioritization? There are facilities to allow one VM or
another to have CPU priority, but what if a high-priority VM wants RAM and
you want to OOM the other? That doesn't exist, AFAIK.

Cheers,
-j

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Openstack+KVM+overcommit, VM priority

2017-01-11 Thread Ivan Derbenev
Hello, guys!

Imagine we have a compute node with the KVM hypervisor installed.

It has 64 GB of RAM and a quad-core processor.


We create 2 machines in nova on this host - both with 64 GB and 4 vCPUs.

If both VMs start using all 64 GB of memory, both of them start using swap.

The same goes for CPU - they use it equally.


So, the question is - is it possible to prioritize the 1st VM above the 2nd, so
the second one will fail before the 1st, to leave the maximum possible
performance to the most important one?

It's like production and secondary services running on the same node.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

2017-01-11 Thread Paul Belanger
On Wed, Jan 11, 2017 at 04:04:10PM -0500, Paul Belanger wrote:
> On Sun, Jan 08, 2017 at 02:45:28PM -0600, Gregory Haynes wrote:
> > On Fri, Jan 6, 2017, at 09:57 AM, Paul Belanger wrote:
> > > On Fri, Jan 06, 2017 at 09:48:31AM +0100, Andre Florath wrote:
> > > > Hello Paul,
> > > > 
> > > > thank you very much for your contribution - it is very appreciated.
> > > > 
> > 
> > Seconded - I'm very excited for some effort to be put in to improving
> > the use case of making containers with DIB. Thanks :).
> > 
> > > > You addressed a topic with your patch set that was IMHO not in a wide
> > > > focus: generating images for containers.  The ideas in the patches are
> > > > good and should be implemented.
> > > > 
> > > > Nevertheless I'm missing the concept behind your patches. What I saw
> > > > are a couple of (independent?) patches - and it looks that there is
> > > > one 'big goal' - but I did not really get it.  My proposal is (as it
> > > > is done for other bigger changes or introducing new concepts) that
> > > > you write a spec for this first [1].  That would help other people
> > > > (see e.g. Matthew) to use the same blueprint also for other
> > > > distributions.
> > 
> > I strongly agree with the point that this is something we're going to end
> > up repeating across many distros so we should make sure there's some
> > common patterns for doing so. A spec seems fine to me, but ideally the
> > end result involves some developer documentation. A spec is probably a
> > good place to get started on getting some consensus which we can turn in
> > to the dev docs.
> > 
> The plan is to start with Ubuntu, then move to Debian, then Fedora and finally
> CentOS. Fedora and CentOS are obviously harder, since a debootstrap tool
> doesn't exist.
> 
I just created a tripleo-spec outlining the current implementation. We all agree
this is the first step.

https://review.openstack.org/#/c/419139/

> > > Sure, I can write a spec if needed but the TL;DR is:
> > > 
> > > Use diskimage-builder to build debootstrap --variant=minbase chroot, and
> > > nothing
> > > else. So I can then use take the generated tarball and do something else
> > > with
> > > it.
> > > 
> > > > One possibility would be to classify different element sets and define
> > > > the dependency between them.  E.g. to have a element class 'container'
> > > > which can be referenced by other classes, but is not able to reference
> > > > these (e.g. VM or hardware specific things).
> > > > 
> > 
> > It sounds like we need to step back a bit and get a clear idea of how we're
> > going to manage the full use case matrix of distro * (minimal / full) *
> > (container / vm / baremetal), which is something that would be nice to
> > get consensus on in a spec. This is something that keeps tripping up
> > both users and devs and I think adding containers to the matrix is sort
> > of a tipping point in terms of complexity so again, some docs after
> > figuring out our plan would be *awesome*.
> > 
> > Currently we have distro-minimal elements which are minimal
> > vm/baremetal, and distro elements which actually are full vm/baremetal
> > elements. I assume by adding an element class you mean add a set of
> > distro-container elements? If so, I worry that we might be falling in to
> > a common dib antipattern of making distro-specific elements. I have a
> > alternate proposal:
> > 
> > Lets make two elements: kernel, and minimal-userspace which,
> > respectively, install the kernel package and a minimal set of userspace
> > packages for dib to function (e.g. dependencies for dib-run-parts,
> > package-installs). The kernel package should be doable as basically a
> > package-installs and a pkg-map. The minimal-userspace element gets
> > tricky because it needs to install deps which are required for things
> > like package-installs to function (which is why the various distro
> > elements do this independently).  Even so, I think it would be nice to
> > take care of installing these from within the chroot rather than from
> > outside (see https://review.openstack.org/#/c/392253/ for a good reason
> > why). If we do this then the minimal-userspace element can have some
> > common logic to enter the chroot as part of root.d and then install the
> > needed deps.
> > 
> > The end result of this would be we have distro-minimal which depends on
> > kernel, minimal-userspace, and yum/debootstrap to build a vm/baremetal
> > capable image. We could also create a distro-container element which
> > only depends on minimal-userspace and yum/debootstrap and creates a
> > minimal container. The point being - the top level -container or
> > -minimal elements are basically convenience elements for exporting a few
> > vars and pulling in the proper elements at this point and the
> > elements/code are broken down by the functionality they provide rather
> > than use case.
> > 
> To be honest, this is a ton of work just to create a debootstrap 'operating
> system' element. I'm 

[openstack-dev] [nova] No cells meeting next week (Jan 18)

2017-01-11 Thread Dan Smith
Hi all,

There will be no cells meeting next week, Jan 18 2017. I'll be in the
wilderness and nobody else was brave enough to run it in my absence.
Yeah, something like that.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] Announcing Nodepool 0.4.0

2017-01-11 Thread James E. Blair
Hi,

In December we released version 0.3.1 of Nodepool which is the last
version that doesn't use ZooKeeper[1].

Today we're releasing 0.4.0 which includes major changes to the image
building component of Nodepool.  We have been using this in production
for OpenStack for about a month and have been happy with the
performance.  We encourage anyone using Nodepool to begin using this
version now and report any problems.

Here's what you need to know:

* You should install ZooKeeper.  We don't currently support SSL or
  authenticated access (though we plan to; if you want to help add
  support for this, let me know).  You should keep that in
  mind and place sufficient access controls.

  The main nodepool host and any additional hosts used for image
  building need to be able to connect to ZooKeeper.

  You may run a single node ZooKeeper "cluster" on the nodepool server
  itself for a small installation.  We do this for OpenStack infra and
  it works well.  If using a single install you may want to reduce
  snapCount to a smaller value (we are using 1) in order to reduce
  recovery time. Larger installations, or ones which wish to take
  advantage of ZooKeeper's fault tolerance, may run clusters of 3 or
  more machines.  We plan to do this in OpenStack infra in the future.

  See additional related configuration settings in
  http://docs.openstack.org/infra/nodepool/configuration.html

* Previous versions of Nodepool allowed you to run the builder as a
  separate process, or you could choose to allow the main Nodepool
  daemon to control that process for you.  Because that was potentially
  confusing (especially considering that typically the image building
  and node launching processes are so separate that an operator would
  rarely want to stop both at once) we have removed the converged
  process option and now only support a separate builder process.

  There are two daemons that make up nodepool: nodepoold and
  nodepool-builder.  Each needs its own init script (or whatever you
  want to call it) and must be started independently.  You may run one
  without the other, and you may colocate them on the same host or on
  separate hosts.

  The main nodepool daemon (nodepoold) will log a deprecation warning
  message if you provide the '--no-builder' option.  Please remove it
  from your init scripts as it will be removed entirely in a future
  version.

* You may run as many nodepool-builder processes as necessary for your
  environment.  Each process allows you to configure the number of build
  threads and upload threads.  Due to potential limitations in how some
  diskimage-builder elements work, we don't generally recommend running
  more than one builder process on a machine (but if you know that will
  work for you, the ability is there).  Generally, if you want to build
  more than one image in parallel, add more builder hosts.

* If you are using our puppet modules to deploy Nodepool, the
  openstackci puppet module provides a nodepool_builder class which can
  be used to instantiate the builder.

* We have removed support for snapshot based image builds; the only
  supported method of image building is using diskimage-builder.  We do
  plan on adding support for using existing images supplied by the
  provider in the not-too-distant future.

  If you are currently using the snapshot based approach, please look
  into switching to using diskimage-builder.  We reliably use it to
  build images for many platforms in OpenStack infra.  One frequent
  use-case for snapshot builds is to take an existing local image and
  further customize it.  Note that diskimage-builder supports starting
  its process with any image, so that workflow can still be easily
  achieved.  If you need assistance with this, please feel free to ask
  on this list or in #openstack-infra and people will be happy to help
  with the transition.

* The nodepool builder is now far more aggressive about building and
  uploading images.  If you need to temporarily stop an image build, see
  the new 'pause' attribute in
  http://docs.openstack.org/infra/nodepool/configuration.html

Thanks to all the folks who have helped work on this release and the
related roll-out in OpenStack infra!

-Jim

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2016-December/004972.html

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] diskimage-builder Package grub-pc is not available error running on Fedora 25 AArch64

2017-01-11 Thread Andre Florath
Hello!

It looks like you are testing a somewhat uncommon combination ;-)

The error points in the direction that Ubuntu Xenial for arm does
not supply a 'grub-pc' package [1].

Would be good if you could file a ticket.

Kind regards

Andre


[1] 
http://packages.ubuntu.com/search?suite=xenial=arm64=names=grub-pc



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

2017-01-11 Thread Paul Belanger
On Sun, Jan 08, 2017 at 02:45:28PM -0600, Gregory Haynes wrote:
> On Fri, Jan 6, 2017, at 09:57 AM, Paul Belanger wrote:
> > On Fri, Jan 06, 2017 at 09:48:31AM +0100, Andre Florath wrote:
> > > Hello Paul,
> > > 
> > > thank you very much for your contribution - it is very appreciated.
> > > 
> 
> Seconded - I'm very excited for some effort to be put in to improving
> the use case of making containers with DIB. Thanks :).
> 
> > > You addressed a topic with your patch set that was IMHO not in a wide
> > > focus: generating images for containers.  The ideas in the patches are
> > > good and should be implemented.
> > > 
> > > Nevertheless I'm missing the concept behind your patches. What I saw
> > > are a couple of (independent?) patches - and it looks that there is
> > > one 'big goal' - but I did not really get it.  My proposal is (as it
> > > is done for other bigger changes or introducing new concepts) that
> > > you write a spec for this first [1].  That would help other people
> > > (see e.g. Matthew) to use the same blueprint also for other
> > > distributions.
> 
> I strongly agree with the point that this is something we're going to end
> up repeating across many distros so we should make sure there's some
> common patterns for doing so. A spec seems fine to me, but ideally the
> end result involves some developer documentation. A spec is probably a
> good place to get started on getting some consensus which we can turn in
> to the dev docs.
> 
The plan is to start with Ubuntu, then move to Debian, then Fedora and finally
CentOS. Fedora and CentOS are obviously harder, since a debootstrap tool doesn't
exist.

> > Sure, I can write a spec if needed but the TL;DR is:
> > 
> > Use diskimage-builder to build debootstrap --variant=minbase chroot, and
> > nothing
> > else. So I can then use take the generated tarball and do something else
> > with
> > it.
> > 
> > > One possibility would be to classify different element sets and define
> > > the dependency between them.  E.g. to have a element class 'container'
> > > which can be referenced by other classes, but is not able to reference
> > > these (e.g. VM or hardware specific things).
> > > 
> 
> It sounds like we need to step back a bit and get a clear idea of how we're
> going to manage the full use case matrix of distro * (minimal / full) *
> (container / vm / baremetal), which is something that would be nice to
> get consensus on in a spec. This is something that keeps tripping up
> both users and devs and I think adding containers to the matrix is sort
> of a tipping point in terms of complexity so again, some docs after
> figuring out our plan would be *awesome*.
> 
> Currently we have distro-minimal elements which are minimal
> vm/baremetal, and distro elements which actually are full vm/baremetal
> elements. I assume by adding an element class you mean add a set of
> distro-container elements? If so, I worry that we might be falling in to
> a common dib antipattern of making distro-specific elements. I have a
> alternate proposal:
> 
> Lets make two elements: kernel, and minimal-userspace which,
> respectively, install the kernel package and a minimal set of userspace
> packages for dib to function (e.g. dependencies for dib-run-parts,
> package-installs). The kernel package should be doable as basically a
> package-installs and a pkg-map. The minimal-userspace element gets
> tricky because it needs to install deps which are required for things
> like package-installs to function (which is why the various distro
> elements do this independently).  Even so, I think it would be nice to
> take care of installing these from within the chroot rather than from
> outside (see https://review.openstack.org/#/c/392253/ for a good reason
> why). If we do this then the minimal-userspace element can have some
> common logic to enter the chroot as part of root.d and then install the
> needed deps.
> 
> The end result of this would be we have distro-minimal which depends on
> kernel, minimal-userspace, and yum/debootstrap to build a vm/baremetal
> capable image. We could also create a distro-container element which
> only depends on minimal-userspace and yum/debootstrap and creates a
> minimal container. The point being - the top level -container or
> -minimal elements are basically convenience elements for exporting a few
> vars and pulling in the proper elements at this point and the
> elements/code are broken down by the functionality they provide rather
> than use case.
> 
To be honest, this is a ton of work just to create a debootstrap 'operating
system' element. I'm actually pretty happy with how things look today with our
-minimal elements. But it will be an uphill battle to do the work you are
asking for.

I can especially understand the need to refactor code and optimize, but just
looking at the effort to create minimal / cloud elements [6], it's been ongoing
since Oct. 2015. We haven't even landed that.

[6] https://review.openstack.org/#/c/211859/

> > > There 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Michał Jastrzębski
Hey Brandon,

So, a couple of comments on your mail.
Kolla at its core is a community which took on preparing the deployment of
OpenStack using Docker containers. This same community is now working with
both Ansible and k8s as means to deploy these containers. So far we're
preserving that single community to allow full cooperation and, let's be
honest, we are still learning and experimenting, especially in the k8s space.

As for k8s being an abstraction layer over the container runtime, it indeed is,
but that's only part of the story. The container runtime is less important than
its ABI for the k8s deployment mechanism. With OCI containers, what we have
today as Docker containers can (I don't really know, never tested) be made
compatible with rkt, maybe with a bit of work. What's more important is how to
interact with these containers. Kolla has honed our container ABI over multiple
releases and we are still working on it. While k8s can run multiple container
formats, how you interact with them depends on how the containers are built.
While I can clearly see the benefit of having a multi-runtime mechanism like
that, all containers should follow the same ABI for the deployment code to
consume, and as far as I know (please correct me if I'm wrong), there is no
alternative to Kolla's images that would be compatible with the Kolla ABI. So
the question about multiple runtimes remains hypothetical until one of these
appears. If there is a community working on an alternative image format, I'd
love to talk to them so we can try to keep our ABIs compatible, so that
deployment projects like the one you describe can have this choice too. I'd go
further still: if such a project appeared (an alternative container format),
I'd be happy to discuss kolla-ansible and kolla-kubernetes being able to
consume it too! It's just that nobody has done that, is doing that, or plans to
do that, as far as I know.

Cheers,
Michal

On 11 January 2017 at 12:09, Steven Dake (stdake)  wrote:

> Sure – you asked me and I thought you wanted an answer from me (which falls
> under the “do not use OpenStack properties (i.e. this mailing list) for
> promotion of candidates” email that Mark sent out).
>
>
>
> Others are able to answer in the broader Kolla community.
>
>
>
> Regards
>
> -steve
>
>
>
> *From: *"Brandon B. Jozsa" 
> *Date: *Wednesday, January 11, 2017 at 1:01 PM
> *To: *"Britt Houser (bhouser)" , "Steven Dake
> (stdake)" , "OpenStack Development Mailing List (not
> for usage questions)" 
>
> *Subject: *Re: [openstack-dev] [tc][kolla] Adding new deliverables
>
>
>
>
>
> I’m not entirely sure how the two relate, but anyone from Kolla can
> respond.
>
>
>
> Brandon B. Jozsa
>
>
>
> On January 11, 2017 at 2:49:07 PM, Steven Dake (stdake) (std...@cisco.com)
> wrote:
>
> Brandon,
>
>
>
> Your question is a mix of political and technical aspects that I am not
> permitted to answer until Monday because of my parsing of this email from
> Mark Collier:
>
>
>
> http://lists.openstack.org/pipermail/foundation/2017-January/002446.html
>
>
>
> I will answer you Monday after the individual board of directors elections
> conclude.
>
>
>
> Regards
>
> -steve
>
>
>
>
>
> *From: *"Brandon B. Jozsa" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, January 11, 2017 at 12:36 PM
> *To: *"Britt Houser (bhouser)" , "OpenStack
> Development Mailing List (not for usage questions)"  openstack.org>
> *Subject: *Re: [openstack-dev] [tc][kolla] Adding new deliverables
>
>
>
> To your point Steve, then I’d imagine that Kolla would have no objection to
> the introduction of other Openstack-namespace projects that provide
> alternative image formats, integration choices, or orchestration variances
> for those in the larger community who do not want to use Kolla images. All
> of the Kolla-x projects point to this one source of truth in the end. This
> leads in large part to the many projects falling under the Kolla umbrella:
> Kolla, Kolla-Mesos, Kolla-Ansible, Kolla-Kubernetes, Kolla-Salt, and I’d
> assume whatever else wants to consume Kolla, if things continue as they are.
>
>
>
> My immediate ask is "what are the potential negative impacts to Kolla
> having so many projects under one mission”: fragmentation of goals,
> misunderstanding of mission, increased developer debt across each
> inter-twined project (cross-repo commits and reviews), complex gating
> requirements? #kolla has been a place of spirited debate with the recent
> addition of Kolla-Kubernetes, and I think some of this is the result of the
> problems I’m alluding to. It’s very difficult to preserve what Kolla is at
> its core, and in turn preserve the benefits of something like Kubernetes
> which has a Runtime Interface abstraction model. It’s a tough sell for the
> larger Openstack community, and this is a critical time for Openstack and
> CNCF interoperability; 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Steven Dake (stdake)
Sure – you asked me and I thought you wanted an answer from me (which falls 
under the “do not use OpenStack properties (i.e. this mailing list) for 
promotion of candidates” email that Mark sent out).

Others are able to answer in the broader Kolla community.

Regards
-steve

From: "Brandon B. Jozsa" 
Date: Wednesday, January 11, 2017 at 1:01 PM
To: "Britt Houser (bhouser)" , "Steven Dake (stdake)" 
, "OpenStack Development Mailing List (not for usage 
questions)" 
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables


I’m not entirely sure how the two relate, but anyone from Kolla can respond.

Brandon B. Jozsa


On January 11, 2017 at 2:49:07 PM, Steven Dake (stdake) 
(std...@cisco.com) wrote:
Brandon,

Your question is a mix of political and technical aspects that I am not 
permitted to answer until Monday because of my parsing of this email from Mark 
Collier:

http://lists.openstack.org/pipermail/foundation/2017-January/002446.html

I will answer you Monday after the individual board of directors elections 
conclude.

Regards
-steve


From: "Brandon B. Jozsa" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, January 11, 2017 at 12:36 PM
To: "Britt Houser (bhouser)" , "OpenStack Development 
Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

To your point Steve, then I’d imagine that Kolla would have no objection to the 
introduction of other Openstack-namespace projects that provide alternative 
image formats, integration choices, or orchestration variances for those in the 
larger community who do not want to use Kolla images. All of the Kolla-x 
projects point to this one source of truth in the end. This leads in large part to 
the many projects falling under the Kolla umbrella: Kolla, Kolla-Mesos, 
Kolla-Ansible, Kolla-Kubernetes, Kolla-Salt, and I’d assume whatever else wants 
to consume Kolla, if things continue as they are.

My immediate ask is "what are the potential negative impacts to Kolla having so 
many projects under one mission”: fragmentation of goals, misunderstanding of 
mission, increased developer debt across each inter-twined project (cross-repo 
commits and reviews), complex gating requirements? #kolla has been a place of 
spirited debate with the recent addition of Kolla-Kubernetes, and I think some 
of this is the result of the problems I’m alluding to. It’s very difficult to 
preserve what Kolla is at its core, and in turn preserve the benefits of 
something like Kubernetes which has a Runtime Interface abstraction model. It’s 
a tough sell for the larger Openstack community, and this is a critical time 
for Openstack and CNCF interoperability; would you not agree?

I’m failing to see the benefits you mention outweighing what others might see 
as potential pitfalls. My viewpoint is not news to those in Kolla. I’ve 
expressed this in Kolla already, and this is why I’m disappointed when 
Kolla-Kubernetes drops specs in favor of quicker ad-hoc IRC 
architecturally-focused discussions.

So my question now becomes; "How is Kolla addressing these issues, and what has 
Kolla been doing with the assistance of the Openstack Foundation to gain the 
confidence of those who are watching Kolla and looking for that next cool 
container project”?

Brandon B. Jozsa


On January 11, 2017 at 1:46:13 PM, Britt Houser (bhouser) 
(bhou...@cisco.com) wrote:
My sentiments exactly Michal. We’ll get there, but let’s not jump the gun quite 
yet.

On 1/11/17, 1:38 PM, "Michał Jastrzębski"  wrote:

So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it will
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up mess that split of kolla-ansible caused (documentation and
whatnot). Having small revolution like that again is something that
would greatly hinder our ability to deliver valuable project, and I
think for now that should be our priority.

To me, at least before we will have more than one prod-ready
deployment tool, separation of projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I 

[openstack-dev] [infra][diskimage-builder][glean] nodepool dsvm (non-voting) check jobs

2017-01-11 Thread Paul Belanger
Greetings,

I'd like to mention we recently expanded our nodepool devstack jobs to include
glean and diskimage-builder. Specifically, we now attempt to build, upload and
SSH into DIBs produced by the jobs.  You'll notice 2 jobs now:

gate-dsvm-nodepool-debian-src-nv
 - ubuntu-precise
 - ubuntu-trusty
 - ubuntu-xenial
 - debian-jessie (currently missing)

gate-dsvm-nodepool-redhat-src-nv
 - centos-7
 - fedora-24

While they are non-voting, please only merge code if these jobs have passed.
This is to help cut down on the issues we find with nodepool.o.o after we tag
releases. So far, I'm happy with how well they are working; job build times hover
around 25-30 mins.

If you have any questions, feel free to ask in #openstack-infra.

---
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Brandon B. Jozsa

I’m not entirely sure how the two relate, but anyone from Kolla can respond.

Brandon B. Jozsa


On January 11, 2017 at 2:49:07 PM, Steven Dake (stdake) 
(std...@cisco.com) wrote:
Brandon,

Your question is a mix of political and technical aspects that I am not 
permitted to answer until Monday because of my parsing of this email from Mark 
Collier:

http://lists.openstack.org/pipermail/foundation/2017-January/002446.html

I will answer you Monday after the individual board of directors elections 
conclude.

Regards
-steve


From: "Brandon B. Jozsa" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, January 11, 2017 at 12:36 PM
To: "Britt Houser (bhouser)" , "OpenStack Development 
Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

To your point Steve, then I’d imagine that Kolla would have no objection to the 
introduction of other Openstack-namespace projects that provide alternative 
image formats, integration choices, or orchestration variances for those in the 
larger community who do not want to use Kolla images. All of the Kolla-x 
projects point to this one source of truth in the end. This leads in large part to 
the many projects falling under the Kolla umbrella: Kolla, Kolla-Mesos, 
Kolla-Ansible, Kolla-Kubernetes, Kolla-Salt, and I’d assume whatever else wants 
to consume Kolla, if things continue as they are.

My immediate ask is "what are the potential negative impacts to Kolla having so 
many projects under one mission”: fragmentation of goals, misunderstanding of 
mission, increased developer debt across each inter-twined project (cross-repo 
commits and reviews), complex gating requirements? #kolla has been a place of 
spirited debate with the recent addition of Kolla-Kubernetes, and I think some 
of this is the result of the problems I’m alluding to. It’s very difficult to 
preserve what Kolla is at its core, and in turn preserve the benefits of 
something like Kubernetes which has a Runtime Interface abstraction model. It’s 
a tough sell for the larger Openstack community, and this is a critical time 
for Openstack and CNCF interoperability; would you not agree?

I’m failing to see the benefits you mention outweighing what others might see 
as potential pitfalls. My viewpoint is not news to those in Kolla. I’ve 
expressed this in Kolla already, and this is why I’m disappointed when 
Kolla-Kubernetes drops specs in favor of quicker ad-hoc IRC 
architecturally-focused discussions.

So my question now becomes; "How is Kolla addressing these issues, and what has 
Kolla been doing with the assistance of the Openstack Foundation to gain the 
confidence of those who are watching Kolla and looking for that next cool 
container project”?

Brandon B. Jozsa


On January 11, 2017 at 1:46:13 PM, Britt Houser (bhouser) 
(bhou...@cisco.com) wrote:
My sentiments exactly Michal. We’ll get there, but let’s not jump the gun quite 
yet.

On 1/11/17, 1:38 PM, "Michał Jastrzębski"  wrote:

So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it will
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up mess that split of kolla-ansible caused (documentation and
whatnot). Having small revolution like that again is something that
would greatly hinder our ability to deliver valuable project, and I
think for now that should be our priority.

To me, at least before we will have more than one prod-ready
deployment tool, separation of projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I am not a big fan of the separate gerrit teams we have instituted inside 
>> the Kolla project. I always believed we should have one core reviewer team 
>> responsible for all deliverables to avoid not just the appearance but the 
>> reality that each team would fragment the overall community of people 
>> working on Kolla containers and deployment tools. This is yet another reason 
>> I didn’t want to split the repositories into separate deliverables in the 
>> first place – since it further fragments the community working on Kolla 
>> deliverables.
>>
>> When we made our original mission statement, I originally wanted 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Steven Dake (stdake)
Brandon,

Your question is a mix of political and technical aspects that I am not 
permitted to answer until Monday because of my parsing of this email from Mark 
Collier:

http://lists.openstack.org/pipermail/foundation/2017-January/002446.html

I will answer you Monday after the individual board of directors elections 
conclude.

Regards
-steve


From: "Brandon B. Jozsa" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, January 11, 2017 at 12:36 PM
To: "Britt Houser (bhouser)" , "OpenStack Development 
Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

To your point Steve, then I’d imagine that Kolla would have no objection to the 
introduction of other Openstack-namespace projects that provide alternative 
image formats, integration choices, or orchestration variances for those in the 
larger community who do not want to use Kolla images. All of the Kolla-x 
projects point to this one source of truth in the end. This leads in large part to 
the many projects falling under the Kolla umbrella: Kolla, Kolla-Mesos, 
Kolla-Ansible, Kolla-Kubernetes, Kolla-Salt, and I’d assume whatever else wants 
to consume Kolla, if things continue as they are.

My immediate ask is "what are the potential negative impacts to Kolla having so 
many projects under one mission”: fragmentation of goals, misunderstanding of 
mission, increased developer debt across each inter-twined project (cross-repo 
commits and reviews), complex gating requirements? #kolla has been a place of 
spirited debate with the recent addition of Kolla-Kubernetes, and I think some 
of this is the result of the problems I’m alluding to. It’s very difficult to 
preserve what Kolla is at its core, and in turn preserve the benefits of 
something like Kubernetes which has a Runtime Interface abstraction model. It’s 
a tough sell for the larger Openstack community, and this is a critical time 
for Openstack and CNCF interoperability; would you not agree?

I’m failing to see the benefits you mention outweighing what others might see 
as potential pitfalls. My viewpoint is not news to those in Kolla. I’ve 
expressed this in Kolla already, and this is why I’m disappointed when 
Kolla-Kubernetes drops specs in favor of quicker ad-hoc IRC 
architecturally-focused discussions.

So my question now becomes; "How is Kolla addressing these issues, and what has 
Kolla been doing with the assistance of the Openstack Foundation to gain the 
confidence of those who are watching Kolla and looking for that next cool 
container project”?

Brandon B. Jozsa


On January 11, 2017 at 1:46:13 PM, Britt Houser (bhouser) 
(bhou...@cisco.com) wrote:
My sentiments exactly Michal. We’ll get there, but let’s not jump the gun quite 
yet.

On 1/11/17, 1:38 PM, "Michał Jastrzębski"  wrote:

So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it will
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up mess that split of kolla-ansible caused (documentation and
whatnot). Having small revolution like that again is something that
would greatly hinder our ability to deliver valuable project, and I
think for now that should be our priority.

To me, at least before we will have more than one prod-ready
deployment tool, separation of projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I am not a big fan of the separate gerrit teams we have instituted inside 
>> the Kolla project. I always believed we should have one core reviewer team 
>> responsible for all deliverables to avoid not just the appearance but the 
>> reality that each team would fragment the overall community of people 
>> working on Kolla containers and deployment tools. This is yet another reason 
>> I didn’t want to split the repositories into separate deliverables in the 
>> first place – since it further fragments the community working on Kolla 
>> deliverables.
>>
>> When we made our original mission statement, I originally wanted it scoped 
>> around just Ansible and Docker. Fortunately, the core review team at the 
>> time made it much more general and broad before we joined the big tent. Our 
>> mission statement permits multiple 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Brandon B. Jozsa
To your point Steve, then I’d imagine that Kolla would have no objection to the 
introduction of other Openstack-namespace projects that provide alternative 
image formats, integration choices, or orchestration variances for those in the 
larger community who do not want to use Kolla images. All of the Kolla-x 
projects point to this one source of truth in the end. This leads in large part to 
the many projects falling under the Kolla umbrella: Kolla, Kolla-Mesos, 
Kolla-Ansible, Kolla-Kubernetes, Kolla-Salt, and I’d assume whatever else wants 
to consume Kolla, if things continue as they are.

My immediate ask is "what are the potential negative impacts to Kolla having so 
many projects under one mission”: fragmentation of goals, misunderstanding of 
mission, increased developer debt across each inter-twined project (cross-repo 
commits and reviews), complex gating requirements? #kolla has been a place of 
spirited debate with the recent addition of Kolla-Kubernetes, and I think some 
of this is the result of the problems I’m alluding to. It’s very difficult to 
preserve what Kolla is at its core, and in turn preserve the benefits of 
something like Kubernetes which has a Runtime Interface abstraction model. It’s 
a tough sell for the larger Openstack community, and this is a critical time 
for Openstack and CNCF interoperability; would you not agree?

I’m failing to see the benefits you mention outweighing what others might see 
as potential pitfalls. My viewpoint is not news to those in Kolla. I’ve 
expressed this in Kolla already, and this is why I’m disappointed when 
Kolla-Kubernetes drops specs in favor of quicker ad-hoc IRC 
architecturally-focused discussions.

So my question now becomes; "How is Kolla addressing these issues, and what has 
Kolla been doing with the assistance of the Openstack Foundation to gain the 
confidence of those who are watching Kolla and looking for that next cool 
container project”?

Brandon B. Jozsa


On January 11, 2017 at 1:46:13 PM, Britt Houser (bhouser) 
(bhou...@cisco.com) wrote:

My sentiments exactly Michal. We’ll get there, but let’s not jump the gun quite 
yet.

On 1/11/17, 1:38 PM, "Michał Jastrzębski"  wrote:

So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it will
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up mess that split of kolla-ansible caused (documentation and
whatnot). Having small revolution like that again is something that
would greatly hinder our ability to deliver valuable project, and I
think for now that should be our priority.

To me, at least before we will have more than one prod-ready
deployment tool, separation of projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I am not a big fan of the separate gerrit teams we have instituted inside 
>> the Kolla project. I always believed we should have one core reviewer team 
>> responsible for all deliverables to avoid not just the appearance but the 
>> reality that each team would fragment the overall community of people 
>> working on Kolla containers and deployment tools. This is yet another reason 
>> I didn’t want to split the repositories into separate deliverables in the 
>> first place – since it further fragments the community working on Kolla 
>> deliverables.
>>
>> When we made our original mission statement, I originally wanted it scoped 
>> around just Ansible and Docker. Fortunately, the core review team at the 
>> time made it much more general and broad before we joined the big tent. Our 
>> mission statement permits multiple different orchestration tools.
>>
>> Kolla is not “themed”, at least to me. Instead it is one community with 
>> slightly different interests (some people work on Ansible, some on 
>> Kubernetes, some on containers, some on all 3, etc). If we break that into 
>> separate projects with separate PTLs, those projects may end up competing 
>> with each other (which isn’t happening now inside Kolla). I think 
>> competition is a good thing. In this case, I am of the opinion it is high 
>> time we end the competition on deployment tools related to containers and 
>> get everyone working together rather than apart. That is, unless those folks 
>> want to “work apart” which of course is their prerogative. I wouldn’t 
>> suggest merging teams today that are 

Re: [openstack-dev] [release] subscribe to the OpenStack release calendar

2017-01-11 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-01-06 15:19:35 -0500:
> 
> > On Jan 6, 2017, at 1:14 PM, Julien Danjou  wrote:
> > 
> > On Fri, Jan 06 2017, Doug Hellmann wrote:
> > 
> > Hi Doug,
> > 
> >> The link for the Ocata schedule is
> >> https://releases.openstack.org/ocata/schedule.ics
> >> 
> >> We will have a similar Pike calendar available as soon as the
> >> schedule is finalized.
> > 
> > Thank you, this is great. One question: could it be possible to have
> > only one ICS for all releases? Maybe having one per release plus a
> > "all.ics"?
> > 
> > I'm lazy I don't want to track and add each calendar every 6 months. :-)
> > 
> > --
> > Julien Danjou
> > ;; Free Software hacker
> > ;; https://julien.danjou.info
> 
> See https://review.openstack.org/417495

This patch has merged, and the new calendar is available at
https://releases.openstack.org/schedule.ics
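
For anyone who would rather script against the feed than subscribe a calendar
client, here is a minimal sketch (assuming only the requests library) that
lists the event summaries; it is a quick look, not a full ICS parser:

```
# Fetch the combined release calendar and print event names. SUMMARY lines
# are plain text in the feed, so no ICS library is needed for a rough listing.
import requests

ICS_URL = "https://releases.openstack.org/schedule.ics"

resp = requests.get(ICS_URL, timeout=30)
resp.raise_for_status()

for line in resp.text.splitlines():
    if line.startswith("SUMMARY"):
        # Lines look like "SUMMARY:Ocata-3 milestone"; keep the text after ':'.
        print(line.split(":", 1)[1])
```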

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Britt Houser (bhouser)
My sentiments exactly Michal.  We’ll get there, but let’s not jump the gun 
quite yet.

On 1/11/17, 1:38 PM, "Michał Jastrzębski"  wrote:

So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it will
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up mess that split of kolla-ansible caused (documentation and
whatnot). Having small revolution like that again is something that
would greatly hinder our ability to deliver valuable project, and I
think for now that should be our priority.

To me, at least before we will have more than one prod-ready
deployment tool, separation of projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I am not a big fan of the separate gerrit teams we have instituted 
inside the Kolla project.  I always believed we should have one core reviewer 
team responsible for all deliverables to avoid not just the appearance but the 
reality that each team would fragment the overall community of people working 
on Kolla containers and deployment tools.  This is yet another reason I didn’t 
want to split the repositories into separate deliverables in the first place – 
since it further fragments the community working on Kolla deliverables.
>>
>> When we made our original mission statement, I originally wanted it 
scoped around just Ansible and Docker.  Fortunately, the core review team at 
the time made it much more general and broad before we joined the big tent.  
Our mission statement permits multiple different orchestration tools.
>>
>> Kolla is not “themed”, at least to me.  Instead it is one community with 
slightly different interests (some people work on Ansible, some on Kubernetes, 
some on containers, some on all 3, etc).  If we break that into separate 
projects with separate PTLs, those projects may end up competing with each 
other (which isn’t happening now inside Kolla).  I think competition is a good 
thing.  In this case, I am of the opinion it is high time we end the 
competition on deployment tools related to containers and get everyone working 
together rather than apart.  That is, unless those folks want to “work apart” 
which of course is their prerogative.  I wouldn’t suggest merging teams today 
that are separate that don’t have a desire to merge.  That said, Kolla is very 
warm and open to new contributors so hopefully no more new duplicate effort 
solutions are started.
>
> It sure sounds to me like you want Kolla to "own" container deployment
> tools. As Thierry said, we aren't intended to be organized that way any
> more.
>
>> Siloing the deliverables into separate teams I believe would result in 
the competition I just mentioned, and further discord between the deployment 
tool projects in the big tent.  We need consolidation around people working 
together, not division.  Division around Kolla weakens Kolla specifically and 
doesn’t help out OpenStack all that much either.
>
> I would hope that the spirit of collaboration could extend across team
> boundaries. #WeAreOpenStack
>
> Doug
>
>>
>> The idea of branding or themes is not really relevant to me.  Instead 
this is all about the people producing and consuming Kolla.  I’d like these 
folks to work together as much as feasible.  Breaking a sub-community apart (in 
this case Kolla) into up to 4 different communities with 4 different PTLs 
sounds wrong to me.
>>
>> I hope my position is clear ☺  If not, feel free to ask any follow-up 
questions.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Thierry Carrez 
>> Organization: OpenStack
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

>> Date: Wednesday, January 11, 2017 at 4:21 AM
>> To: "openstack-dev@lists.openstack.org" 

>> Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables
>>
>> Michał Jastrzębski wrote:
>> > I created CIVS poll with options we discussed. Every core member 
should
>> > get link to poll voting, if that's not the case, please let me 
know.
>>
>> Just a quick 

[openstack-dev] [RDO][DLRN] DLRN worker downtime during the weekend

2017-01-11 Thread Javier Pena
Hi RDO,

We need to run some maintenance operations on the DLRN instance next weekend, 
starting on Friday 13 @ 19:00 UTC. These are required to reduce the storage 
usage of the master and newton workers. The impact is:

- During the weekend, no new packages will be processed for the centos-master 
and centos-newton workers
- The existing repositories will be available as usual.

I will send a follow-up e-mail once the maintenance has been finished. Please 
do not hesitate to contact me if you have any concerns.

Regards,
Javier

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Steven Dake (stdake)
Doug,

I apologize for not being able to reply inline.  Bug in Outlook.  I am probably 
going to start posting/responding on the ML with my Gmail account so I can 
properly communicate with the ML.

To your two points.

I don’t want Kolla to “own” all deployment with containers.  I want Kolla and 
its deliverables to operate as one community.  TripleO is consuming Kolla 
containers currently – and our core team is supportive of that.  Just this 
morning I approved a TripleO blueprint to enable TripleO to use Kolla 
containers.  Maybe my actions here speak louder than my words ☺

I agree we could work across project boundaries because we are one community 
(OpenStack).  I also believe it would be more difficult to do so because of 
channel siloing, meeting siloing, developer siloing, etc.

Kolla really works hard to operate within the boundaries of the 4 Opens without 
ANY exception.

Regards
-steve


-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, January 11, 2017 at 10:59 AM
To: openstack-dev 
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
> Thierry,
> 
> I am not a big fan of the separate gerrit teams we have instituted inside 
the Kolla project.  I always believed we should have one core reviewer team 
responsible for all deliverables to avoid not just the appearance but the 
reality that each team would fragment the overall community of people working 
on Kolla containers and deployment tools.  This is yet another reason I didn’t 
want to split the repositories into separate deliverables in the first place – 
since it further fragments the community working on Kolla deliverables.
> 
> When we made our original mission statement, I originally wanted it 
scoped around just Ansible and Docker.  Fortunately, the core review team at 
the time made it much more general and broad before we joined the big tent.  
Our mission statement permits multiple different orchestration tools.
> 
> Kolla is not “themed”, at least to me.  Instead it is one community with 
slightly different interests (some people work on Ansible, some on Kubernetes, 
some on containers, some on all 3, etc).  If we break that into separate 
projects with separate PTLs, those projects may end up competing with each 
other (which isn’t happening now inside Kolla).  I think competition is a good 
thing.  In this case, I am of the opinion it is high time we end the 
competition on deployment tools related to containers and get everyone working 
together rather than apart.  That is, unless those folks want to “work apart” 
which of course is their prerogative.  I wouldn’t suggest merging teams today 
that are separate that don’t have a desire to merge.  That said, Kolla is very 
warm and open to new contributors so hopefully no more new duplicate effort 
solutions are started.

It sure sounds to me like you want Kolla to "own" container deployment
tools. As Thierry said, we aren't intended to be organized that way any
more.

> Siloing the deliverables into separate teams I believe would result in 
the competition I just mentioned, and further discord between the deployment 
tool projects in the big tent.  We need consolidation around people working 
together, not division.  Division around Kolla weakens Kolla specifically and 
doesn’t help out OpenStack all that much either.

I would hope that the spirit of collaboration could extend across team
boundaries. #WeAreOpenStack

Doug

> 
> The idea of branding or themes is not really relevant to me.  Instead 
this is all about the people producing and consuming Kolla.  I’d like these 
folks to work together as much as feasible.  Breaking a sub-community apart (in 
this case Kolla) into up to 4 different communities with 4 different PTLs 
sounds wrong to me.
> 
> I hope my position is clear ☺  If not, feel free to ask any follow-up 
questions.
> 
> Regards
> -steve
> 
> -Original Message-
> From: Thierry Carrez 
> Organization: OpenStack
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

> Date: Wednesday, January 11, 2017 at 4:21 AM
> To: "openstack-dev@lists.openstack.org" 

> Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables
> 
> Michał Jastrzębski wrote:
> > I created CIVS poll with options we discussed. Every core member 
should
> > get link to poll voting, if that's not the case, please let me know.
> 
> Just a quick sidenote to explain how the "big-tent" model of 
governance
> plays in here...

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Michał Jastrzębski
So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it will
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up mess that split of kolla-ansible caused (documentation and
whatnot). Having small revolution like that again is something that
would greatly hinder our ability to deliver valuable project, and I
think for now that should be our priority.

To me, at least before we will have more than one prod-ready
deployment tool, separation of projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I am not a big fan of the separate gerrit teams we have instituted inside 
>> the Kolla project.  I always believed we should have one core reviewer team 
>> responsible for all deliverables to avoid not just the appearance but the 
>> reality that each team would fragment the overall community of people 
>> working on Kolla containers and deployment tools.  This is yet another 
>> reason I didn’t want to split the repositories into separate deliverables in 
>> the first place – since it further fragments the community working on Kolla 
>> deliverables.
>>
>> When we made our original mission statement, I originally wanted it scoped 
>> around just Ansible and Docker.  Fortunately, the core review team at the 
>> time made it much more general and broad before we joined the big tent.  Our 
>> mission statement permits multiple different orchestration tools.
>>
>> Kolla is not “themed”, at least to me.  Instead it is one community with 
>> slightly different interests (some people work on Ansible, some on 
>> Kubernetes, some on containers, some on all 3, etc).  If we break that into 
>> separate projects with separate PTLs, those projects may end up competing 
>> with each other (which isn’t happening now inside Kolla).  I think 
>> competition is a good thing.  In this case, I am of the opinion it is high 
>> time we end the competition on deployment tools related to containers and 
>> get everyone working together rather than apart.  That is, unless those 
>> folks want to “work apart” which of course is their prerogative.  I wouldn’t 
>> suggest merging teams today that are separate that don’t have a desire to 
>> merge.  That said, Kolla is very warm and open to new contributors so 
>> hopefully no more new duplicate effort solutions are started.
>
> It sure sounds to me like you want Kolla to "own" container deployment
> tools. As Thierry said, we aren't intended to be organized that way any
> more.
>
>> Siloing the deliverables into separate teams I believe would result in the 
>> competition I just mentioned, and further discord between the deployment 
>> tool projects in the big tent.  We need consolidation around people working 
>> together, not division.  Division around Kolla weakens Kolla specifically 
>> and doesn’t help out OpenStack all that much either.
>
> I would hope that the spirit of collaboration could extend across team
> boundaries. #WeAreOpenStack
>
> Doug
>
>>
>> The idea of branding or themes is not really relevant to me.  Instead this 
>> is all about the people producing and consuming Kolla.  I’d like these folks 
>> to work together as much as feasible.  Breaking a sub-community apart (in 
>> this case Kolla) into up to 4 different communities with 4 different PTLs 
>> sounds wrong to me.
>>
>> I hope my position is clear ☺  If not, feel free to ask any follow-up 
>> questions.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Thierry Carrez 
>> Organization: OpenStack
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Wednesday, January 11, 2017 at 4:21 AM
>> To: "openstack-dev@lists.openstack.org" 
>> Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables
>>
>> Michał Jastrzębski wrote:
>> > I created CIVS poll with options we discussed. Every core member should
>> > get link to poll voting, if that's not the case, please let me know.
>>
>> Just a quick sidenote to explain how the "big-tent" model of governance
>> plays in here...
>>
>> In the previous project structure model, we had "programs". If you
>> wanted to do networking stuff, you had to join the Networking program
>> (neutron). If you worked on object storage, you had to join the Object
>> 

Re: [openstack-dev] [nova] placement/resource providers update 7

2017-01-11 Thread Chris Dent

On Fri, 6 Jan 2017, Chris Dent wrote:


## can_host, aggregates in filtering

There's still some confusion (from at least me) on whether the
can_host field is relevant when making queries to filter resource
providers. Similarly, when requesting resource providers to satisfy a
set of resources, we don't (unless I've completely missed it) return
resource providers (as compute nodes) that are associated with other
resource providers (by aggregate) that can satisfy a resource
requirement. Feels like we need to work backwards from a test or use
case and see what's missing.


At several points throughout the day I've been talking with edleafe
about this to see whether "knowing about aggregates (or can_host)" when
making a request to `GET /resource_providers?resources=`
needs to be dealt with on a scale of now, soon, later.
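
For concreteness, the request under discussion would look roughly like the
sketch below. It is an illustration only: the endpoint and token are
placeholders, and the resources filter itself is part of the in-flight work
referenced at the end of this mail, so the microversion is an assumption.

```
# Rough sketch of the filtering call being discussed; endpoint, token and
# microversion header are placeholders/assumptions, not settled API.
import requests

PLACEMENT = "http://placement.example.com/placement"  # assumed endpoint
TOKEN = "a-valid-keystone-token"                       # assumed credential

resp = requests.get(
    PLACEMENT + "/resource_providers",
    headers={
        "X-Auth-Token": TOKEN,
        # placement is microversioned; the resources filter needs whichever
        # version it lands in once the work below merges.
        "OpenStack-API-Version": "placement 1.4",
    },
    params={"resources": "VCPU:1,MEMORY_MB:512,DISK_GB:10"},
)
resp.raise_for_status()
for rp in resp.json()["resource_providers"]:
    print(rp["uuid"], rp["name"])
```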

After much confusion I think we've established that for now we don't
need to. But we need to confirm so I said I'd write something down.

The basis for this conclusion is from three assumptions:

* The value of 'local_gb' on the compute_node object is any disk the
  compute_node can see/use and the concept of associating with shared
  disk by aggregates is not something that is real yet[0].

* Any query for resources from the scheduler client is going to
  include a VCPU requirement of at least one (meaning that every
  resource provider returned will be a compute node[1]).

* Claiming the consumption of some of that local_gb by the resource
  tracker is the resource tracker's problem and not something we're
  talking about here[2].

If all that's true, then we're getting pretty close for near term
joy on limiting the number of hosts the filter scheduler needs to
filter[3].

If it's not true (for the near term), can someone explain why not
and what we need to do to fix it?

In the longer term:

Presumably the resource tracker will start reporting inventory
without DISK_GB when using shared disk, and shared disk will be
managed via aggregate associations. When that happens, the query
to GET /resource_providers will need a way to say "only give me
compute nodes that can either satisfy this resource request
directly or via associated stuff". Something tidier than:

GET 
/resource_providers?resources:_only_want_capable_or_associated_compute_nodes=True

The techniques to do that, if I understand correctly, are in an
email from Jay that some of us received a while go with a subject of
"Some attachments to help with resource providers querying".
Butterfly joins and such like.

Thoughts, questions, clarifications?

[0] This is different from the issue with allocations not needing to
be recorded when the instance has non-local disk (is volume backed):
https://review.openstack.org/#/c/407180/ . Here we are talking about
recording compute node inventory.

[1] This ignores for the moment that unless someone has been playing
around there are no resource providers being created in the
placement API that are not compute nodes.

[2] But for reference will presumably come from the work started
here https://review.openstack.org/#/c/407309/ .

[3] That work starts here: https://review.openstack.org/#/c/392569/


--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
> Thierry,
> 
> I am not a big fan of the separate gerrit teams we have instituted inside the 
> Kolla project.  I always believed we should have one core reviewer team 
> responsible for all deliverables to avoid not just the appearance but the 
> reality that each team would fragment the overall community of people working 
> on Kolla containers and deployment tools.  This is yet another reason I 
> didn’t want to split the repositories into separate deliverables in the first 
> place – since it further fragments the community working on Kolla 
> deliverables.
> 
> When we made our original mission statement, I originally wanted it scoped 
> around just Ansible and Docker.  Fortunately, the core review team at the 
> time made it much more general and broad before we joined the big tent.  Our 
> mission statement permits multiple different orchestration tools.
> 
> Kolla is not “themed”, at least to me.  Instead it is one community with 
> slightly different interests (some people work on Ansible, some on 
> Kubernetes, some on containers, some on all 3, etc).  If we break that into 
> separate projects with separate PTLs, those projects may end up competing 
> with each other (which isn’t happening now inside Kolla).  I think 
> competition is a good thing.  In this case, I am of the opinion it is high 
> time we end the competition on deployment tools related to containers and get 
> everyone working together rather than apart.  That is, unless those folks 
> want to “work apart” which of course is their prerogative.  I wouldn’t 
> suggest merging teams today that are separate that don’t have a desire to 
> merge.  That said, Kolla is very warm and open to new contributors so 
> hopefully no more new duplicate effort solutions are started.

It sure sounds to me like you want Kolla to "own" container deployment
tools. As Thierry said, we aren't intended to be organized that way any
more.

> Siloing the deliverables into separate teams I believe would result in the 
> competition I just mentioned, and further discord between the deployment tool 
> projects in the big tent.  We need consolidation around people working 
> together, not division.  Division around Kolla weakens Kolla specifically and 
> doesn’t help out OpenStack all that much either.

I would hope that the spirit of collaboration could extend across team
boundaries. #WeAreOpenStack

Doug

> 
> The idea of branding or themes is not really relevant to me.  Instead this is 
> all about the people producing and consuming Kolla.  I’d like these folks to 
> work together as much as feasible.  Breaking a sub-community apart (in this 
> case Kolla) into up to 4 different communities with 4 different PTLs sounds 
> wrong to me.
> 
> I hope my position is clear ☺  If not, feel free to ask any follow-up 
> questions.
> 
> Regards
> -steve
> 
> -Original Message-
> From: Thierry Carrez 
> Organization: OpenStack
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Wednesday, January 11, 2017 at 4:21 AM
> To: "openstack-dev@lists.openstack.org" 
> Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables
> 
> Michał Jastrzębski wrote:
> > I created CIVS poll with options we discussed. Every core member should
> > get link to poll voting, if that's not the case, please let me know.
> 
> Just a quick sidenote to explain how the "big-tent" model of governance
> plays in here...
> 
> In the previous project structure model, we had "programs". If you
> wanted to do networking stuff, you had to join the Networking program
> (neutron). If you worked on object storage, you had to join the Object
> Storage program (swift). The main issue with this model is that it
> prevented alternate approaches from emerging (as a program PTL could
> just refuse its emergence to continue to "own" that space). It also
> created weird situations where there would be multiple distinct groups
> of people in a program, but a single PTL to elect to represent them all.
> That created unnecessary political issues within programs and tension
> around PTL election.
> 
> Part of the big-tent project structure reform was to abolish programs
> and organize our work around "teams", rather than "themes". Project
> teams should be strongly aligned with a single team of people that work
> together. That allowed some amount of competition to emerge (we still
> try to avoid "gratuitous duplication of effort"), but most importantly
> made sure groups of people could "own" their work without having to
> defer to an outside core team or PTL. So if you have a distinct team, it
> should be its own separate project team with its own PTL. There is no
> program or namespace anymore. 

Re: [openstack-dev] [cinder] Deprecate Cinder Linux SMB driver

2017-01-11 Thread Sean McGinnis
On Wed, Jan 11, 2017 at 05:28:16PM +, Lucian Petrut wrote:
> Hi,
> 
> We're planning to deprecate the Cinder Linux SMB driver. We're taking 
> this decision mostly because of its limitations and lack of demand, 
> unlike the Windows SMB driver which is largely adopted and the current 
> go-to Cinder driver in Hyper-V deployments.
> 
> We're going to mark it as unsupported for Ocata, completely removing it 
> in Pike. Our CI will continue to send reports for this driver until it's 
> completely removed. We'll update the release notes and related 
> documentation accordingly.
> 
> My question is: are there any other requirements on our side in this 
> process? Do we need a spec for this?
> 

Hi Lucian,

Thanks for handling this. This is much better than just leaving it to
die on its own. I appreciate that.

No blueprint or spec is required from my end. Just submit a patch
marking it as unsupported, then we can remove it in Pike.

Thanks!

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Deprecate Cinder Linux SMB driver

2017-01-11 Thread Lucian Petrut
Hi,

We're planning to deprecate the Cinder Linux SMB driver. We're taking 
this decision mostly because of its limitations and lack of demand, 
unlike the Windows SMB driver which is largely adopted and the current 
go-to Cinder driver in Hyper-V deployments.

We're going to mark it as unsupported for Ocata, completely removing it 
in Pike. Our CI will continue to send reports for this driver until it's 
completely removed. We'll update the release notes and related 
documentation accordingly.

My question is: are there any other requirements on our side in this 
process? Do we need a spec for this?

Regards,
Lucian Petrut
Cloudbase Solutions SRL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] 2017-1-11 policy meeting

2017-01-11 Thread Lance Bragstad
Hey folks,

In case you missed the policy meeting today, we had a good discussion [0]
around incorporating keystone's policy into code using the Nova approach.

Keystone is in a little bit of a unique position since we maintain two
different policy files [1] [2], and there were a lot of questions around
why we have two. This same topic came up in a recent keystone meeting, and
we wanted to loop Henry Nash into the conversation, since I believe he
spearheaded a lot of the original policy.v3cloudsample work.

Let's see if we can air out some of that tribal knowledge and answer a
couple of questions.

What was the main motivation for introducing policy.v3cloudsample.json?

Is it possible to consolidate the two?


[0]
http://eavesdrop.openstack.org/meetings/policy/2017/policy.2017-01-11-16.00.log.html
[1]
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json
[2] https://github.com/openstack/keystone/blob/master/etc/policy.json
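
One way to ground that discussion is to diff the two rule sets directly. A
small sketch, assuming local copies of the two files linked at [1] and [2]:

```
# Compare which policy targets exist in each file and which targets carry
# different rules. Assumes the two files have been downloaded locally under
# these names.
import json

with open("policy.json") as f:
    base = json.load(f)
with open("policy.v3cloudsample.json") as f:
    v3cloud = json.load(f)

only_base = sorted(set(base) - set(v3cloud))
only_v3 = sorted(set(v3cloud) - set(base))
changed = sorted(k for k in set(base) & set(v3cloud) if base[k] != v3cloud[k])

print("targets only in policy.json:", only_base)
print("targets only in policy.v3cloudsample.json:", only_v3)
print("targets with different rules:", changed)
```
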
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-11 Thread Clint Byrum
Excerpts from Thomas Herve's message of 2017-01-11 08:50:19 +0100:
> On Tue, Jan 10, 2017 at 10:41 PM, Clint Byrum  wrote:
> > Excerpts from Zane Bitter's message of 2017-01-10 15:28:04 -0500:
> >> location is a required property:
> >>
> >> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image
> >>
> >> The resource type literally does not do anything else but expose a Heat
> >> interface to a feature of Glance that no longer exists in v2. That's
> >> fundamentally why "add v2 support" has been stalled for so long ;)
> >>
> >
> > I think most of this has been beating around the bush, and the statement
> > above is the heart of the issue.
> >
> > The functionality was restricted and mostly removed from Glance for a
> > reason. Heat users will have to face that reality just like users of
> > other orchestration systems have to.
> >
> > If a cloud has v1.. great.. take a location.. use it. If they have v2..
> > location explodes. If you want to get content in to that image, well,
> > other systems have to deal with this too. Ansible's os_image will upload
> > a local file to glance for instance. Terraform doesn't even include
> > image support.
> >
> > So the way to go is likely to just make location optional, and start
> > to use v2 when the catalog says to. From there, Heat can probably help
> > make the v2 API better, and perhaps add support to to the Heat API to
> > tell the user where they can upload blobs of data for Heat to then feed
> > into Glance.
> 
> Making location optional doesn't really make sense. We don't have any
> mechanism in a template to upload data, so it would just create an
> empty shell that you can't use to boot instances from.
> 
> I think this is going where I thought it would: let's not do anything.
> The image resource is there for v1 compatibility, but there is no
> point to have a v2 resource, at least right now.

Agreed, not much point w/o upload facilities.

So what about adding that to Heat's API? A way to tell the user "in
order to create your stack I'll need you to upload these data blobs"
would also be generically useful for any large data blob that resources
would want, such as swift files.
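
For reference, the upload path Glance v2 expects today is a two-step
create-then-upload, which is exactly the part a template has no way to
express. A rough sketch with python-glanceclient, where the auth values are
placeholder assumptions:

```
# Glance v2 two-step flow a Heat template cannot model today: create the
# image record, then push the bits in a separate call. Auth details below
# are illustrative assumptions.
from keystoneauth1 import loading, session
from glanceclient import Client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3",            # assumed
    username="demo", password="secret",              # assumed
    project_name="demo",
    user_domain_id="default", project_domain_id="default",
)
sess = session.Session(auth=auth)
glance = Client("2", session=sess)

image = glance.images.create(
    name="cirros", disk_format="qcow2", container_format="bare")
with open("cirros.qcow2", "rb") as data:
    glance.images.upload(image.id, data)  # the step templates cannot express
```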

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] updating to pycryptodome from pycrypto

2017-01-11 Thread Ian Cordasco
-Original Message-
From: Matthew Thode 
Reply: prometheanf...@gentoo.org ,
OpenStack Development Mailing List (not for usage questions)

Date: January 11, 2017 at 04:53:41
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] updating to pycryptodome from pycrypto

> So, pycrypto decided to rename themselves a while ago. At the same time
> they did an ABI change. This is causing projects that dep on them to
> have to handle both at the same time. While some projects have
> migrated, most have not.
>
> A problem has come up where a project has a CVE (pysaml2) and the fix is
> only in versions after they changed to pycryptodome. This means that, in
> order to consume the fix in a Python-native way, the pycrypto
> dependency would need to be updated to pycryptodome in all projects in the
> same namespace that pysaml2 is installed into.
>
> Possible solutions:
>
> update everything to pycryptodome
> * would be the best going forward
> * a ton of work very late in the cycle
>
> have upstream pysaml2 release a fix based on the code before the change
> * less work
> * should still circle around and update the world in pike
> * 4.0.2 was the last release; 4.0.3 was the change
> * would necessitate a 4.0.2.1 release
> * tag was removed, can hopefully be recovered for checkout/branch
>
>
> Here's the upstream bug to browse at your leisure :)
>
> https://github.com/rohe/pysaml2/issues/366

I don't think pycrypto actually willfully renamed itself. [1] As I
understand it, pycryptodome is a fork of pycrypto made after pycrypto
decided that they wanted to tell people to use pyca/cryptography
instead. Frankly, given pycrypto's history (and the history that
pycryptodome has probably inherited), I'd suspect that the best effort
for those of us interested is to help pysaml2 express the deficits it
has with cryptography so it can move to a better project. If there are
no deficits, then we should focus on helping pysaml2 port to
cryptography.


[1]: I'm verifying this with some people who know better
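
(As an aside, part of what makes this migration awkward is that pycrypto and
pycryptodome both install the same top-level "Crypto" namespace, so only one of
them can be present in a given environment. A hedged sketch of how to tell which
one an environment actually has -- the distinguishing factor is the version,
since pycrypto stopped at 2.6.x and pycryptodome releases are 3.x:)

```
# Hedged sketch: both packages expose the "Crypto" package, so the
# reported version tells you which fork is installed.
import Crypto

print(Crypto.__version__)
if Crypto.__version__.startswith("3."):
    print("pycryptodome (the fork) is installed")
else:
    print("legacy pycrypto (2.6.x or older) is installed")
```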

Cheers,
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] glance issue at QA team meeting on 12 January

2017-01-11 Thread Brian Rosmaita
Hello QA Team,

There wasn't an agenda for 12 January on the Meetings/QATeamMeeting page
on the wiki, so I took the liberty of creating one.  I added an item
under the "Tempest" section to discuss a patch to modify a Glance test
that is currently blocking a few Glance Ocata priorities (community
images and rolling upgrades).

I put the following link on the agenda to provide some background about
the issue and the subsequent discussion:
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109817.html

So far a few QA team members have weighed in on the discussion, either
on the ML or on the patch, but I need to get an official decision from
the QA team on whether the patch is acceptable or not so that we can get
moving.  As you know, O-3 is fast approaching!

https://review.openstack.org/#/c/414261/

thanks,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] diskimage-builder Package grub-pc is not available error running on Fedora 25 AArch64

2017-01-11 Thread dmarlin


I am running Fedora 25 on a 64-bit ARM (AArch64) host, and tried testing 
the latest (F26) version of diskimage-builder,


  # cat /etc/redhat-release
  Fedora release 25 (Twenty Five)

  # rpm -q diskimage-builder
  diskimage-builder-1.26.1-1.fc26.noarch

but I encountered an error with the following command:

  disk-image-create -a arm64 -o test.qcow2 vm ubuntu
:

  + install-packages -m bootloader grub-pc
  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package grub-pc is not available, but is referred to by another package.
  This may mean that the package is missing, has been obsoleted, or
  is only available from another source
  However the following packages replace it:
grub2-common grub-common

  E: Package 'grub-pc' has no installation candidate


Is this a known issue, or am I possibly just using the wrong (or missing 
an) option when running this command?


If this is a real issue and not already reported, please let me know if 
I need to file a ticket.
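
For what it's worth, a hedged way to see what the Ubuntu arm64 archive offers
in place of the BIOS-only grub-pc package (run inside the arm64 Ubuntu
chroot/image; output is illustrative):

```
# grub-pc is the x86 BIOS flavour of GRUB; on arm64 only the EFI build exists.
apt-cache search --names-only '^grub-efi'
# typically lists grub-efi-arm64 and related packages on this architecture
```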



Thank you,

d.marlin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Installation/Deployment Docs

2017-01-11 Thread Tim Hinrichs
The reason we have devstack instructions under Users is that they are so
easy and make it simple to test drive Congress. To test drive you need at
least a couple of services besides Congress, which makes devstack a good
fit.

But maybe users don't care about install at all. Operators care about
install and upgrade. Users care about data sources, policies, etc.

Tim
On Tue, Jan 10, 2017 at 2:24 PM Aimee Ukasick 
wrote:

> Hi all. While looking at the installation docs in preparation for
> scripting and testing Congress installation
> (https://bugs.launchpad.net/congress/+bug/1651928), I noticed there are
> installation instructions in two places:  1) For Users: Congress
> Introduction and Installation; and 2) For Operators: Deployment. The
> "For Users" section details Devstack as well as Standalone installation.
>
> I would like to rearrange the content: 1) move README.rst/4.1
Devstack-install and 4.3 Debugging unit tests to the For
> Developers/Contributing section; 2) move README.rst/4.2 Standalone
> install and 4.4 Upgrade to the For Operators/Deployment section. I think
> this  would make it easier for end users to create an installation
> script or validate an existing script.
>
> Any objections or thoughts?
>
> Thanks.
>
> --
>
> Aimee Ukasick, AT&T Open Source
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] openstack console port from dashboard

2017-01-11 Thread yang sheng
Hi ALL

I have openstack liberty running and everything works fine.

The vm console from dashboard is also working fine using port 6080.

When I click console, the URL is:
http://controller_vip:6080/vnc_auto.html?token=XXX=VM_name(openstack_id)

I want to customize the dashboard console port to another port, say 6090, to:
http://controller_vip:6090
/vnc_auto.html?token=XXX=VM_name(openstack_id)

I tried to change all the values from 6080 to 6090 in nova.conf based on
http://docs.openstack.org/admin-guide/compute-remote-console-access.html,
especially "novncproxy_base_url"

However, when I run netstat, the controller is still listening on port
6080, and when I click console from the dashboard, the URL still uses port 6080.

Is there any other config I need to change?
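
(For reference, a minimal sketch of the options that appear to be involved,
assuming a Liberty-era nova.conf where the VNC options still live in [DEFAULT];
names and addresses are only illustrative, and nova-novncproxy plus nova-compute
need a restart afterwards:)

```
[DEFAULT]
# port the noVNC proxy itself listens on (this is what netstat should show)
novncproxy_port = 6090
novncproxy_host = 0.0.0.0
# URL Horizon hands out when "Console" is clicked
novncproxy_base_url = http://controller_vip:6090/vnc_auto.html
```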

Thanks
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [openstack-ansible] Can someone run tomorrow's (2016-01-12) meeting for me?

2017-01-11 Thread Major Hayden
On 01/11/2017 10:08 AM, Alexandra Settle wrote:
> I can run the meeting tomorrow ☺

Thanks so much, Alex! :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptls][goals] python3 devstack+functional tests

2017-01-11 Thread Davanum Srinivas
Team,

To recap:
* devstack/devstack-gate/tempest/rally have all gotten updates
to support python 3.5
* We have a bunch of jobs already running as non-voting against master branches:
gate-devstack-dsvm-py35-updown-ubuntu-xenial-nv
gate-rally-dsvm-py35-cinder-nv
gate-rally-dsvm-py35-glance-nv
gate-heat-dsvm-functional-orig-mysql-lbaasv2-py35-ubuntu-xenial-nv
gate-keystone-dsvm-py35-functional-v3-only-ubuntu-xenial-nv
gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial-nv
gate-nova-tox-db-functional-py35-ubuntu-xenial
gate-rally-dsvm-py35-rally-nova-nv
* Work is being tracked at
https://etherpad.openstack.org/p/support-python3.5-functional-tests
please use that to coordinate

It should now be easy to quickly define new jobs and to spot what fails
under python3.5.

Hope this exercise over the last 2 weeks will help us all with our
collective Pike goal.

It's time to turn this over to all the individual projects. Please
don't hesitate to ping me if you need a hand with something.

Thanks,
Dims


On Sun, Jan 8, 2017 at 2:27 PM, Davanum Srinivas  wrote:
> Folks,
>
> Here's where we track the work for a little while:
> https://etherpad.openstack.org/p/support-python3.5-functional-tests
>
> There's lots of work to be done to get stuff working properly. What we
> have now is the ability to just startup a bunch of services using
> py35. Does not mean they all work.
>
> Teams that do not have functional tests in their own projects may
> still be able to look for, find and fix bugs. Example, the heat
> functional tests exercises Nova, Oslo etc. So look for tracebacks
> there and get going!
>
> This is a good chance for newbies to get involved as well. Find a
> traceback to fix, propose a fix with a python3 unit test :)
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] Non Candidacy for PTL - Pike

2017-01-11 Thread Tim Simmons
Graham,

We've been lucky to have you leading the project over the last few cycles. 
You've done really great work. I'll
always appreciate you fighting for what you believed in and trying to make 
things better for everyone.

I'll miss your heavy involvement on a professional and personal level. You are 
an awesome person and I've
greatly enjoyed working and...not working with you at all the events we've 
attended over the last few years.

<3
Tim Simmons

From: Hayes, Graham 
Sent: Monday, January 9, 2017 4:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [designate] Non Candidacy for PTL - Pike

Happy new year!

As you may have guessed from the subject, I have decided that the time
has come to step aside as PTL
for the upcoming cycle. It is unfortunate, but my work has pivoted in a
different direction over the last
year (containers all the way down man - but hey, I got part of my wish
to write Golang, just not on the
project I envisaged :) ).

As a result, I have been trying to PTL out of hours for the last cycle
and a half. Unfortunately, this has had a
bad impact on this cycle, and I don't think we should repeat the pattern.

We have done some great work over the last year or so - Worker Model,
the s/Domain/Zone work,
the new dashboard, being one of the first projects to have an external
tempest plugin and getting lost in
the west of Ireland in the aftermath of the flooding.

I can honestly say, I have enjoyed my entire time with this team, from
our first meeting in Austin, back in
the beginning of 2014, the whole way through to today. We have always
been a small team, but when I think back
to what we have produced over the last few years, I am incredibly proud.

Change is healthy, and I have been in a leadership position in Designate
longer than most, and no project should
rely on a person or persons to continue to exist.

I will stick around on IRC, and still remain a member of the core review
team, as a lot of the roadmap is still in
the heads of myself and 2 or 3 others, but my main aim will be to
document the roadmap in a single place, and not
just in thousands of etherpads.

It has been a fun journey - I have gotten to work with some great
people, see some amazing places, work on really
interesting problems and contribute to a project that was close to my heart.

This is not an easy thing to do, but I think the time is right for the
project and me to let someone else make
their stamp on the project, and bring it to the next level.

Nominations close soon [0] so please start thinking about if you would
like to run or not. If anyone has any questions
about the role, please drop me an email or ping me [1] on IRC [2]

Thank you for this opportunity to serve the community for so long, it is
not something I will forget.

- Graham

0 - https://governance.openstack.org/election/
1 - mugsie
2 - #openstack-dns


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Can someone run tomorrow's (2016-01-12) meeting for me?

2017-01-11 Thread Alexandra Settle
Hey,

I can run the meeting tomorrow ☺

Thanks,

Alex

On 1/11/17, 3:29 PM, "Major Hayden"  wrote:

Hey folks,

A conflict came up and I won't be available to run tomorrow's weekly 
meeting in IRC. Would someone else be able to take over this meeting for me?

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Steve Wilkerson
Steve,

As a contributor to a kolla-* project, I disagree with your view that Kolla
is not "themed".  The more I work in the Kolla community, it's become
apparent that Kolla is operating under the "Deploy containers" theme, and
you state as much in your reply.

I also disagree that separating the kolla-* projects somehow introduces
competition between them.  Some operators would like to use Ansible for
orchestration because it's a tool they're familiar with and love.  Others may
want to use Salt.  If kolla-puppet became a thing, I'm sure some people
love Puppet enough to run with that tool.  Tools are religious.  The
contributors who wish to work on a deployment project centered around their
favorite tool should be free to do so without influence from those who
enjoy a different orchestration tool.  We can both agree here that the end
goal is to deploy the Kolla containers specifically.  The orchestration
methods for deploying those containers goes beyond slightly different
interests.  For example, my interest lies solely in working with
kolla-kubernetes.  I have no desire to work with kolla-ansible or
kolla-{whatever else may come along} currently.  Some individuals do have
that desire, and contributing to different projects is absolutely
possible.  The argument that splitting the projects into their own team
produces competition in regards to choosing a tool doesn't make sense to me
-- the end goal is the same.  Having these deployment projects under the
Kolla umbrella is no different -- an operator will still choose the tool
they want to use.

I agree competition is a good thing, and as long as separate projects have
significant technical differences, competition should be encouraged.  It
allows the community as a whole to see what works well, what doesn't work
well, and whether we're overlooking requirements that need to be addressed.
However, to follow up your support for competition with the claim we should
end competition on deployment tools for containers confuses me.  One of the
primary concerns I have with the Kolla project currently is the idea it
needs to be the go-to project for containerized OpenStack deployments, and
everyone who wishes for such a deployment should work exclusively under
Kolla.  I will say this:  If an organization is seeking a containerized
deployment of OpenStack using the Kolla images, they should collaborate and
work closely with the Kolla community and the kolla-* deployment tool
they're aiming to use.  However, thinking the Kolla project as it exists
today is the single source of truth for a containerized deployment of
OpenStack does a huge disservice both to the community Kolla has today and
to the users who have a use case that isn't addressed by Kolla.  While
Kolla wants to be flexible and allow for any orchestration method, some
users may want something that's very targeted to their needs.  That's
perfectly okay.  Trying to force groups with different needs to work under
one umbrella project leads to distraction and will invariably lead to a
sub-optimal solution for all parties involved.

I will agree with you that Kolla is warm and open to new contributors --
I'm a new contributor, and I received encouraging support from members on
the team in regards to getting up to speed and looking at the picture of
where Kolla was going.  I'm positive that anyone who wants to contribute to
Kolla and the Kolla-* projects would be welcome all the same, regardless of
whether the projects existed under one umbrella or separately.

I honestly don't see how the Kolla-* projects existing separately sows
discord between deployment tool projects in the big tent.  Once again --
Kolla is focused entirely on deploying the Kolla images exclusively while
remaining pluggable on the orchestration tool end.  As it stands, there are
no other deployment tools that share the same mission of "Kolla images
only, using deployment tool {x}" that I'm aware of.

You mention this weakens Kolla and doesn't help OpenStack much.  On the
contrary, I see splitting the projects off as helping Kolla, Kolla-*, and
OpenStack immensely.  This reduces the strain on Kolla cores and the
proposed role they play in adding Kolla deliverables.  Kolla cores can
focus exclusively on maintaining and improving the consumable artifacts the
Kolla-* projects require.  The Kolla-* projects benefit by driving their
project forward independent of other deployment tools.  They can build a
core team focused around their use case, and they can elect a PTL that will
drive their project forward in a way that best suits their needs.  I fail
to see how one PTL overseeing different deployment tools can honestly have
the best interest of all Kolla-* projects in mind, unless you're implying
we only choose someone who's familiar with each and every one of them if
things remain the same.  Finally, as the projects evolve and are able to
better target their unique means to the same end, OpenStack benefits all
around.

I'd also like to see people 

Re: [Openstack] nova backup - instances unreachable

2017-01-11 Thread John Petrini
Mohammed,

It looks like you may be right. Just found the permissions issue in the
nova log on the compute node.

4-e8f52e4fbcfb 691caf1c10354efab3e3c8ed61b7d89a
49bc5e5bf2684bd0948d9f94c7875027 - - -] Performing standard snapshot
because direct snapshot failed: no write permission on storage pool images

I'm going to test the change and will send you all an update with the
results.
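
(For reference, a hedged sketch of the kind of permission change being tested
here; the client name "client.nova" and the pools "vms"/"images" are assumptions
to be adjusted to the actual Ceph setup:)

```
# Grant the Ceph client used by nova-compute write access to the glance
# "images" pool so the direct RBD snapshot (clone) path can succeed.
ceph auth caps client.nova \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rwx pool=images'
```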

Thank You,

___

John Petrini



>>
> Yes, we are also running Mitaka and I also read Sebastien Han's blogs ;-)
>
> our snapshots are not happening at the RBD level,
>> they are being copied and uploaded to glance which takes up a lot of space
>> and is very slow.
>>
>
> Unfortunately, that's what we are experiencing, too. I don't know if
> there's something I missed in the nova configs or somewhere else, but I'm
> relieved that I'm not the only one :-)
>
> While writing this email I searched again and found something:
>
> https://specs.openstack.org/openstack/nova-specs/specs/mitak
> a/implemented/rbd-instance-snapshots.html
>
> https://review.openstack.org/#/c/205282/
>
> It seems to be implemented already, I'm looking for the config options to
> set. If you manage to get nova to make rbd snapshots, please let me know ;-)
>
> Regards,
> Eugen
>
>
>
> Zitat von John Petrini :
>
> Hi Eugen,
>>
>> Thanks for the response! That makes a lot of sense and is what I figured
>> was going on but I missed it in the documentation. We use Ceph as well and
>> I had considered doing the snapshots at the RBD level but I was hoping
>> there was some way to accomplish this via nova. I came across this
>> Sebastien
>> Han write-up that claims this functionality was added to Mitaka:
>> http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-
>> snapshots-on-ceph-rbd/
>>
>> We are running Mitaka but our snapshots are not happening at the RBD
>> level,
>> they are being copied and uploaded to glance which takes up a lot of space
>> and is very slow.
>>
>> Have you or anyone else implemented this in Mitaka? Other than Sebastian's
>> blog I haven't found any documentation on this.
>>
>> Thank You,
>>
>> ___
>>
>> John Petrini
>>
>> On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block  wrote:
>>
>> Hi,
>>>
>>> this seems to be expected, the docs say:
>>>
>>> "Shut down the source VM before you take the snapshot to ensure that all
>>> data is flushed to disk."
>>>
>>> So if the VM is not shut down, it's frozen to prevent data loss (I
>>> guess). Depending on your storage backend, there are other ways to
>>> perform
>>> backups of your VMs.
>>> We use Ceph as backend for nova, glance and cinder. Ceph stores the
>>> disks,
>>> images and volumes as Rados block device objects. We have a backup script
>>> that creates snapshots of these RBDs, which are exported to our backup
>>> drive. This way the running VM is not stopped or frozen, the user
>>> doesn't
>>> notice any issues. Unlike a nova snapshot, the rbd snapshot is created
>>> immediately within a few seconds. After a successful backup the snapshots
>>> are removed.
>>>
>>> Hope this helps! If you are interested in Ceph, visit [1].
>>>
>>> Regards,
>>> Eugen
>>>
>>> [1] http://docs.ceph.com/docs/giant/start/intro/
>>>
>>>
>>> Zitat von John Petrini :
>>>
>>>
>>> Hello,
>>>

 I've just started experimenting with nova backup and discovered that
 there
 is a period of time during the snapshot where the instance becomes
 unreachable. Is this behavior expected during a live snapshot? Is there
 any
 way to prevent this?

 ___

 John Petrini


>>>
>>>
>>> --
>>> Eugen Block voice   : +49-40-559 51 75
>>> NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
>>> Postfach 61 03 15
>>> D-22423 Hamburg e-mail  : ebl...@nde.ag
>>>
>>> Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>>>   Sitz und Registergericht: Hamburg, HRB 90934
>>>   Vorstand: Jens-U. Mozdzen
>>>USt-IdNr. DE 814 013 983
>>>
>>>
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi
>>> -bin/mailman/listinfo/openstac
>>> k
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi
>>> -bin/mailman/listinfo/openstac
>>> k
>>>
>>>
>
>
> --
> Eugen Block voice   : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg e-mail  : ebl...@nde.ag
>
> Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>   Sitz und Registergericht: Hamburg, HRB 90934
>   Vorstand: Jens-U. Mozdzen
>USt-IdNr. DE 814 013 983
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org

[openstack-dev] [openstack-ansible] Can someone run tomorrow's (2016-01-12) meeting for me?

2017-01-11 Thread Major Hayden
Hey folks,

A conflict came up and I won't be available to run tomorrow's weekly meeting in 
IRC. Would someone else be able to take over this meeting for me?

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] placement job is busted in stable/newton (NO MORE HOSTS LEFT)

2017-01-11 Thread Jeremy Stanley
On 2017-01-10 20:03:28 -0500 (-0500), Matt Riedemann wrote:
> I'm trying to sort out failures in the placement job in stable/newton
> where the tests aren't failing but it's something in the host cleanup step
> that blows up.
> 
> Looking here I see this:
> 
> http://logs.openstack.org/57/416757/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/dfe0c38/_zuul_ansible/ansible_log.txt.gz
[...]
> 2017-01-04 23:44:42,880 p=10771 u=zuul |  fatal: [node]: FAILED! =>
> {"changed": true, "cmd": ["/tmp/05-cb20affd78a84851b47992ff129722af.sh"],
> "delta": "0:57:51.734808", "end": "2017-01-04 23:44:42.632473", "failed":
> true, "rc": 127, "start": "2017-01-04 22:46:50.897665", "stderr": "",
> "stdout": "", "stdout_lines": [], "warnings": []}
[...]

If you look in the _zuul_ansible/scripts directory you'll see that
shell script which exited nonzero is the one calling devstack-gate,
so we've got something broken near the end of the job as you
surmise. I think it might be the post_test_hook:

http://logs.openstack.org/57/416757/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/dfe0c38/logs/devstack-gate-post_test_hook.txt.gz

Looking in the nova repo, tools/hooks/post_test_hook.sh is a
relative symlink to gate/post_test_hook.sh but for some reason the
job doesn't seem to be following that. You might try recreating this
locally with the logs/reproduce.sh from that run and see if you get
the same behavior.
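
For reference, a hedged way to confirm the symlink layout described above
(paths as given for the nova repo; output shape is illustrative):

```
$ ls -l tools/hooks/post_test_hook.sh
lrwxrwxrwx ... tools/hooks/post_test_hook.sh -> ../../gate/post_test_hook.sh
```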
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] nova backup - instances unreachable

2017-01-11 Thread Mohammed Naser
Hi John,

It just works for us with Mitaka.  You might be running into issues regarding 
permissions where the Nova user might not be able to write to the images pool. 

Turn debug on in your nova-compute and snapshot a machine on it; you'll see the 
logs, and if it's turning the instance off, it's probably because your rbd snapshot failed 
(in my experience) and it fell back to the older snapshot process. 

Thanks
Mohammed 

Sent from my iPhone

> On Jan 11, 2017, at 9:22 AM, John Petrini  wrote:
> 
> Hi Eugen,
> 
> Thanks for the response! That makes a lot of sense and is what I figured was 
> going on but I missed it in the documentation. We use Ceph as well and I had 
> considered doing the snapshots at the RBD level but I was hoping there was 
> some way to accomplish this via nova. I came across this Sebastien Han 
> write-up that claims this functionality was added to Mitaka: 
> http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
> 
> We are running Mitaka but our snapshots are not happening at the RBD level, 
> they are being copied and uploaded to glance which takes up a lot of space 
> and is very slow.
> 
> Have you or anyone else implemented this in Mitaka? Other than Sebastian's 
> blog I haven't found any documentation on this.
> 
> Thank You,
> 
> ___
> 
> John Petrini
> 
>> On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block  wrote:
>> Hi,
>> 
>> this seems to be expected, the docs say:
>> 
>> "Shut down the source VM before you take the snapshot to ensure that all 
>> data is flushed to disk."
>> 
>> So if the VM is not shut down, it's frozen to prevent data loss (I guess). 
>> Depending on your storage backend, there are other ways to perform backups 
>> of your VMs.
>> We use Ceph as backend for nova, glance and cinder. Ceph stores the disks, 
>> images and volumes as Rados block device objects. We have a backup script 
>> that creates snapshots of these RBDs, which are exported to our backup 
>> drive. This way the running VM is not stopped or frozen, the user doesn't 
>> notice any issues. Unlike a nova snapshot, the rbd snapshot is created 
>> immediately within a few seconds. After a successful backup the snapshots 
>> are removed.
>> 
>> Hope this helps! If you are interested in Ceph, visit [1].
>> 
>> Regards,
>> Eugen
>> 
>> [1] http://docs.ceph.com/docs/giant/start/intro/
>> 
>> 
>> Zitat von John Petrini :
>> 
>> 
>>> Hello,
>>> 
>>> I've just started experimenting with nova backup and discovered that there
>>> is a period of time during the snapshot where the instance becomes
>>> unreachable. Is this behavior expected during a live snapshot? Is there any
>>> way to prevent this?
>>> 
>>> ___
>>> 
>>> John Petrini
>> 
>> 
>> 
>> -- 
>> Eugen Block voice   : +49-40-559 51 75
>> NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
>> Postfach 61 03 15
>> D-22423 Hamburg e-mail  : ebl...@nde.ag
>> 
>> Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>>   Sitz und Registergericht: Hamburg, HRB 90934
>>   Vorstand: Jens-U. Mozdzen
>>USt-IdNr. DE 814 013 983
>> 
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [tripleo] Adding a LateServices ResourceChain

2017-01-11 Thread Lars Kellogg-Stedman
> 2. Do the list manipulation in puppet, like we do for firewall rules
> 
> E.g see:
> 
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/ceilometer-api.yaml#L62
> 
> https://github.com/openstack/puppet-tripleo/blob/master/manifests/firewall/service_rules.pp#L32
> 
> This achieves the same logical result as the above, but it does the list
> manipulation in the puppet profile instead of t-h-t.
> 
> I think either approach would be fine, but I've got a slight preference for
> (1) as I think it may be more reusable in a future non-puppet world, e.g
> for container deployments etc where we may not always want to use puppet.
> 
> Open to other suggestions, but would either of the above solve your
> problem?

I went with (2), even though iteration in Puppet is a little funky.
Looking through the firewall rules implementation helped me
understand how the service_config_settings stuff works.

You can see the updated implementation at:

- https://review.openstack.org/#/c/417509/ (puppet-tripleo)
- https://review.openstack.org/#/c/411048/ (t-h-t)

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] nova backup - instances unreachable

2017-01-11 Thread Eugen Block

Have you or anyone else implemented this in Mitaka?


Yes, we are also running Mitaka and I also read Sebastien Han's blogs ;-)


our snapshots are not happening at the RBD level,
they are being copied and uploaded to glance which takes up a lot of space
and is very slow.


Unfortunately, that's what we are experiencing, too. I don't know if  
there's something I missed in the nova configs or somewhere else, but  
I'm relieved that I'm not the only one :-)


While writing this email I searched again and found something:

https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html

https://review.openstack.org/#/c/205282/

It seems to be implemented already, I'm looking for the config options  
to set. If you manage to get nova to make rbd snapshots, please let me  
know ;-)
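
(In case it helps, a hedged sketch of the settings the spec/blog above rely on;
the names assume an RBD-backed nova and glance and are what we are checking on
our side, not a verified recipe:)

```
# nova.conf on the compute nodes
[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = nova
rbd_secret_uuid = <libvirt secret uuid>

# glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images

[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True
```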


Regards,
Eugen


Zitat von John Petrini :


Hi Eugen,

Thanks for the response! That makes a lot of sense and is what I figured
was going on but I missed it in the documentation. We use Ceph as well and
I had considered doing the snapshots at the RBD level but I was hoping
there was some way to accomplish this via nova. I came across this Sebastien
Han write-up that claims this functionality was added to Mitaka:
http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/

We are running Mitaka but our snapshots are not happening at the RBD level,
they are being copied and uploaded to glance which takes up a lot of space
and is very slow.

Have you or anyone else implemented this in Mitaka? Other than Sebastian's
blog I haven't found any documentation on this.

Thank You,

___

John Petrini

On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block  wrote:


Hi,

this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to ensure that all
data is flushed to disk."

So if the VM is not shut down, it's frozen to prevent data loss (I
guess). Depending on your storage backend, there are other ways to perform
backups of your VMs.
We use Ceph as backend for nova, glance and cinder. Ceph stores the disks,
images and volumes as Rados block device objects. We have a backup script
that creates snapshots of these RBDs, which are exported to our backup
drive. This way the running VM is not stopped or frozen, the user doesn't
notice any issues. Unlike a nova snapshot, the rbd snapshot is created
immediately within a few seconds. After a successful backup the snapshots
are removed.

Hope this helps! If you are interested in Ceph, visit [1].

Regards,
Eugen

[1] http://docs.ceph.com/docs/giant/start/intro/


Zitat von John Petrini :


Hello,


I've just started experimenting with nova backup and discovered that there
is a period of time during the snapshot where the instance becomes
unreachable. Is this behavior expected during a live snapshot? Is there
any
way to prevent this?

___

John Petrini





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
k
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
k





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [Kuryr] kuryr-kubernetes failure

2017-01-11 Thread Agmon, Gideon (Nokia - IL)
Hi,

Per 
https://github.com/openstack/kuryr-kubernetes/blob/master/devstack/local.conf.sample
 when installing kubernetes as part of the devstack installation (this is the
local.conf default), the issue is that there are no workers (nodes) defined:
[stack@comp1 devstack]$ kubectl get node
[stack@comp1 devstack]$ 
This shows that there are no nodes, although it is a single node with active 
master, and the local node should automatically be a worker.
[stack@comp1 devstack]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
This issue causes pods to remain at "pending" state:
[stack@comp1 devstack]$ kubectl get pod
NAME READY STATUSRESTARTS   AGE
my-nginx-379829228-4031f 0/1   Pending   0  2d
[stack@comp1 devstack]$ kubectl describe pod my-nginx-379829228-4031f
  2d  9s  9883  {default-scheduler }  Warning  FailedScheduling  no nodes available to schedule pods

Is it a bug, or am I missing some configuration?
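
(A rough sketch of checks that might narrow this down, assuming the devstack
plugin is expected to start a local kubelet/hyperkube that registers the node:)

```
kubectl get nodes                     # stays empty in the case above
ps aux | grep -E 'kubelet|hyperkube'  # is a kubelet process running at all?
# the kubelet log (screen window or log file, depending on how devstack
# started it) should show whether registration against localhost:8080 failed
```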

Thanks
Gideon

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-11 Thread Thomas Herve
On Wed, Jan 11, 2017 at 3:34 PM, Emilien Macchi  wrote:
> On Wed, Jan 11, 2017 at 2:50 AM, Thomas Herve  wrote:
>> I think this is going where I thought it would: let's not do anything.
>> The image resource is there for v1 compatibility, but there is no
>> point to have a v2 resource, at least right now.
>
> If we do nothing, we force our heat-template users to keep Glance v1
> API enabled in their cloud (+ running Glance Registry service), which
> at some point doesn't make sense, since the Glance team asked to move
> forward with Glance v2 API.
>
> I would really recommend moving forward and no longer ignoring the new API 
> version.

Emilien was right: by defaulting to Glance v1, we still required it
for the image constraint, which is used everywhere like servers and
volumes. We can easily switch to v2 for this, I'll do it right away.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Steven Dake (stdake)
Thierry,

I am not a big fan of the separate gerrit teams we have instituted inside the 
Kolla project.  I always believed we should have one core reviewer team 
responsible for all deliverables to avoid not just the appearance but the 
reality that each team would fragment the overall community of people working 
on Kolla containers and deployment tools.  This is yet another reason I didn’t 
want to split the repositories into separate deliverables in the first place – 
since it further fragments the community working on Kolla deliverables.

When we made our original mission statement, I originally wanted it scoped 
around just Ansible and Docker.  Fortunately, the core review team at the time 
made it much more general and broad before we joined the big tent.  Our mission 
statement permits multiple different orchestration tools.

Kolla is not “themed”, at least to me.  Instead it is one community with 
slightly different interests (some people work on Ansible, some on Kubernetes, 
some on containers, some on all 3, etc).  If we break that into separate 
projects with separate PTLs, those projects may end up competing with each 
other (which isn’t happening now inside Kolla).  I think competition is a good 
thing.  In this case, I am of the opinion it is high time we end the 
competition on deployment tools related to containers and get everyone working 
together rather than apart.  That is, unless those folks want to “work apart” 
which of course is their prerogative.  I wouldn’t suggest merging teams today 
that are separate that don’t have a desire to merge.  That said, Kolla is very 
warm and open to new contributors so hopefully no more new duplicate effort 
solutions are started.

Siloing the deliverables into separate teams I believe would result in the 
competition I just mentioned, and further discord between the deployment tool 
projects in the big tent.  We need consolidation around people working 
together, not division.  Division around Kolla weakens Kolla specifically and 
doesn’t help out OpenStack all that much either.

The idea of branding or themes is not really relevant to me.  Instead this is 
all about the people producing and consuming Kolla.  I’d like these folks to 
work together as much as feasible.  Breaking a sub-community apart (in this 
case Kolla) into up to 4 different communities with 4 different PTLs sounds 
wrong to me.

I hope my position is clear ☺  If not, feel free to ask any follow-up questions.

Regards
-steve


-Original Message-
From: Thierry Carrez 
Organization: OpenStack
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, January 11, 2017 at 4:21 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

Michał Jastrzębski wrote:
> I created CIVS poll with options we discussed. Every core member should
> get link to poll voting, if that's not the case, please let me know.

Just a quick sidenote to explain how the "big-tent" model of governance
plays in here...

In the previous project structure model, we had "programs". If you
wanted to do networking stuff, you had to join the Networking program
(neutron). If you worked on object storage, you had to join the Object
Storage program (swift). The main issue with this model is that it
prevented alternate approaches from emerging (as a program PTL could
just refuse its emergence to continue to "own" that space). It also
created weird situations where there would be multiple distinct groups
of people in a program, but a single PTL to elect to represent them all.
That created unnecessary political issues within programs and tension
around PTL election.

Part of the big-tent project structure reform was to abolish programs
and organize our work around "teams", rather than "themes". Project
teams should be strongly aligned with a single team of people that work
together. That allowed some amount of competition to emerge (we still
try to avoid "gratuitous duplication of effort"), but most importantly
made sure groups of people could "own" their work without having to
defer to an outside core team or PTL. So if you have a distinct team, it
should be its own separate project team with its own PTL. There is no
program or namespace anymore. As a bonus side-effect, it made sure teams
would not indefinitely grow, and we all know that it's difficult to grow
core teams (and trust) beyond a certain point.

This is why we have multiple packaging project teams, each specialized
in a given package orchestration mechanism, rather than have a single
"Packaging" program with a single PTL and Ansible / Puppet / Chef
fighting in elections to get their man at the helm. This is why the
Storlets team, while 

Re: [openstack-dev] [tripleo] Release notes in TripleO

2017-01-11 Thread Ben Nemec



On 01/11/2017 08:24 AM, Emilien Macchi wrote:

On Wed, Jan 11, 2017 at 9:21 AM, Emilien Macchi  wrote:

Greetings,

OpenStack has been using reno [1] to manage release notes for a while
now and it has been proven to be super useful.
Puppet OpenStack project adopted it in Mitaka and since then we loved it.
The path to use reno in a project is not that simple. People need to
get used to adding a release note every time they submit a patch that
fixes a bug or adds a new feature. This takes time and will require
some involvement from the team.
Though the benefits are really here:
- our users will understand what new features we have developed
- our users will learn deprecations.
- developers will have a way to communicate with non-devs, expressing
the work done in TripleO (eg: to product managers, etc).

This is an example of a release note:
https://github.com/openstack/puppet-nova/blob/master/releasenotes/notes/nova-placement-30566167309fd124.yaml

And the output:
http://docs.openstack.org/releasenotes/puppet-nova/unreleased.html

So here's a plan proposal:
1) Emilien to add all CI jobs and required bits to have reno in
TripleO (already done for python-tripleoclient). I'm doing the rest of
the projects this week.


I forgot to mention which projects we would target for Ocata:
- python-tripleoclient
- puppet-tripleo
- tripleo-common
- tripleo-heat-templates
- tripleo-puppet-elements
- tripleo-ui
- tripleo-validations
- tripleo-quickstart and tripleo-quickstart-extras


+instack-undercloud

Otherwise this all sounds good to me.  Adding reno to more tripleo 
projects has been on my todo list for months.





2) Emilien with the team (please ping me if you volunteer to help) to
write Ocata release notes before the release (we have ~ one month).
3) Once 1) is done, I would ask to the team to use it.

Regarding 3), here are some thoughts:
During pike-1 and pike-2:
I wouldn't -1 a patch that doesn't have a release note, but rather
comment and give some guidance to the committer and ask if it's
something doable. Otherwise, proposing a patch on top of it with the
release note. That way, we don't force people to use it immediately,
but instead giving them some guidance on why and how to use it,
directly in the review.
During pike-3:
Start -1 patches which don't have a release note. I think 3 or 4
months is fair to learn how to use reno (it takes less than 5 min to
create a good release note).

Any feedback is highly welcome, let's make TripleO releases better!

Thanks,

[1] http://docs.openstack.org/developer/reno
--
Emilien Macchi






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-11 Thread Emilien Macchi
On Wed, Jan 11, 2017 at 2:50 AM, Thomas Herve  wrote:
> On Tue, Jan 10, 2017 at 10:41 PM, Clint Byrum  wrote:
>> Excerpts from Zane Bitter's message of 2017-01-10 15:28:04 -0500:
>>> location is a required property:
>>>
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image
>>>
>>> The resource type literally does not do anything else but expose a Heat
>>> interface to a feature of Glance that no longer exists in v2. That's
>>> fundamentally why "add v2 support" has been stalled for so long ;)
>>>
>>
>> I think most of this has been beating around the bush, and the statement
>> above is the heart of the issue.
>>
>> The functionality was restricted and mostly removed from Glance for a
>> reason. Heat users will have to face that reality just like users of
>> other orchestration systems have to.
>>
>> If a cloud has v1.. great.. take a location.. use it. If they have v2..
>> location explodes. If you want to get content in to that image, well,
>> other systems have to deal with this too. Ansible's os_image will upload
>> a local file to glance for instance. Terraform doesn't even include
>> image support.
>>
>> So the way to go is likely to just make location optional, and start
>> to use v2 when the catalog says to. From there, Heat can probably help
>> make the v2 API better, and perhaps add support to the Heat API to
>> tell the user where they can upload blobs of data for Heat to then feed
>> into Glance.
>
> Making location optional doesn't really make sense. We don't have any
> mechanism in a template to upload data, so it would just create an
> empty shell that you can't use to boot instances from.
>
> I think this is going where I thought it would: let's not do anything.
> The image resource is there for v1 compatibility, but there is no
> point to have a v2 resource, at least right now.

If we do nothing, we force our heat-template users to keep Glance v1
API enabled in their cloud (+ running Glance Registry service), which
at some point doesn't make sense, since the Glance team asked to move
forward with Glance v2 API.

I would really recommend moving forward and no longer ignoring the new API version.

> We could document how to hide the resource in Heat if you don't deploy
> Glance v1.
>
> --
> Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-11 Thread John Fulton

On 01/11/2017 12:56 AM, Saravanan KR wrote:

Thanks Emilien and Giulio for your valuable feedback. I will start
working towards finalizing the workbook and the actions required.


Saravanan,

If you can add me to the review for your workbook, I'd appreciate it. 
I'm trying to solve a similar problem, of computing THT params for HCI 
deployments in order to isolate resources between CephOSDs and 
NovaComputes, and I was also looking to use a Mistral workflow. I'll add 
you to the review of any related work, if you don't mind. Your proposal 
to get NUMA info into Ironic [1] helps me there too. Hope to see you at 
the PTG.


Thanks,
  John

[1] https://review.openstack.org/396147


would you be able to join the PTG to help us with the session on the
overcloud settings optimization?

I will come back on this, as I have not planned for it yet. If it
works out, I will update the etherpad.

Regards,
Saravanan KR


On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente  wrote:

On 01/04/2017 09:13 AM, Saravanan KR wrote:


Hello,

The aim of this mail is to ease DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters from
introspection data, given high-level input, would be feasible.

Let me briefly describe the complexity of certain parameters related
to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2] to
collect the required details from the baremetal nodes via introspection.
The proposal is to create a Mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
values of the parameters based on the introspection data. I have created
a simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are any
drawbacks with this approach, or any better approach.
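
(For illustration, a hedged sketch of the kind of parameter_defaults such a
workflow could emit; the values below are invented and would in practice be
derived from the NUMA/interface data gathered via introspection:)

```
parameter_defaults:
  NeutronDpdkCoreList: "2,3,22,23"
  ComputeHostCpusList: "0,1,20,21"
  NovaVcpuPinset: "4-19,24-39"
  NeutronDpdkSocketMemory: "2048,2048"
  NeutronDpdkMemoryChannels: "4"
  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on"
```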



hi, I am not an expert, I think John (on CC) knows more but this looks like
a good initial step to me.

once we have the workbook in good shape, we could probably integrate it in
the tripleo client/common to (optionally) trigger it before every deployment

would you be able to join the PTG to help us with the session on the
overcloud settings optimization?

https://etherpad.openstack.org/p/tripleo-ptg-pike
--
Giulio Fidente
GPG KEY: 08D733BA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] PTL non-nomination

2017-01-11 Thread Telles Nobrega
We really appreciate all the hard work that you've put into Sahara. You
will be missed leading this project.

Thanks.

On Wed, Jan 11, 2017 at 11:03 AM, Vitaly Gridnev 
wrote:

> Hello,
>
> PTL self-nomination period is going to start soon (see release schedule,
> that is Jan, 23)
> and I have an important announcement to make. I have to announce that I'm
> not going
> to run for PTL role for the Pike release, but I will continue my duties as
> a core reviewer
> of the project. For sure, I will do my best to help a new PTL to adopt
> this role.
>
> That was really good opportunity for me, and I say thanks for all members
> of our team.
>
> [0] https://releases.openstack.org/ocata/schedule.html
>
> Best regards,
> Vitaly Gridnev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Telles Nobrega | Software Engineer
Red Hat Brasil
T: +55 11 3529-6000 | M: +55 11 9 9910-1689
Av. Brigadeiro Faria Lima 3900, 8° Andar. São Paulo, Brasil.
RED HAT | TRIED. TESTED. TRUSTED. Find out why at redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Release notes in TripleO

2017-01-11 Thread Emilien Macchi
On Wed, Jan 11, 2017 at 9:21 AM, Emilien Macchi  wrote:
> Greetings,
>
> OpenStack has been using reno [1] to manage release notes for a while
> now and it has been proven to be super useful.
> Puppet OpenStack project adopted it in Mitaka and since then we loved it.
> The path to use reno in a project is not that simple. People need to
> get used to adding a release note every time they submit a patch that
> fixes a bug or adds a new feature. This takes time and will require
> some involvement from the team.
> Though the benefits are really here:
> - our users will understand what new features we have developed
> - our users will learn deprecations.
> - developers will have a way to communicate with non-devs, expressing
> the work done in TripleO (eg: to product managers, etc).
>
> This is an example of a release note:
> https://github.com/openstack/puppet-nova/blob/master/releasenotes/notes/nova-placement-30566167309fd124.yaml
>
> And the output:
> http://docs.openstack.org/releasenotes/puppet-nova/unreleased.html
>
> So here's a plan proposal:
> 1) Emilien to add all CI jobs and required bits to have reno in
> TripleO (already done for python-tripleoclient). I'm doing the rest of
> the projects this week.

I forgot to mention which projects we would target for Ocata:
- python-tripleoclient
- puppet-tripleo
- tripleo-common
- tripleo-heat-templates
- tripleo-puppet-elements
- tripleo-ui
- tripleo-validations
- tripleo-quickstart and tripleo-quickstart-extras

> 2) Emilien with the team (please ping me if you volunteer to help) to
> write Ocata release notes before the release (we have ~ one month).
> 3) Once 1) is done, I would ask to the team to use it.
>
> Regarding 3), here are some thoughts:
> During pike-1 and pike-2:
> I wouldn't -1 a patch that doesn't have a release note, but rather
> comment and give some guidance to the committer and ask if it's
> something doable. Otherwise, proposing a patch on top of it with the
> release note. That way, we don't force people to use it immediately,
> but instead giving them some guidance on why and how to use it,
> directly in the review.
> During pike-3:
> Start -1 patches which don't have a release note. I think 3 or 4
> months is fair to learn how to use reno (it takes less than 5 min to
> create a good release note).
>
> Any feedback is highly welcome, let's make TripleO releases better!
>
> Thanks,
>
> [1] http://docs.openstack.org/developer/reno
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Release notes in TripleO

2017-01-11 Thread Emilien Macchi
Greetings,

OpenStack has been using reno [1] to manage release notes for a while
now and it has been proven to be super useful.
Puppet OpenStack project adopted it in Mitaka and since then we loved it.
The path to use reno in a project is not that simple. People need to
get used to adding a release note every time they submit a patch that
fixes a bug or adds a new feature. This takes time and will require
some involvement from the team.
Though the benefits are really here:
- our users will understand what new features we have developed
- our users will learn deprecations.
- developers will have a way to communicate with non-devs, expressing
the work done in TripleO (eg: to product managers, etc).

This is an example of a release note:
https://github.com/openstack/puppet-nova/blob/master/releasenotes/notes/nova-placement-30566167309fd124.yaml

And the output:
http://docs.openstack.org/releasenotes/puppet-nova/unreleased.html
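
For those who have never written one, a note is just a small YAML file under
releasenotes/notes/, created with `reno new <some-slug>`. A hedged example of
what one looks like (section names per reno's documentation, content invented):

```
---
features:
  - Added support for configuring the Nova placement API service.
deprecations:
  - The example_foo option is deprecated and will be removed in a future release.
fixes:
  - Fixed an issue where the service was not restarted after a config change.
```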

So here's a plan proposal:
1) Emilien to add all CI jobs and required bits to have reno in
TripleO (already done for python-tripleoclient). I'm doing the rest of
the projects this week.
2) Emilien with the team (please ping me if you volunteer to help) to
write Ocata release notes before the release (we have ~ one month).
3) Once 1) is done, I would ask to the team to use it.

Regarding 3), here are some thoughts:
During pike-1 and pike-2:
I wouldn't -1 a patch that doesn't have a release note, but rather
comment and give some guidance to the committer and ask if it's
something doable. Otherwise, proposing a patch on top of it with the
release note. That way, we don't force people to use it immediately,
but instead giving them some guidance on why and how to use it,
directly in the review.
During pike-3:
Start -1 patches which don't have a release note. I think 3 or 4
months is fair to learn how to use reno (it takes less than 5 min to
create a good release note).
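For anyone who has not used reno yet, a minimal sketch of the workflow looks like
this (the note slug, file suffix and note text below are made up for illustration;
reno generates the real file name for you):

```
$ reno new add-foo-support
# creates releasenotes/notes/add-foo-support-<randomsuffix>.yaml, e.g.:

features:
  - |
    Added support for deploying the hypothetical foo service.
upgrade:
  - |
    A new (hypothetical) parameter ``foo_enabled`` defaults to false,
    so existing deployments are not affected.
```

You commit the YAML file together with your patch, and the docs job renders it
into the release notes page automatically.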

Any feedback is highly welcome, let's make TripleO releases better!

Thanks,

[1] http://docs.openstack.org/developer/reno
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Heat memory usage in the TripleO gate during Ocata

2017-01-11 Thread Zane Bitter

On 06/01/17 16:58, Emilien Macchi wrote:
>> It's worth reiterating that TripleO still disables convergence in the
>> undercloud, so these are all tests of the legacy code path. It would be
>> great if we could set up a non-voting job on t-h-t with convergence enabled
>> and start tracking memory use over time there too. As a first step, maybe we
>> could at least add an experimental job on Heat to give us a baseline?
>
> +1. We haven't made any huge changes into that direction, but having
> some info would be great.
>
> +1 too. I volunteer to do it.


Emilien kindly set up the experimental job for us, so we now have a 
baseline: https://review.openstack.org/#/c/418583/


From that run, total memory usage by Heat was 2.32GiB. That's a little 
lower than the peak that occurred near the end of Newton development for 
the legacy path, but still more than double the current legacy path 
usage (0.90GiB on the job that ran for that same review). So we have 
work to do.


I still expect storing output values in the database at the time 
resources are created/updated, rather than generating them on the fly, 
will create the biggest savings. There may be other infelicities we can 
iron out to get some more wins as well.


It's worth noting for the record that convergence is an architecture 
designed to allow arbitrary scale-out, even at the cost of CPU/memory 
performance (a common trade-off). Thus TripleO, which combines an 
enormous number of stacks and resources with running on a single 
undercloud server, represents the worst case.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-11 Thread gordon chung


On 11/01/17 08:06 AM, William M Edmonds wrote:
>
> After discussing with my team, I think we will need to propose a revert.
> The deprecation process was not followed correctly here. We will start
> working on moving to panko, but we are not sure we can contain that for
> Ocata. Please follow the deprecation process correctly in future and we
> can avoid this hassle for everyone.

the above is not correct.

prior to you sending this email, i had already made an effort to revert 
the patch because i chose to look at the ceilometer+panko integration 
and saw a gap i'd like tested first. i encourage you to do this next 
time rather than sending: 'i didn't try it but...'. understandably you 
have limited resources and i believe no one understands this more than 
the Telemetry team.

regards,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] PTL non-nomination

2017-01-11 Thread Vitaly Gridnev
Hello,

The PTL self-nomination period is going to start soon (see the release schedule
[0]; that is Jan 23) and I have an important announcement to make: I'm not going
to run for the PTL role for the Pike release, but I will continue my duties as a
core reviewer of the project. For sure, I will do my best to help the new PTL
settle into this role.

That was a really good opportunity for me, and I want to thank all the members
of our team.

[0] https://releases.openstack.org/ocata/schedule.html

Best regards,
Vitaly Gridnev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-11 Thread Miguel Angel Ajo Pelayo
Armando, thank you very much for all the work you've done as PTL,
my best wishes, and happy to know that you'll be around!

Best regards,
Miguel Ángel.


On Wed, Jan 11, 2017 at 1:52 AM, joehuang  wrote:

> Sad to know that you will step down from Neutron PTL. Had several f2f talk
> with you, and got lots of valuable feedback from you. Thanks a lot!
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Armando M. [arma...@gmail.com]
> *Sent:* 09 January 2017 22:11
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [neutron] PTL nominations deadline and
> non-candidacy
>
> Hi neutrinos,
>
> The PTL nomination week is fast approaching [0], and as you might have
> guessed by the subject of this email, I am not planning to run for Pike. If
> I look back at [1], I would like to think that I was able to exercise the
> influence on the goals I set out with my first self-nomination [2].
>
> That said, when it comes to a dynamic project like neutron one can never
> claim to be *done done* and for this reason, I will continue to be part of
> the neutron core team, and help the future PTL drive the next stage of the
> project's journey.
>
> I must admit, I don't write this email lightly, however I feel that it is
> now the right moment for me to step down, and give someone else the
> opportunity to grow in the amazing role of neutron PTL! I have certainly
> loved every minute of it!
>
> Cheers,
> Armando
>
> [0] https://releases.openstack.org/ocata/schedule.html
> [1] https://review.openstack.org/#/q/project:openstack/elect
> ion+owner:armando-migliaccio
> [2] https://review.openstack.org/#/c/223764/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-11 Thread Julien Danjou
On Wed, Jan 11 2017, William M Edmonds wrote:

> After discussing with my team, I think we will need to propose a revert.
> The deprecation process was not followed correctly here. We will start
> working on moving to panko, but we are not sure we can contain that for
> Ocata. Please follow the deprecation process correctly in future and we can
> avoid this hassle for everyone.

I see nobody from IBM contributing to Telemetry, so I find it hard to
read that kind of statement.

There's also nobody working on Panko itself, so I don't think it's fair
to ask for a revert and for the Ceilometer team to continue maintaining
software nobody wants to.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] placement job is busted in stable/newton (NO MORE HOSTS LEFT)

2017-01-11 Thread Sylvain Bauza


Le 11/01/2017 02:03, Matt Riedemann a écrit :
> I'm trying to sort out failures in the placement job in stable/newton
> job where the tests aren't failing but it's something in the host
> cleanup step that blows up.
> 
> Looking here I see this:
> 
> http://logs.openstack.org/57/416757/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/dfe0c38/_zuul_ansible/ansible_log.txt.gz
> 
> 
> 2017-01-04 22:46:50,761 p=10771 u=zuul |  changed: [node] => {"changed":
> true, "checksum": "7f4d51086f4bc4de5ae6d83c00b0e458b8606aa2", "dest":
> "/tmp/05-cb20affd78a84851b47992ff129722af.sh", "gid": 3001, "group":
> "jenkins", "md5sum": "2de9baa70e4d28bbcca550a17959beab", "mode": "0555",
> "owner": "jenkins", "size": 647, "src":
> "/tmp/tmpz_guiR/.ansible/remote_tmp/ansible-tmp-1483570010.54-207083993908564/source",
> "state": "file", "uid": 3000}
> 2017-01-04 22:46:50,775 p=10771 u=zuul |  TASK [command generated from
> JJB] **
> 2017-01-04 23:44:42,880 p=10771 u=zuul |  fatal: [node]: FAILED! =>
> {"changed": true, "cmd":
> ["/tmp/05-cb20affd78a84851b47992ff129722af.sh"], "delta":
> "0:57:51.734808", "end": "2017-01-04 23:44:42.632473", "failed": true,
> "rc": 127, "start": "2017-01-04 22:46:50.897665", "stderr": "",
> "stdout": "", "stdout_lines": [], "warnings": []}
> 2017-01-04 23:44:42,887 p=10771 u=zuul |  NO MORE HOSTS LEFT
> *
> 2017-01-04 23:44:42,888 p=10771 u=zuul |  PLAY RECAP
> *
> 2017-01-04 23:44:42,888 p=10771 u=zuul |  node   :
> ok=13   changed=13   unreachable=0failed=1
> 
> I'm not sure what the 'NO MORE HOSTS LEFT' error means. Is there
> something wrong with the post/cleanup step for this job in newton? It's
> non-voting but we're backporting bug fixes for this code since it needs
> to work to upgrade to ocata.
> 


Is there a follow-up on the above problem?

On a separate change, I also have the placement job voting -1 because of
the ComputeFilter saying that the service is disabled due to
'connection of libvirt lost':

http://logs.openstack.org/20/415520/5/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/19fcab4/logs/screen-n-sch.txt.gz#_2017-01-11_04_33_35_995


-Sylvain

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-11 Thread William M Edmonds

On 01/10/2017 09:26 AM, gordon chung wrote:
> On 10/01/17 07:27 AM, Julien Danjou wrote:
> > On Mon, Jan 09 2017, William M Edmonds wrote:
> >
> >> I started the conversation on IRC [5], but wanted to send this to the
> >> mailing list and see if others have thoughts/concerns here and figure
out
> >> what we should do about this going forward.
> >
> > Nothing? The code has not been removed, it has been moved to a new
> > project. Ocata will be the second release for Panko, so if user did not
> > switch already during Newton, they'll have to do it for Ocata. That's a
> > lot of overlap. Two cycles to switch to a "new" service should be
enough.
>
> well it's not actually two. it'd just be the one cycle in Newton since
> it's gone in Ocata. :P
>
> that said, for me, the move to remove it is to avoid any needless
> additional work of maintaining two active codebases. we're a small team
> so it's work we don't have time for.
>
> as i mentioned in chat, i'm ok with reverting patch and leaving it for
> Ocata but if the transition is clean (similiar to how aodh was split)
> i'd rather not waste resources on maintaining residual 'dead' code.

After discussing with my team, I think we will need to propose a revert.
The deprecation process was not followed correctly here. We will start
working on moving to panko, but we are not sure we can contain that for
Ocata. Please follow the deprecation process correctly in future and we can
avoid this hassle for everyone.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [nova] [placement] Which service is using port 8778?

2017-01-11 Thread Chris Dent

On Tue, 10 Jan 2017, Mohammed Naser wrote:


We use virtual hosts, haproxy runs on our VIP at port 80 and port
443 (SSL) (with keepalived to make sure it’s always running) and
we use `use_backend` to send to the appropriate backend, more
information here:

http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/


Thanks for writing about this, the way you're doing things is
deliciously sane. When this discussion initially came up I was
surprised to hear that people were deploying with any correspondence
between what they had in the service catalog and the explicit
(internal) hosts and (internal) ports on which they were deploying
the services. Your model is what I've been assuming people would
(and actually) do:

* host the WSGI applications somewhere (anywhere)
* have front-end proxies / load balancers / HA services dispatching
  to those backends based on either host name or a prefix on the URL
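
As a concrete illustration of that dispatching step, a minimal haproxy sketch of
host-header based routing could look like the following (hostnames, ports and
backend addresses are invented for the example, not taken from Mohammed's setup):

```
frontend public
    bind *:443 ssl crt /etc/haproxy/certs/api.pem
    # choose the backend from the Host: header of the request
    use_backend placement_api if { hdr(host) -i placement.example.com }
    use_backend compute_api   if { hdr(host) -i compute.example.com }

backend placement_api
    # the internal host:port is an implementation detail; only the
    # catalog URL (https://placement.example.com/) is user-visible
    server placement01 192.0.2.10:8778 check

backend compute_api
    server compute01 192.0.2.11:8774 check
```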

This means that what shows up for the configured listening host and
port in somewhere like puppet-placement's actual installation of the
service is very likely completely different from what shows up in
whatever is writing the service catalog.


It makes our catalog nice and neat, we have a
-.vexxhost.net  internal
naming convention, so our catalog looks nice and clean and the API
calls don’t get blocked by firewalls (the strange ports might be
blocked on some customer-side firewalls).


[catalog snipped]


I’d be more than happy to give my comments, but I think this is
the best way.  Prefixes can work too and would make things easy
during dev, but in a production deployment, I would rather not
deal with something like that.  Also, all of those are CNAME
records pointing to api-.vexxhost.net
 so it makes it easy to move things over if
needed.  I guess the only problem is DNS setup overhead


The reason for starting to use prefixes in devstack has been because
it is easy to manage when there's just the one running apache and
modifying the /etc/hosts table was not considered. Since this topic
came up there's been discussion of adding hosts (for each service)
to /etc/hosts as a way of allowing different virtual hosts for each
service, all on the same port. This allows for the desired
cleanliness and preserves different log files for each service (when
using prefixes, it is harder to manage the error logs).

These concerns that are present in devstack don't apply in "real"
installations where having a reverse proxy of some kind is the norm.

So to bring this back round to puppet and ports: Should puppet be
expressing a default port at all? It really depends on whether the
intention is to allow multiple services to run in the same server on
the same host, how logging is being managed, whether apache is being
used, etc.

Should each service have a prescribed default port to avoid
collisions? I think not. I think the ports that the services run on,
as exposed to the users, should always be 80 and 443 (so no need to
define a port, just a scheme) and the internal ports, if necessary,
should be up to the deployer and their own internal plans. If we
define a default port, people will use it and expose it to users.

imho, iana(deployer), ymmv, etc

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-11 Thread Davanum Srinivas
Mehdi,

I'd support switching g-r to make oslo.messaging work. period. This is
dragged on way too long.

Thanks,
Dims

On Wed, Jan 11, 2017 at 2:25 AM, Mehdi Abaakouk  wrote:
> The library's final release is really soon, and we are still blocked on
> this topic. If this is not solved, we will once more release an
> unusable driver in oslo.messaging.
>
> I want to point out that people currently use the kafka driver in
> production, with 'downstream patches' that have been ready for a year
> to make it work.
>
> We recently removed the kafka dep from oslo.messaging to be able to merge
> some of these patches. But we can't remove the experimental tag from
> this driver until the dependency issue is solved.
>
> So what can we do to unblock this situation ?
>
>
> On Fri, Jan 06, 2017 at 02:31:28PM +0100, Mehdi Abaakouk wrote:
>>
>> Any progress ?
>>
>> On Thu, Dec 08, 2016 at 08:32:54AM +1100, Tony Breeds wrote:
>>>
>>> On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:

 I wasn’t able to set a test up on Friday and with all the other work I
 have for the next few days I doubt I’ll be able to get to it much before
 Wednesday.
>>>
>>>
>>> It's Wednesday so can we have an update?
>>>
>>> Yours Tony.
>>
>>
>> --
>> Mehdi Abaakouk
>> mail: sil...@sileht.net
>>
>> irc: sileht
>
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Thierry Carrez
Thierry Carrez wrote:
> [...]
> The fact that you're having hard discussions in Kolla about "adding new
> deliverables" produced by distinct groups of people indicates that you
> may be using Kolla as an old-style "program" rather than as a single
> team. Why not set them up as separate project teams ? What am I missing
> here ?

Answering my own question using Michał's previous answer in the thread:

Michał Jastrzębski wrote:
> Having single Kolla umbrella has practical benefits which I would hate
> to lose quite frankly. One of which would be that Kolla is being
> evaluated by lot of different companies, and having full separation
> between projects would make navigation of a landscape harder.

That sounds like you're building a "Kolla" brand and afraid that a more
distributed project structure would hurt that... So this is going a bit
against the grain of the OpenStack project structure (which is designed
to facilitate people to openly collaborate, not really to create
sub-brands).

Also when you say companies evaluate "Kolla", in the end don't they
choose one of the kolla-* flavors to deploy, rather than "Kolla" ? It
feels like multiple projects could depend on Kolla images (which
everyone seems to be very happy with) without breaking that ?

> Another reason is single community which we value - there is no full
> separation even between kolla-ansible and kolla-k8s (ansible still
> generates config files for k8s for example), and further separation of
> projects would hurt cooperation, and I don't think we've hit situation
> when it's necessary. I'm not ready to have this discussion yet, and
> I'm personally quite opposed to this.

Wondering if this lack of separation is an artifact of the
single-project-team model you picked, rather than a reason to keep it...
Stronger contracts and proper decomposition of roles sounds like a
worthwhile goal ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Thierry Carrez
Michał Jastrzębski wrote:
> I created a CIVS poll with the options we discussed. Every core member should
> get a link to the poll; if that's not the case, please let me know.

Just a quick sidenote to explain how the "big-tent" model of governance
plays in here...

In the previous project structure model, we had "programs". If you
wanted to do networking stuff, you had to join the Networking program
(neutron). If you worked on object storage, you had to join the Object
Storage program (swift). The main issue with this model is that it
prevented alternate approaches from emerging (as a program PTL could
just refuse its emergence to continue to "own" that space). It also
created weird situations where there would be multiple distinct groups
of people in a program, but a single PTL to elect to represent them all.
That created unnecessary political issues within programs and tension
around PTL election.

Part of the big-tent project structure reform was to abolish programs
and organize our work around "teams", rather than "themes". Project
teams should be strongly aligned with a single team of people that work
together. That allowed some amount of competition to emerge (we still
try to avoid "gratuitous duplication of effort"), but most importantly
made sure groups of people could "own" their work without having to
defer to an outside core team or PTL. So if you have a distinct team, it
should be its own separate project team with its own PTL. There is no
program or namespace anymore. As a bonus side-effect, it made sure teams
would not indefinitely grow, and we all know that it's difficult to grow
core teams (and trust) beyond a certain point.

This is why we have multiple packaging project teams, each specialized
in a given package orchestration mechanism, rather than have a single
"Packaging" program with a single PTL and Ansible / Puppet / Chef
fighting in elections to get their man at the helm. This is why the
Storlets team, while deeply related to Swift and in very good
collaboration terms with them, was set up as a separate project team.
Different people, different team.

The fact that you're having hard discussions in Kolla about "adding new
deliverables" produced by distinct groups of people indicates that you
may be using Kolla as an old-style "program" rather than as a single
team. Why not set them up as separate project teams ? What am I missing
here ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [infra] drbdmanage is no more GPL2

2017-01-11 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2017-01-10 09:44:06 -0600 (-0600), Sean McGinnis wrote:
> [...]
>> It doesn't look like much has changed here. There has been one commit
>> that only slightly modified the new license: [1]
>>
>> IANAL, and I don't want to make assumption on what can and can't be
>> done, so looking to other more informed folks. Do we need to remove this
>> from the Jenkins run CI tests?
>>
>> Input would be appreciated.
> [...]
> 
> Our chosen platform distributors aren't ever going to incorporate
> software with such license terms so it's not something I would, from
> a CI toolchain perspective, support installing on our test servers.
> The only obvious solutions are to stick with testing an older
> release upstream (which is only useful for so long) and/or switch to
> third-party CI for newer releases (perhaps dropping official support
> for the driver entirely if Cinder feels it's warranted).

An alternative would be to fork drbdmanage from the last GPL commit and
try to maintain it in a workable state from there as a GPLv2 library on
PyPI. This is obviously not a valid suggestion if we constantly rely on
new features / updates in that library (I have no idea if that's the case).

Overall, it sounds like a good idea (as we revisit driver visibility) to
indicate each driver's external dependencies and under which licenses and
conditions you can use / modify / redistribute them.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] updating to pycryptodome from pycrypto

2017-01-11 Thread Matthew Thode
So, pycrypto decided to rename itself (to pycryptodome) a while ago.  At the same
time they made an ABI change.  This is causing projects that depend on it to
have to handle both at the same time.  While some projects have
migrated, most have not.

A problem has come up where a project has a CVE (pysaml2) and the fix is
only in versions released after the switch to pycryptodome.  This means that in
order to consume the fix in a python-native way the pycrypto
dependency would need to be updated to pycryptodome in all projects installed
in the same namespace as pysaml2.

Possible solutions:

update everything to pycryptodome
  * would be the best going forward
  * a ton of work very late in the cycle

have upstream pysaml2 release a fix based on the code before the change
  * less work
  * should still circle around and update the world in pike
  * 4.0.2 was the last release before the switch; 4.0.3 made the change
    * would necessitate a 4.0.2.1 release
    * the tag was removed, but can hopefully be recovered for a checkout/branch
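
Purely as an illustration of what the second option would mean for consumers, the
interim pin in a project's requirements could look something like this (version
specifiers are illustrative only; the real bounds would depend on what upstream
actually tags):

```
# stay on the pre-pycryptodome code line until the ecosystem migrates in Pike
pysaml2>=2.4.0,!=4.0.3,<4.1.0   # picks up a hypothetical 4.0.2.1 CVE fix
```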


Here's the upstream bug to browse at your leisure :)

https://github.com/rohe/pysaml2/issues/366

-- 
Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-11 Thread Yujun Zhang
I have just realized that "abstract alarm" is not a good term. What I was talking
about are *fault* and *alarm*.

Fault is what actually happens, and alarm is how it is detected (or
deduced).

On Wed, Jan 11, 2017 at 5:13 PM Yujun Zhang 
wrote:

> Yes, if we consider the Vitrage scenario evaluator as a pseudo monitor.
>
> I think YinLiYin's idea is a reasonable requirement from end user. They
> care more about the *real faults* in the system, not how they are
> detected. Though it will bring much challenge to design and engineering, it
> creates value for customers. I'm quite positive on this evolution.
>
> One possible solution would be introducing a high level (abstract)
> template from users view. Then convert it to Vitrage scenario templates (or
> directly to graph). The *more sources* (nagios, vitrage deduction) for an
> abstract alarm we get from the system, the *more confidence* we get for a
> real fault. And the confidence of an alarm could be included in the
> scenario condition.
>
> On Wed, Jan 11, 2017 at 4:08 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
> You are right. But as I see it, the case of Vitrage suspect vs. the real
> Nagios alarm is just one example of the more general case of two monitors
> reporting the same alarm.
>
> Don’t you think so?
>
>
>
> *From: *Yujun Zhang 
>
>
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
>
> *Date: *Wednesday, 11 January 2017 at 09:46
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, "yinli...@zte.com.cn" <
> yinli...@zte.com.cn>
> *Cc: *"han.jin...@zte.com.cn" , "
> wang.we...@zte.com.cn" , "zhang.yuj...@zte.com.cn"
> , "jia.peiy...@zte.com.cn" <
> jia.peiy...@zte.com.cn>, "gong.yah...@zte.com.cn" 
>
>
> *Subject: *Re: [openstack-dev] [Vitrage] About alarms reported by
> datasource and the alarms generated by vitrage evaluator
>
>
>
> Hi, Ifat
>
>
>
> If I understand it correctly, your concerns are mainly on same alarm from
> different monitor, but not "suspect" status as discussed in another thread.
>
>
>
> On Tue, Jan 10, 2017 at 10:21 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
> Hi Yinliyin,
>
>
>
> At first I thought that changing the deduced to be a property on the alarm
> might help in solving your use case. But now I think most of the problems
> will remain the same:
>
>
>
> ·  It won’t solve the general problem of two different monitors that
> raise the same alarm
>
> ·  It won’t solve possible conflicts of timestamp and severity between
> different monitors
>
> ·  It will make the decision of when to delete the alarm more complex
> (delete it when the deduced alarm is deleted? When Nagios alarm is deleted?
> both? And how to change the timestamp and severity in these cases?)
>
>
>
> So I don’t think that making this change is beneficial.
>
> What do you think?
>
>
>
> Best Regards,
>
> Ifat.
>
>
>
>
>
> *From: *"yinli...@zte.com.cn" 
> *Date: *Monday, 9 January 2017 at 05:29
> *To: *"Afek, Ifat (Nokia - IL)" 
> *Cc: *"openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>, "han.jin...@zte.com.cn" <
> han.jin...@zte.com.cn>, "wang.we...@zte.com.cn" , "
> zhang.yuj...@zte.com.cn" , "
> jia.peiy...@zte.com.cn" , "gong.yah...@zte.com.cn"
> 
> *Subject: *Re: [openstack-dev] [Vitrage] About alarms reported by
> datasource and the alarms generated by vitrage evaluator
>
>
>
> Hi Ifat,
>
>  I think there is a situation that all the alarms are reported by
> the monitored system. We use vitrage to:
>
> 1.  Found the relationships of the alarms, and find the root
> cause.
>
> 2.  Deduce the alarm before it really occured. This comprise
> two aspects:
>
>  1) A cause B:  When A occured,  we deduce that B would
> occur
>
>  2) B is caused by A:  When B occured, we deduce that A
> must occured
>
> In "2",   we do expect vitrage to raise the alarm before the
> alarm is reported because the alarm would be lost or be delayed for some
> reason.  So we would write "raise alarm" actions in the scenarios of the
> template.  I think that the alarm is reported or is deduced should be a
> state property of the alarm. The vertex reported and the vertex deduced of
> the same alarm should be merged to one vertex.
>
>
>
>  Best Regards,
>
>  Yinliyin.
>
> Original Mail
>
> *From:* <ifat.a...@nokia.com>;
>
> *To:* <openstack-dev@lists.openstack.org>;
>
> *Cc:* 韩静6838;王维雅00042110;章宇军10200531;贾培源10101785;龚亚辉6092001895;
>
> *Date:* 2017-01-07 02:18
>
> *Subject:* Re: [openstack-dev] [Vitrage] About alarms reported by
> 

Re: [Openstack-operators] Mitaka to Newton Provider network issues

2017-01-11 Thread Saverio Proto
Hello,
In the upgrade, did the version of OVS change?

What OpenStack distribution are you using?

thanks

Saverio


2017-01-10 16:30 GMT+01:00 Telmo Morais :
>
> Hi All,
>
> We are currently on the process of upgrading from Mitaka to Newton, and on
> the upgraded compute nodes we lost all connectivity on provider networks.
>
> After digging through the ovs and ovs agent, we found that if we have only
> ONE provider network configured everything works as expected. But as soon as
> we add more provider networks in the config files we lose connectivity in
> them all. We also noticed that some flows in ovs are created, probably not
> all of them, as it doesn't work.
>
> Anyone has seen this behavior before?
>
> PS: no changes to the config were made during the upgrade.
>
> Thanks.
>
> Telmo Morais
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [cinder]Can I run cinder-volume and cinder-backup on a same host?

2017-01-11 Thread Rikimaru Honjo

Hi All,

I have a question about cinder.
Can I run cinder-volume and cinder-backup on a same host?

I use a cinder driver that uses the iSCSI protocol.
I am afraid that iSCSI operations will conflict between cinder-volume and
cinder-backup.

e.g. (Caution: this is just a guess.)
If cinder-backup executes "iscsiadm -rescan" while cinder-volume is terminating a
connection, the iSCSI connection may remain behind unexpectedly.

Is there a recommended architecture for this?
Please share your knowledge if you have any.
--
Rikimaru Honjo
honjo.rikim...@po.ntts.co.jp


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-11 Thread Yujun Zhang
Yes, if we consider the Vitrage scenario evaluator as a pseudo monitor.

I think YinLiYin's idea is a reasonable requirement from the end user. They
care more about the *real faults* in the system, not how they are detected.
Though it will bring many challenges to design and engineering, it creates
value for customers. I'm quite positive on this evolution.

One possible solution would be introducing a high-level (abstract) template
from the user's view. Then convert it to Vitrage scenario templates (or directly
to the graph). The *more sources* (nagios, vitrage deduction) we get for an
abstract alarm from the system, the *more confidence* we get in a real
fault. And the confidence of an alarm could be included in the scenario
condition.

On Wed, Jan 11, 2017 at 4:08 PM Afek, Ifat (Nokia - IL) 
wrote:

> You are right. But as I see it, the case of Vitrage suspect vs. the real
> Nagios alarm is just one example of the more general case of two monitors
> reporting the same alarm.
>
> Don’t you think so?
>
>
>
> *From: *Yujun Zhang 
>
>
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
>
> *Date: *Wednesday, 11 January 2017 at 09:46
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, "yinli...@zte.com.cn" <
> yinli...@zte.com.cn>
> *Cc: *"han.jin...@zte.com.cn" , "
> wang.we...@zte.com.cn" , "zhang.yuj...@zte.com.cn"
> , "jia.peiy...@zte.com.cn" <
> jia.peiy...@zte.com.cn>, "gong.yah...@zte.com.cn" 
>
>
> *Subject: *Re: [openstack-dev] [Vitrage] About alarms reported by
> datasource and the alarms generated by vitrage evaluator
>
>
>
> Hi, Ifat
>
>
>
> If I understand it correctly, your concerns are mainly on same alarm from
> different monitor, but not "suspect" status as discussed in another thread.
>
>
>
> On Tue, Jan 10, 2017 at 10:21 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
> Hi Yinliyin,
>
>
>
> At first I thought that changing the deduced to be a property on the alarm
> might help in solving your use case. But now I think most of the problems
> will remain the same:
>
>
>
> ·  It won’t solve the general problem of two different monitors that
> raise the same alarm
>
> ·  It won’t solve possible conflicts of timestamp and severity between
> different monitors
>
> ·  It will make the decision of when to delete the alarm more complex
> (delete it when the deduced alarm is deleted? When Nagios alarm is deleted?
> both? And how to change the timestamp and severity in these cases?)
>
>
>
> So I don’t think that making this change is beneficial.
>
> What do you think?
>
>
>
> Best Regards,
>
> Ifat.
>
>
>
>
>
> *From: *"yinli...@zte.com.cn" 
> *Date: *Monday, 9 January 2017 at 05:29
> *To: *"Afek, Ifat (Nokia - IL)" 
> *Cc: *"openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>, "han.jin...@zte.com.cn" <
> han.jin...@zte.com.cn>, "wang.we...@zte.com.cn" , "
> zhang.yuj...@zte.com.cn" , "
> jia.peiy...@zte.com.cn" , "gong.yah...@zte.com.cn"
> 
> *Subject: *Re: [openstack-dev] [Vitrage] About alarms reported by
> datasource and the alarms generated by vitrage evaluator
>
>
>
> Hi Ifat,
>
>  I think there is a situation that all the alarms are reported by
> the monitored system. We use vitrage to:
>
> 1.  Found the relationships of the alarms, and find the root
> cause.
>
> 2.  Deduce the alarm before it really occured. This comprise
> two aspects:
>
>  1) A cause B:  When A occured,  we deduce that B would
> occur
>
>  2) B is caused by A:  When B occured, we deduce that A
> must occured
>
> In "2",   we do expect vitrage to raise the alarm before the
> alarm is reported because the alarm would be lost or be delayed for some
> reason.  So we would write "raise alarm" actions in the scenarios of the
> template.  I think that the alarm is reported or is deduced should be a
> state property of the alarm. The vertex reported and the vertex deduced of
> the same alarm should be merged to one vertex.
>
>
>
>  Best Regards,
>
>  Yinliyin.
>
> Original Mail
>
> *From:* <ifat.a...@nokia.com>;
>
> *To:* <openstack-dev@lists.openstack.org>;
>
> *Cc:* 韩静6838;王维雅00042110;章宇军10200531;贾培源10101785;龚亚辉6092001895;
>
> *Date:* 2017-01-07 02:18
>
> *Subject:* Re: [openstack-dev] [Vitrage] About alarms reported by
> datasource and the alarms generated by vitrage evaluator
>
>
>
> Hi YinLiYin,
>
>
>
> This is an interesting question. Let me divide my answer to two parts.
>
>
>
> First, the case that you described with Nagios and Vitrage. This problem
> depends on the specific Nagios tests that you 

Re: [Openstack] nova backup - instances unreachable

2017-01-11 Thread Eugen Block

Hi,

this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to ensure that
all data is flushed to disk."


So if the VM is not shut down, it's frozen to prevent data loss (I
guess). Depending on your storage backend, there are other ways to
perform backups of your VMs.
We use Ceph as the backend for nova, glance and cinder. Ceph stores the
disks, images and volumes as RADOS block device (RBD) objects. We have a
backup script that creates snapshots of these RBDs, which are exported
to our backup drive. This way the running VM is not stopped or
frozen, and the user doesn't notice any issues. Unlike a nova snapshot,
the rbd snapshot is created within a few seconds. After a
successful backup the snapshots are removed.
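
To make that concrete for anyone who wants to try the same approach, here is only a
minimal sketch of one backup step, with an invented pool name, image name and
destination path (Eugen's actual script is not shown here):

```
#!/bin/bash
# Snapshot one Ceph-backed disk and export it to the backup drive.
POOL=vms
IMAGE=3f2e1c4a-instance-disk        # invented RBD image name
SNAP=backup-$(date +%Y%m%d)

rbd snap create ${POOL}/${IMAGE}@${SNAP}          # instant, copy-on-write snapshot
rbd export ${POOL}/${IMAGE}@${SNAP} /backup/${IMAGE}-${SNAP}.img
rbd snap rm ${POOL}/${IMAGE}@${SNAP}              # clean up after a successful export
```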


Hope this helps! If you are interested in Ceph, visit [1].

Regards,
Eugen

[1] http://docs.ceph.com/docs/giant/start/intro/


Zitat von John Petrini :


Hello,

I've just started experimenting with nova backup and discovered that there
is a period of time during the snapshot where the instance becomes
unreachable. Is this behavior expected during a live snapshot? Is there any
way to prevent this?

___

John Petrini




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [docs][badges][all] Docs gate broken for projects that include README.rst in docs

2017-01-11 Thread Flavio Percoco

On 09/01/17 17:35 +0100, Flavio Percoco wrote:

On 09/01/17 15:02 +0100, Andreas Jaeger wrote:

On 2016-12-12 09:22, Flavio Percoco wrote:

On 11/12/16 13:32 +0100, Flavio Percoco wrote:

On 09/12/16 17:20 +0100, Flavio Percoco wrote:

Greetings,

Some docs jobs seem to be broken by the latest (or not?) docutils
release. The breakage seems to be related to the recent addition of the
badges patch. The docs generation doesn't like to have remote images.
It used to be a warning but it seems to have turned into an error now.
While this is reported and fixed upstream, we can work around the issue
by tagging the image as remote.

An example of this fix can be found here:
https://review.openstack.org/#/q/topic:readme-badge-fix

Note that this is mostly relevant for projects that include the
readme files in
their documentation. If your project doesn't do it, you can ignore
this email.
That said, I'd recommend all projects to do it.



Apparently this "fix" doesn't render the image, which is far from the
ideal
solution. Hang on while we find a better fix.


Ok, here's the actual "fix" for this issue. We're now skipping version
0.13.1 of
docutils as that's breaking several docs dates. if your project is using
the
requirements constraints, you should not be hitting this issue. However,
if your
project isn't using the upper constraints, then you may want to do
something
similar to this[0][1].

This issue has been reported upstream [2]

[0] https://review.openstack.org/#/c/409630/
[1] https://review.openstack.org/#/c/409529/
[2] https://sourceforge.net/p/docutils/bugs/301/
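
For projects that don't consume upper-constraints, the pin in question is presumably
a one-line requirements exclusion along these lines (a sketch only; the linked
reviews [0][1] show the exact change that was merged):

```
# docutils 0.13.1 turns the remote-image warning for the badges into an error
docutils!=0.13.1
```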


I see that upstream has closed the issue with "Fixed in Sphinx 1.5.1".

Should we update Sphinx to 1.5.1? Anybody want to go for it? Right now,
the global requirements file has:

global-requirements.txt:sphinx>=1.2.1,!=1.3b1,<1.4  # BSD



Happy to help here,


As promised, I took a stab at this. I've submitted a patch to update the
sphinx requirement[0] and I've tested this in one of the projects[1].

[0] https://review.openstack.org/#/c/418772/
[1] https://review.openstack.org/#/c/418611/

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Jan.11

2017-01-11 Thread joehuang
Hello, team,

Agenda of Jan.11 weekly meeting:

  1.  Ocata release preparation
  2.  shadow_agent for VxLAN L2 networking/L3 DVR issue
  3.  shared_vlan confusion when creating vlan network in some specified region
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 13:00.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang (joehuang)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-11 Thread Afek, Ifat (Nokia - IL)
You are right. But as I see it, the case of Vitrage suspect vs. the real Nagios 
alarm is just one example of the more general case of two monitors reporting 
the same alarm.
Don’t you think so?

From: Yujun Zhang 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 11 January 2017 at 09:46
To: "OpenStack Development Mailing List (not for usage questions)" 
, "yinli...@zte.com.cn" 
Cc: "han.jin...@zte.com.cn" , "wang.we...@zte.com.cn" 
, "zhang.yuj...@zte.com.cn" , 
"jia.peiy...@zte.com.cn" , "gong.yah...@zte.com.cn" 

Subject: Re: [openstack-dev] [Vitrage] About alarms reported by datasource and 
the alarms generated by vitrage evaluator

Hi, Ifat

If I understand it correctly, your concerns are mainly on same alarm from 
different monitor, but not "suspect" status as discussed in another thread.

On Tue, Jan 10, 2017 at 10:21 PM Afek, Ifat (Nokia - IL) 
> wrote:
Hi Yinliyin,

At first I thought that changing the deduced to be a property on the alarm 
might help in solving your use case. But now I think most of the problems will 
remain the same:

·  It won’t solve the general problem of two different monitors that raise the 
same alarm
·  It won’t solve possible conflicts of timestamp and severity between 
different monitors
·  It will make the decision of when to delete the alarm more complex (delete 
it when the deduced alarm is deleted? When Nagios alarm is deleted? both? And 
how to change the timestamp and severity in these cases?)

So I don’t think that making this change is beneficial.
What do you think?

Best Regards,
Ifat.


From: "yinli...@zte.com.cn" 
>
Date: Monday, 9 January 2017 at 05:29
To: "Afek, Ifat (Nokia - IL)" >
Cc: 
"openstack-dev@lists.openstack.org" 
>, 
"han.jin...@zte.com.cn" 
>, 
"wang.we...@zte.com.cn" 
>, 
"zhang.yuj...@zte.com.cn" 
>, 
"jia.peiy...@zte.com.cn" 
>, 
"gong.yah...@zte.com.cn" 
>
Subject: Re: [openstack-dev] [Vitrage] About alarms reported by datasource and 
the alarms generated by vitrage evaluator



Hi Ifat,

 I think there is a situation where all the alarms are reported by the
monitored system. We use vitrage to:

1.  Find the relationships between the alarms, and find the root cause.

2.  Deduce an alarm before it actually occurs. This comprises two
aspects:

     1) A causes B:  When A occurs, we deduce that B will occur

     2) B is caused by A:  When B occurs, we deduce that A must have
occurred

In "2", we do expect vitrage to raise the alarm before the alarm
is reported, because the alarm could be lost or delayed for some reason.  So
we would write "raise alarm" actions in the scenarios of the template.  I think
that whether the alarm is reported or deduced should be a state property of the
alarm. The reported vertex and the deduced vertex of the same alarm should be
merged into one vertex.



 Best Regards,

 Yinliyin.

Original Mail
From: <ifat.a...@nokia.com>;
To: <openstack-dev@lists.openstack.org>;
Cc: 韩静6838;王维雅00042110;章宇军10200531;贾培源10101785;龚亚辉6092001895;
Date: 2017-01-07 02:18
Subject: Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the
alarms generated by vitrage evaluator
alarms generated by vitrage evaluator


Hi YinLiYin,

This is an interesting question. Let me divide my answer to two parts.

First, the case that you described with Nagios and Vitrage. This problem 
depends on the specific Nagios tests that you configure in your system, as well 
as on the Vitrage templates that  you use. For example, you can use 
Nagios/Zabbix to monitor the physical layer, and Vitrage to raise deduced 
alarms on the virtual and application layers. This way you will never have 
duplicated alarms. If you want to use Nagios to monitor the other layers  as 
well, you can simply modify Vitrage templates so they don’t raise the deduced 
alarms that Nagios may generate, and use the templates to show RCA between 
different Nagios alarms.

Now let’s 

Re: [OpenStack-Infra] [OpenStack-docs] [infra][docs] Steps to migrate docs.o.o to new AFS based server

2017-01-11 Thread Andreas Jaeger
With my spider I found a few directories that were completely missing, and
we'll migrate those over later today manually. Once that's done I'll rerun
the spider and send the result here so that people can fix documents or
file bugs.

I also fixed a few broken links myself.


Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra