Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Thomas Herve
On Tue, Jan 10, 2017 at 10:41 PM, Clint Byrum  wrote:
> Excerpts from Zane Bitter's message of 2017-01-10 15:28:04 -0500:
>> location is a required property:
>>
>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image
>>
>> The resource type literally does not do anything else but expose a Heat
>> interface to a feature of Glance that no longer exists in v2. That's
>> fundamentally why "add v2 support" has been stalled for so long ;)
>>
>
> I think most of this has been beating around the bush, and the statement
> above is the heart of the issue.
>
> The functionality was restricted and mostly removed from Glance for a
> reason. Heat users will have to face that reality just like users of
> other orchestration systems have to.
>
> If a cloud has v1.. great.. take a location.. use it. If they have v2..
> location explodes. If you want to get content in to that image, well,
> other systems have to deal with this too. Ansible's os_image will upload
> a local file to glance for instance. Terraform doesn't even include
> image support.
>
> So the way to go is likely to just make location optional, and start
> to use v2 when the catalog says to. From there, Heat can probably help
> make the v2 API better, and perhaps add support to the Heat API to
> tell the user where they can upload blobs of data for Heat to then feed
> into Glance.

Making location optional doesn't really make sense. We don't have any
mechanism in a template to upload data, so it would just create an
empty shell that you can't use to boot instances from.

I think this is going where I thought it would: let's not do anything.
The image resource is there for v1 compatibility, but there is no
point in having a v2 resource, at least right now.

We could document how to hide the resource in Heat if you don't deploy
Glance v1.
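For operators, one way such documentation could look is a resource_registry override in Heat's global environment. This is only a sketch: the path below is the common default but may differ per deployment, and mapping to OS::Heat::None turns the resource into a no-op rather than truly hiding it from the resource-type list.

```yaml
# /etc/heat/environment.d/default.yaml (illustrative path)
resource_registry:
  # Neutralize the v1-only image resource so templates referencing it
  # no longer call the removed Glance v1 location API.
  "OS::Glance::Image": "OS::Heat::None"
```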

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-10 Thread Yujun Zhang
Hi, Ifat

If I understand it correctly, your concerns are mainly about the same alarm
coming from different monitors, not about the "suspect" status discussed in
another thread.

On Tue, Jan 10, 2017 at 10:21 PM Afek, Ifat (Nokia - IL) <
ifat.a...@nokia.com> wrote:

Hi Yinliyin,



At first I thought that changing 'deduced' to be a property on the alarm
might help solve your use case. But now I think most of the problems
will remain the same:



   - It won’t solve the general problem of two different monitors that
   raise the same alarm
   - It won’t solve possible conflicts of timestamp and severity between
   different monitors
   - It will make the decision of when to delete the alarm more complex
   (delete it when the deduced alarm is deleted? When the Nagios alarm is
   deleted? Both? And how to change the timestamp and severity in these cases?)



So I don’t think that making this change is beneficial.

What do you think?



Best Regards,

Ifat.





*From: *"yinli...@zte.com.cn" 
*Date: *Monday, 9 January 2017 at 05:29
*To: *"Afek, Ifat (Nokia - IL)" 
*Cc: *"openstack-dev@lists.openstack.org" ,
"han.jin...@zte.com.cn" , "wang.we...@zte.com.cn" <
wang.we...@zte.com.cn>, "zhang.yuj...@zte.com.cn" ,
"jia.peiy...@zte.com.cn" , "gong.yah...@zte.com.cn"

*Subject: *Re: [openstack-dev] [Vitrage] About alarms reported by
datasource and the alarms generated by vitrage evaluator



Hi Ifat,

 I think there is a situation where all the alarms are reported by
the monitored system. We use vitrage to:

1.  Find the relationships between the alarms, and find the root
cause.

2.  Deduce an alarm before it really occurs. This comprises
two aspects:

 1) A causes B:  when A occurs, we deduce that B will
occur

 2) B is caused by A:  when B occurs, we deduce that A
must have occurred

In "2", we do expect vitrage to raise the alarm before the
alarm is reported, because the alarm could be lost or delayed for some
reason.  So we would write "raise alarm" actions in the scenarios of the
template.  I think that whether an alarm is reported or deduced should be a
state property of the alarm. The reported vertex and the deduced vertex of
the same alarm should be merged into one vertex.



 Best Regards,

 Yinliyin.

Original Mail

*From:* <ifat.a...@nokia.com>;

*To:* <openstack-dev@lists.openstack.org>;

*Cc:* 韩静6838; 王维雅00042110; 章宇军10200531; 贾培源10101785; 龚亚辉6092001895;

*Date:* 7 January 2017 02:18

*Subject:* *Re: [openstack-dev] [Vitrage] About alarms reported by
datasource and the alarms generated by vitrage evaluator*



Hi YinLiYin,



This is an interesting question. Let me divide my answer into two parts.



First, the case that you described with Nagios and Vitrage. This problem
depends on the specific Nagios tests that you configure in your system, as
well as on the Vitrage templates that you use. For example, you can use
Nagios/Zabbix to monitor the physical layer, and Vitrage to raise deduced
alarms on the virtual and application layers. This way you will never have
duplicated alarms. If you want to use Nagios to monitor the other layers
as well, you can simply modify the Vitrage templates so they don't raise the
deduced alarms that Nagios may generate, and use the templates to show RCA
between different Nagios alarms.



Now let's talk about the more general case. Vitrage can receive alarms from
different monitors, including Nagios, Zabbix, collectd and Aodh. If you are
using more than one monitor, it is possible that the same alarm (maybe
with a different name) will be raised twice. We need to create a mechanism
to identify such cases and create a single alarm with the properties of
both monitors. This has not been designed in detail yet, so if you have
any suggestions, we will be happy to hear them.
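As a starting point for that discussion, the merge could look roughly like this. This is a hypothetical sketch only, not Vitrage's actual design; the field names, the severity ordering, and the conflict-resolution rules (earliest timestamp, highest severity) are all illustrative assumptions.

```python
from datetime import datetime

# Illustrative severity ranking, used to resolve conflicts between monitors.
SEVERITY_ORDER = ["ok", "warning", "critical"]

def merge_alarms(a, b):
    """Merge two alarm dicts that report the same underlying problem.

    Keeps the earliest timestamp and the highest severity, and remembers
    every monitor that reported the alarm.
    """
    return {
        "name": a["name"],
        "timestamp": min(a["timestamp"], b["timestamp"]),
        "severity": max(a["severity"], b["severity"],
                        key=SEVERITY_ORDER.index),
        "monitors": sorted(set(a["monitors"]) | set(b["monitors"])),
    }

nagios = {"name": "host_down", "timestamp": datetime(2017, 1, 10, 12, 0),
          "severity": "warning", "monitors": ["nagios"]}
deduced = {"name": "host_down", "timestamp": datetime(2017, 1, 10, 11, 55),
           "severity": "critical", "monitors": ["vitrage"]}

merged = merge_alarms(nagios, deduced)
print(merged["severity"], merged["monitors"])  # critical ['nagios', 'vitrage']
```

Note that even this toy version makes the deletion question from the list above visible: the merged alarm can only be removed once every monitor in "monitors" has cleared it.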



Best Regards,

Ifat.





*From: *"yinli...@zte.com.cn" <yinli...@zte.com.cn>
*Reply-To: *"OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
*Date: *Friday, 6 January 2017 at 03:27
*To: *"openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org
>
*Cc: *"gong.yah...@zte.com.cn" <gong.yah...@zte.com.cn>, "
han.jin...@zte.com.cn" <han.jin...@zte.com.cn>, "wang.we...@zte.com.cn" <
wang.we...@zte.com.cn>, "jia.peiy...@zte.com.cn" <jia.peiy...@zte.com.cn>, "
zhang.yuj...@zte.com.cn" <zhang.yuj...@zte.com.cn>
*Subject: *[openstack-dev] [Vitrage] About alarms reported by datasource
and the alarms generated by vitrage evaluator



Hi all,

   Vitrage generates alarms according to the templates. All the alarms raised
by vitrage have the type "vitrage". Suppose Nagios has an alarm A. If alarm A
is raised by the vitrage evaluator according to the action part of a scenario,
the type of alarm A is "vitrage". If Nagios reported alarm A 

Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-10 Thread Mehdi Abaakouk

The library's final release is really soon, and we are still blocked on
this topic. If this is not solved, we will once again release an
unusable driver in oslo.messaging.

I want to point out that people currently use the kafka driver in
production with 'downstream patches' that have been ready for a year to
make it work.

We recently removed the kafka dep from oslo.messaging to be able to merge
some of these patches. But we can't remove the experimental flag from
this driver until the dependency issue is solved.

So what can we do to unblock this situation?

On Fri, Jan 06, 2017 at 02:31:28PM +0100, Mehdi Abaakouk wrote:

Any progress ?

On Thu, Dec 08, 2016 at 08:32:54AM +1100, Tony Breeds wrote:

On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:

I wasn’t able to set a test up on Friday and with all the other work I
have for the next few days I doubt I’ll be able to get to it much before
Wednesday.


It's Wednesday so can we have an update?

Yours Tony.


--
Mehdi Abaakouk
mail: sil...@sileht.net

irc: sileht



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which service is using port 8778?

2017-01-10 Thread Michael Davies
On Tue, Dec 20, 2016 at 4:46 PM, Ghanshyam Mann <
ghanshyam.m...@nectechnologies.in> wrote:
[snip]

> But the OpenStack ports used by services are maintained here [3]; maybe it
> would be good for each project to add its port to this list.
>
[snip]

> ..[3] http://docs.openstack.org/newton/config-reference/
> firewalls-default-ports.html


I know this thread has moved on, but I'm not sure a list of default ports
for a firewall is the right place to be documenting this.

If there are admin services that perhaps should not, by default, be exposed
publicly - then they shouldn't be listed in such a table.  A simple
implementation might be to expose all of these, which would not be the most
secure default.

Perhaps the equivalent of /etc/services or
http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml
specifically for OpenStack might be better.
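As an illustration, an OpenStack-specific registry could mirror the /etc/services format. The entries below use the conventional default ports, but this excerpt is an example, not an authoritative registry:

```
# openstack-services (illustrative excerpt, /etc/services style)
keystone-public   5000/tcp    # Identity API
glance-api        9292/tcp    # Image API
nova-api          8774/tcp    # Compute API
placement-api     8778/tcp    # Placement API (the port this thread started with)
neutron-api       9696/tcp    # Networking API
```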

Hope this helps,

Michael...
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Saravanan KR
Thanks Emilien and Giulio for your valuable feedback. I will start
working towards finalizing the workbook and the actions required.

> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?
I will come back on this, as I have not planned for it yet. If it
works out, I will update the etherpad.

Regards,
Saravanan KR


On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente  wrote:
> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>
>> Hello,
>>
>> The aim of this mail is to ease DPDK deployment with TripleO. I
>> would like to see if the approach of deriving THT parameters based on
>> introspection data, with high-level input, would be feasible.
>>
>> Let me brief you on the complexity of certain parameters which are
>> related to DPDK. The following parameters should be configured for a
>> well-performing DPDK cluster:
>> * NeutronDpdkCoreList (puppet-vswitch)
>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>> review)
>> * NovaVcpuPinset (puppet-nova)
>>
>> * NeutronDpdkSocketMemory (puppet-vswitch)
>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>> * Interface to bind DPDK driver (network config templates)
>>
>> The complexity of deciding some of these parameters is explained in
>> the blog [1], where the CPUs have to be chosen in accordance with the
>> NUMA node associated with the interface. We are working on a spec [2] to
>> collect the required details from the baremetal via introspection.
>> The proposal is to create a mistral workbook and actions
>> (tripleo-common), which will take minimal inputs and decide the actual
>> values of the parameters based on the introspection data. I have created a
>> simple workbook [3] with what I have in mind (not final, only a
>> wireframe). The expected output of this workflow is the list
>> of inputs for "parameter_defaults", which will be used for the
>> deployment. I would like to hear from the experts whether there are any
>> drawbacks with this approach, or any other better approach.
>
>
> hi, I am not an expert, I think John (on CC) knows more but this looks like
> a good initial step to me.
>
> once we have the workbook in good shape, we could probably integrate it in
> the tripleo client/common to (optionally) trigger it before every deployment
>
> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?
>
> https://etherpad.openstack.org/p/tripleo-ptg-pike
> --
> Giulio Fidente
> GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [blazar] Yesterday's meeting summary

2017-01-10 Thread Hiroaki Kobayashi

Hi blazar folks,

I am sharing a summary of yesterday's blazar meeting because
some parts of the meeting log were lost due to a bot failure.

# Action items

1. Share information about the PTG if there are any updates. (All)

2. List ideas for a presentation at Boston Summit
 https://etherpad.openstack.org/p/blazar-session-boston
 by the next meeting. (All)

3. Continue action items from a previous meeting on Dec 20 (All)

4. Check new features related to OPNFV Promise (All)
 L139- at https://etherpad.openstack.org/p/Blazar_status_2016


# Agreed

* Merge patch 406009 so as not to grow the pile of patches, and
drastically redesign the instance reservation in the long term.

* Clean up current reviews related to tempest.

* Decide when to update the namespace after the current in-review
patches are merged.

Best regards,
Hiroaki



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack-gate][all]support multi-region gate environment

2017-01-10 Thread joehuang
If there were an environment variable to define the role of each node, that 
would be good. To support a multi-region devstack gate/check job, we have to 
configure the localrc differently for each node's role, for example:

 For the second-region node, we need to configure it to use the primary 
node's keystone:

sub_node=$(cat /etc/nodepool/sub_nodes_private)
echo "HOST_IP=$sub_node" >>"$localrc_file"
echo "SERVICE_HOST=$HOST_IP" >>"$localrc_file"
echo "REGION_NAME=RegionTwo" >>"$localrc_file"
echo "KEYSTONE_REGION_NAME=RegionOne" >>"$localrc_file"
echo "KEYSTONE_SERVICE_HOST=$primary_node" >>"$localrc_file"
echo "KEYSTONE_AUTH_HOST=$primary_node" >>"$localrc_file"

 For the primary node, where the keystone service will be enabled:

echo "HOST_IP=$primary_node" >>"$localrc_file"
echo "REGION_NAME=RegionOne" >>"$localrc_file"

If there is only one role environment variable, then the primary node should 
always set HOST_IP to $primary_node rather than the default 127.0.0.1, because 
keystone will not work well in a multi-region scenario if HOST_IP is 127.0.0.1.

Best Regards
Chaoyi Huang (joehuang)


From: Sean M. Collins [s...@coreitpro.com]
Sent: 10 January 2017 22:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [infra][devstack-gate][all]support multi-region 
gate environment

joehuang wrote:
> In a multi-region environment (for example, two regions RegionOne and 
> RegionTwo), Keystone will be shared by RegionOne and RegionTwo, so the 
> primary node and the subnode should use different roles: one role enables 
> keystone, while the other uses the keystone on another node. Supporting a 
> multi-region setup with only one role seems to be impossible. The flag 
> "MULTI_REGION" makes the subnode play the role where no keystone will run. 
> If we don't use that flag, maybe use DEVSTACK_GATE_MULTI_REGION?
>
> This is my first patch in devstack-gate; any help or guidance will be 
> appreciated.


Basically, it's more of a note for myself at this point.

We don't directly expose a way to define a role in
devstack-gate[1][2][3][4]. We do a lot of heuristics to detect whether
or not the node is a primary node or a subnode.

Ideally, we should really have an environment variable ($ROLE ?) that
can be set by projects in  project-config, and just call the test matrix
script[5] with the role that is being set in the environment variable.

Because otherwise we end up with more if/else checks on random variables, 
like in your patch and the MULTI_KEYSTONE [6] patch, and eventually it 
becomes very difficult to maintain and extend.
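To make the idea concrete, consuming such a variable could be a simple dispatch. This is a sketch under the assumption that project-config would export something like $ROLE; the variable name, its values, and the helper function are all hypothetical:

```shell
# Hypothetical: project-config exports ROLE=primary|subnode for each node,
# replacing the heuristics over /etc/nodepool files.
configure_node() {
    case "$1" in
        primary)
            # enable keystone and the shared control plane here
            echo "primary: runs keystone" ;;
        subnode)
            # point services at the primary node's keystone here
            echo "subnode: uses primary keystone" ;;
        *)
            echo "unknown role: $1" >&2
            return 1 ;;
    esac
}

configure_node "${ROLE:-primary}"
```

The test matrix script [5] could then be called with the same $ROLE value instead of inferring it.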

Does this make sense?


[1]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L230
[2]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L259

[3]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L642

[4]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L121

[5]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L265

[6]: https://review.openstack.org/#/c/394895/
--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2017-01-10 Thread Alex Xu
Hi,

We have the weekly Nova API meeting today. The meeting is held Wednesdays
at 13:00 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] placement job is busted in stable/newton (NO MORE HOSTS LEFT)

2017-01-10 Thread Matt Riedemann
I'm trying to sort out failures in the placement job in stable/newton, 
where the tests aren't failing but something in the host 
cleanup step blows up.


Looking here I see this:

http://logs.openstack.org/57/416757/1/check/gate-tempest-dsvm-neutron-placement-full-ubuntu-xenial-nv/dfe0c38/_zuul_ansible/ansible_log.txt.gz

2017-01-04 22:46:50,761 p=10771 u=zuul |  changed: [node] => {"changed": 
true, "checksum": "7f4d51086f4bc4de5ae6d83c00b0e458b8606aa2", "dest": 
"/tmp/05-cb20affd78a84851b47992ff129722af.sh", "gid": 3001, "group": 
"jenkins", "md5sum": "2de9baa70e4d28bbcca550a17959beab", "mode": "0555", 
"owner": "jenkins", "size": 647, "src": 
"/tmp/tmpz_guiR/.ansible/remote_tmp/ansible-tmp-1483570010.54-207083993908564/source", 
"state": "file", "uid": 3000}
2017-01-04 22:46:50,775 p=10771 u=zuul |  TASK [command generated from 
JJB] **
2017-01-04 23:44:42,880 p=10771 u=zuul |  fatal: [node]: FAILED! => 
{"changed": true, "cmd": 
["/tmp/05-cb20affd78a84851b47992ff129722af.sh"], "delta": 
"0:57:51.734808", "end": "2017-01-04 23:44:42.632473", "failed": true, 
"rc": 127, "start": "2017-01-04 22:46:50.897665", "stderr": "", 
"stdout": "", "stdout_lines": [], "warnings": []}
2017-01-04 23:44:42,887 p=10771 u=zuul |  NO MORE HOSTS LEFT 
*
2017-01-04 23:44:42,888 p=10771 u=zuul |  PLAY RECAP 
*
2017-01-04 23:44:42,888 p=10771 u=zuul |  node   : 
ok=13   changed=13   unreachable=0failed=1


I'm not sure what the 'NO MORE HOSTS LEFT' error means. Is there 
something wrong with the post/cleanup step for this job in newton? It's 
non-voting but we're backporting bug fixes for this code since it needs 
to work to upgrade to ocata.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-10 Thread joehuang
Sad to know that you will step down as Neutron PTL. I had several f2f talks 
with you, and got lots of valuable feedback from you. Thanks a lot!

Best Regards
Chaoyi Huang (joehuang)

From: Armando M. [arma...@gmail.com]
Sent: 09 January 2017 22:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

Hi neutrinos,

The PTL nomination week is fast approaching [0], and as you might have guessed 
by the subject of this email, I am not planning to run for Pike. If I look back 
at [1], I would like to think that I was able to exert influence on the goals I 
set out with my first self-nomination [2].

That said, when it comes to a dynamic project like neutron, one can never claim 
to be *done done*, and for this reason I will continue to be part of the 
neutron core team and help the future PTL drive the next stage of the project's 
journey.

I must admit, I don't write this email lightly. However, I feel that it is now 
the right moment for me to step down and give someone else the opportunity to 
grow in the amazing role of neutron PTL! I have certainly loved every minute of 
it!

Cheers,
Armando

[0] https://releases.openstack.org/ocata/schedule.html
[1] 
https://review.openstack.org/#/q/project:openstack/election+owner:armando-migliaccio
[2] https://review.openstack.org/#/c/223764/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla-ansible] Am I doing this wrong?

2017-01-10 Thread Kris G. Lindgren
Hello Kolla/Kolla-ansible peoples.

I have been trying to take kolla/kolla-ansible and use it to start moving our 
existing openstack deployment into containers, and at the same time trying to 
fix some of the problems that we created with our previous deployment work 
(everything was in puppet), where we had puppet doing *everything*.  That 
eventually created a system that effectively performed actions at a distance, 
as we were never really 100% sure what puppet was going to do when we ran it, 
even with NOOP mode enabled.  So, taking the example of building and deploying 
glance via kolla-ansible, I am running into some problems/concerns and wanted 
to reach out to make sure that I am not missing something.

Things that I am noticing:
 * I need to define a number of servers in my inventory outside of the specific 
servers that I want to perform actions against.  I need to define the groups 
baremetal, rabbitmq, memcached, and control (in addition to the glance-specific 
groups); most of these seem to be gathering information for config? (Baremetal 
was needed solely to try to run the bootstrap play.)  Running a change 
specifically against "glance" causes fact gathering on a number of other 
servers, not specifically where glance is running?  My concern here is that I 
want to be able to run kolla-ansible against a specific service and know that 
only those servers are being logged into.

* I want to run a dry-run only, being able to see what will happen before it 
happens, not during; during makes it really hard to see what will happen until 
it happens. Also, supporting `ansible --diff` would really help in 
understanding what will be changed (before it happens).  Ideally, this wouldn't 
be 100% needed, but the ability to figure out what a run would *ACTUALLY* do 
on a box is what I was hoping to see.
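For what it's worth, ansible-playbook itself has flags along these lines; whether kolla-ansible's playbooks report accurately under them is exactly the open question here (the inventory and playbook names below are illustrative):

```shell
# Preview a run: report what would change without changing anything,
# and show diffs of templated files.
ansible-playbook -i inventory --check --diff --limit glance site.yml
```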

* Database tasks are run on every deploy, and the status of the change-DB-permissions 
task always reports as changed?  Even when nothing happens, which makes you wonder 
"what changed?"  Seems like this is because the task reports either a 0 or a 1, 
where there really are 3 states: did nothing, updated something, failed 
to do what was required.  Also, can someone tell me why the DB stuff is done on 
a deployment task?  Seems like the db checks/migration work should only be done 
on an upgrade or a bootstrap?

* Our database services (at least the ones we have) are not managed by our team, 
so we don't want kolla-ansible touching those (since it won't be able to). Is 
there no way to mark the DB as "externally managed"?  I.e., we don't have 
permissions to create databases or add users, but we have all other permissions 
on the databases that are created, so normal db-manage tooling works.

* Maintenance-level operations; there doesn't seem to be any built-in way to say 
'take a server out of a production state, deploy to it, test it, put it back into 
production'.  Seems like if kolla-ansible is doing haproxy for the APIs, it 
should be managing this?  Or providing an extension point to allow us to run our 
own maintenance/testing scripts?

* Config must come from kolla-ansible and generated templates.  I know we have 
a patch up for externally managed service configuration.  But if we aren't 
supposed to use kolla-ansible for generating configs (see below), why can't we 
override this piece?

Hard to determine what kolla-ansible *should* be used for:

* Certain parts of it are 'reference only' (the config tasks), and some are not 
recommended to be used at all (bootstrap?).  What are the parts of kolla-ansible 
people are actually using (and not just as a reference point)?  If parts of 
kolla-ansible are just *reference only*, then we might as well be really upfront 
about it and tell people how to disable/replace those reference pieces.

* Seems like this will cause everyone who needs to make tweaks to fork or 
create an "overlay" to override playbooks/tasks with specific functions?

Other questions:

Is kolla-ansible's design philosophy that every deployment is an upgrade?  Or 
that every deployment should include all the base-level bootstrap tests?

Because it seems to me that you have a required set of tasks that should only 
be done once (bootstrap).  Another set of tasks that should be done for 
day-to-day care/feeding: service restarts, config changes, updates to code (new 
container deployments), package updates (new docker container deployments).  And 
a final set of tasks for upgrades, where you will need to do things like db 
migrations and other special upgrade things.  It also seems like the day-to-day 
care and feeding tasks should be incredibly targeted/explicit.  For example, 
deploying a new glance container (not in an upgrade scenario): I would expect 
it to log in to the glance servers one at a time, place the server in 
maintenance mode to ensure that actions are not performed against it, 
download the new container, start the new container, test the new 
container, and if successful, place the new container into rotation and stop 
the old container. 
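The targeted day-to-day flow described above could be sketched as a rolling loop. This is purely illustrative; the host names are examples, and each echo stands in for a real step (haproxy drain, docker pull/run, smoke test) that a real playbook would perform:

```shell
# Hypothetical rolling deploy over the glance hosts, one at a time.
deploy_host() {
    host="$1"
    echo "$host: drain from haproxy (maintenance mode)"
    echo "$host: pull new glance container"
    echo "$host: start new container"
    echo "$host: smoke test new container"
    echo "$host: put back into rotation"
    echo "$host: stop old container"
}

for h in glance01 glance02; do
    deploy_host "$h"
done
```

The point of the sketch is the ordering guarantee: a host leaves rotation before anything is changed on it, and re-enters only after the new container passes a test.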

Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Giulio Fidente

On 01/04/2017 09:13 AM, Saravanan KR wrote:

Hello,

The aim of this mail is to ease DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with high-level input, would be feasible.

Let me brief you on the complexity of certain parameters which are
related to DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2] to
collect the required details from the baremetal via introspection.
The proposal is to create a mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
values of the parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts whether there are any
drawbacks with this approach, or any other better approach.
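To illustrate the intended output, the workflow would return something shaped like the following parameter_defaults snippet. Only the parameter names come from the list above; every value here is a made-up placeholder for a hypothetical 2-socket NUMA host, not a recommendation:

```yaml
parameter_defaults:
  NeutronDpdkCoreList: "'2,3'"          # PMD cores on the DPDK NIC's NUMA node
  ComputeHostCpusList: "'0,1'"          # host/OVS non-PMD cores
  NovaVcpuPinset: "'4-15'"              # remaining cores for guest vCPUs
  NeutronDpdkSocketMemory: "'1024,1024'"
  NeutronDpdkMemoryChannels: "4"
  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=16"
```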


hi, I am not an expert, I think John (on CC) knows more but this looks 
like a good initial step to me.


once we have the workbook in good shape, we could probably integrate it 
in the tripleo client/common to (optionally) trigger it before every 
deployment


would you be able to join the PTG to help us with the session on the 
overcloud settings optimization?


https://etherpad.openstack.org/p/tripleo-ptg-pike
--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-10 Thread Henry Fourie
Armando
   Appreciate your efforts, leadership and guidance.

-Louis

From: Armando M. [mailto:arma...@gmail.com]
Sent: Monday, January 09, 2017 6:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

Hi neutrinos,

The PTL nomination week is fast approaching [0], and as you might have guessed 
by the subject of this email, I am not planning to run for Pike. If I look back 
at [1], I would like to think that I was able to exercise the influence on the 
goals I set out with my first self-nomination [2].

That said, when it comes to a dynamic project like neutron one can never 
claim to be *done done*, and for this reason I will continue to be part of the 
neutron core team, and help the future PTL drive the next stage of the 
project's journey.

I must admit, I don't write this email lightly; however, I feel that it is now 
the right moment for me to step down, and give someone else the opportunity to 
grow in the amazing role of neutron PTL! I have certainly loved every minute of 
it!

Cheers,
Armando

[0] https://releases.openstack.org/ocata/schedule.html
[1] 
https://review.openstack.org/#/q/project:openstack/election+owner:armando-migliaccio
[2] https://review.openstack.org/#/c/223764/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Clint Byrum
Excerpts from Thomas Herve's message of 2017-01-10 22:15:45 +0100:
> On Tue, Jan 10, 2017 at 9:28 PM, Zane Bitter  wrote:
> > On 10/01/17 14:17, Tim Bell wrote:
> >>
> >>
> >>> On 10 Jan 2017, at 17:41, Zane Bitter wrote:
> >>>
> >>> On 10/01/17 05:25, Flavio Percoco wrote:
> 
> 
> 
> > I'd recommend Heat to not use locations, as that will require deployers
> > to either enable them for everyone or have a dedicated glance-api node
> > for Heat.
> > If we don't use location, do we have other options for the user? What
> > should the user do before creating a glance image using v2? Download the
> > image data? And then pass the image data to the glance API? I really don't
> > think that's a good way.
> >
> 
>  That *IS* how users create images. There used to be copy-from too (which
>  may or
>  may not come back).
> 
>  Heat's use case is different and I understand that but as I said in my
>  other
>  email, I do not think sticking to v1 is the right approach. I'd rather
>  move on
>  with a deprecation path or compatibility layer.
> >>>
> >>>
> >>> "Backwards-compatibility" is a wide-ranging topic, so let's break this
> >>> down into 3 more specific questions:
> >>>
> >>> 1) What is an interface that we could support with the v2 API?
> >>>
> >>> - If copy-from is not a thing then it sounds to me like the answer is
> >>> "none"? We are not ever going to support uploading a multi-GB image
> >>> file through Heat and from there to Glance.
> >>> - We could have an Image resource that creates a Glance image from a
> >>> volume. It's debatable how useful this would be in an orchestration
> >>> setting (i.e. in most cases this would have to be part of a larger
> >>> workflow anyway), but there are some conceivable uses I guess. Given
> >>> that this is completely disjoint from what the current resource type
> >>> does, we'd make it easier on everyone if we just gave it a new name.
> >>>
> >>> 2) How can we avoid breaking existing stacks that use Image resources?
> >>>
> >>> - If we're not replacing it with anything, then we can just mark the
> >>> resource type as first Deprecated, and then Hidden and switch the back
> >>> end to use the v2 API for things like deleting. As long as nobody
> >>> attempts to replace the image then the rest of the stack should
> >>> continue to work fine.
> >>>
> >>
> >> Can we only deprecate the resources using the location function but
> >> maintain backwards compatibility if the location function is not used?
> >
> >
> > location is a required property:
> >
> > http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image
> >
> > The resource type literally does not do anything else but expose a Heat
> > interface to a feature of Glance that no longer exists in v2. That's
> > fundamentally why "add v2 support" has been stalled for so long ;)
> 
> Throwing stuff against the wall, but could we solve the issue in
> heatclient? If we change it to handle the location property, upload
> the image from the client, and pass the id to Heat, it could be
> somewhat transparent to the user. We'd need to do it in Horizon
> though. For the heatclient as a library it's not perfect, but it may
> be good enough.
> 

Pretend that python doesn't exist outside the API layer of a running
service.

The REST API has to do something sane. So IMO, the Heat API's stack
create/update methods need to start feeding back instructions on where
and how to upload data.

But the simple answer is, right now, you just have to create images
outside of stacks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-10 Thread Sukhdev Kapur
Hey Armando,

Too bad we can't pick on you anymore :-)

On a serious note, thanks for the leadership you brought to the Neutron
team and the project. Your contributions will always be appreciated.

Looking forward to continuing to work with you.

regards..
-Sukhdev


On Mon, Jan 9, 2017 at 6:11 AM, Armando M.  wrote:

> Hi neutrinos,
>
> The PTL nomination week is fast approaching [0], and as you might have
> guessed by the subject of this email, I am not planning to run for Pike. If
> I look back at [1], I would like to think that I was able to exercise the
> influence on the goals I set out with my first self-nomination [2].
>
> That said, when it comes to a dynamic project like neutron one can never
> claim to be *done done*, and for this reason I will continue to be part of
> the neutron core team, and help the future PTL drive the next stage of the
> project's journey.
>
> I must admit, I don't write this email lightly; however, I feel that it is
> now the right moment for me to step down, and give someone else the
> opportunity to grow in the amazing role of neutron PTL! I have certainly
> loved every minute of it!
>
> Cheers,
> Armando
>
> [0] https://releases.openstack.org/ocata/schedule.html
> [1] https://review.openstack.org/#/q/project:openstack/election+owner:armando-migliaccio
> [2] https://review.openstack.org/#/c/223764/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Nova]Making gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial non voting

2017-01-10 Thread Matt Riedemann

On 1/10/2017 10:02 AM, Jordan Pittier wrote:

Hi,
I don't know if you've noticed but
the gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial job has a
high rate of false negatives. I've queried Gerrit and analysed all the
"Verified -2" messages left by Jenkins (i.e. gate failures) for the last
30 days (the script is here [1]).

On project openstack/nova: For the last 58 times where
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial ran AND
jenkins left a 'Verified -2' message, the job failed 48 times and
succeeded 10 times.

On project openstack/tempest: For the last 25 times where
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial ran AND
jenkins left a 'Verified -2' message, the job failed 14 times and
succeeded 11 times.

In other words, when there's a gate failure,
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial is the main
culprit, by a significant margin.

I am a Tempest core reviewer, and this bugs me because it slows down the
development of the project I care for, for reasons that I don't really
understand. I am going to propose a change to make this job non-voting
on openstack/tempest.

Jordan

[1]
https://github.com/JordanP/openstack-snippets/blob/master/analyse-gate-failures/analyse_gate_failures.py
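The failure-rate arithmetic behind those numbers is simple; a minimal stand-in for the linked script (which actually queries Gerrit's REST API) looks like this, with illustrative result tuples rather than real Gerrit payloads:

```python
from collections import Counter

def failure_rate(results, job):
    """results: iterable of (job_name, status) pairs extracted from
    Jenkins 'Verified -2' comments; status is 'SUCCESS' or 'FAILURE'."""
    counts = Counter(status for name, status in results if name == job)
    total = counts["SUCCESS"] + counts["FAILURE"]
    return counts["FAILURE"] / total if total else 0.0

# The openstack/nova numbers from above: 48 failures out of 58 runs
JOB = "gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial"
runs = [(JOB, "FAILURE")] * 48 + [(JOB, "SUCCESS")] * 10
print(round(failure_rate(runs, JOB), 3))  # 0.828
```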





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The ceph job has had a high failure rate the last month or more. It's 
been very whack-a-mole from what I've seen when I'm digging into issues. 
There are still some open unresolved bugs being tracked against that job 
in the e-r status page:


http://status.openstack.org/elastic-recheck/index.html

We've fixed a few issues already (a device-not-found race on volume detach 
in nova was one, and some cinder capacity filter issues were another), but 
what's out there now is still an issue, and I'm not aware of a ton of 
focus on fixing those. jbernard probably knows the latest, but unless 
there are good fixes just waiting for review, then I'm probably OK with 
making it non-voting too.


The most recent bug I reported against that job was due to the c-vol 
service completely dropping out in that job for some reason and then the 
cinder scheduler couldn't build any volumes. Very weird.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit downtime on Thursday 2017-01-12 at 20:00 UTC

2017-01-10 Thread Ian Wienand
Hi everyone,

On Thursday, January 12th from approximately 20:00 through 20:30 UTC
Gerrit will be unavailable while we complete project renames.

Currently, we plan on renaming the following projects:

 Nomad -> Cyborg
  - openstack/nomad -> openstack/cyborg

 Nimble -> Mogan 
  - openstack/nimble -> openstack/mogan
  - openstack/python-nimbleclient -> openstack/python-moganclient
  - openstack/nimble-specs -> openstack/mogan-specs

Existing reviews, project watches, etc, for these projects will all be
carried over.

This list is subject to change. If you need a rename, please be sure
to get your project-config change in soon so we can review it and add
it to 
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames

If you have any questions about the maintenance, please reply here or
contact us in #openstack-infra on freenode.

-i 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [placement] Which service is using port 8778?

2017-01-10 Thread Mohammed Naser
We use virtual hosts: HAProxy runs on our VIP on port 80 and port 443 (SSL),
with keepalived to make sure it's always running, and we use `use_backend`
rules to send requests to the appropriate backend. More information here:

http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/

It makes our catalog nice and neat: we have a <service>-<region>.vexxhost.net
internal naming convention, so the catalog looks clean and API calls don't get
blocked by firewalls (the strange ports might be blocked on some customer-side
firewalls).
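For illustration, the naming convention amounts to something like the following hypothetical helper (not part of any actual tooling; the keystone auth endpoints in the catalog below are the one exception to the pattern):

```python
# Hypothetical helper showing a "<service>-<region>.vexxhost.net"
# endpoint naming convention; names here are for illustration only.

def endpoint_url(service, region, path=""):
    return f"https://{service}-{region}.vexxhost.net{path}"

print(endpoint_url("block-storage", "ca-ymq-1", "/v2/%(tenant_id)s"))
# https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s
```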

+----------------------------------+----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------+
| ID                               | Region   | Service Name | Service Type    | Enabled | Interface | URL                                                              |
+----------------------------------+----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------+
| 01fdd8e07ca74c9daf80a8b66dcc8bf6 | ca-ymq-1 | cinderv2     | volumev2        | True    | internal  | https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s     |
| 09b4a971659643528875f70d93ef6846 | ca-ymq-1 | manila       | share           | True    | internal  | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 203fd4e466b44569aa9ab8c78ef55bad | ca-ymq-1 | heat         | orchestration   | True    | admin     | https://orchestration-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s     |
| 20b24181722b49a3983d17d42147a22c | ca-ymq-1 | swift        | object-store    | True    | admin     | https://object-storage-ca-ymq-1.vexxhost.net/v1/$(tenant_id)s    |
| 2f582f99db974766af7548dda56c3b50 | ca-ymq-1 | nova         | compute         | True    | internal  | https://compute-ca-ymq-1.vexxhost.net/v2/$(tenant_id)s           |
| 37860b492dd947daa738f461b9084d2a | ca-ymq-1 | neutron      | network         | True    | admin     | https://network-ca-ymq-1.vexxhost.net                            |
| 4d38fa91197e4712a2f2d3f89fcd7dad | ca-ymq-1 | nova         | compute         | True    | public    | https://compute-ca-ymq-1.vexxhost.net/v2/$(tenant_id)s           |
| 58894a7156b848d3baa0382ed465f3c2 | ca-ymq-1 | manilav2     | sharev2         | True    | internal  | https://file-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s      |
| 5ebc8fa90c3c46d69d3fa8a03688e452 | ca-ymq-1 | manila       | share           | True    | public    | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 769a4de22d864c3bb2beefe775e3cb9f | ca-ymq-1 | manila       | share           | True    | admin     | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 79fa33ff42ec45118ae8b36789fcb8ae | ca-ymq-1 | swift        | object-store    | True    | public    | https://object-storage-ca-ymq-1.vexxhost.net/v1/$(tenant_id)s    |
| 7a095734e4984cc7b8ac581aa6131f23 | ca-ymq-1 | neutron      | network         | True    | public    | https://network-ca-ymq-1.vexxhost.net                            |
| 7f8b519dfb494cef811b164f5eed0360 | ca-ymq-1 | sahara       | data-processing | True    | internal  | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s |
| 8842c03d2c51449ebf9ff36778cf17c1 | ca-ymq-1 | glance       | image           | True    | public    | https://image-ca-ymq-1.vexxhost.net                              |
| 8df18f47fcdc4c348d521d4724a5b7ac | ca-ymq-1 | keystone     | identity        | True    | admin     | https://identity-ca-ymq-1.vexxhost.net/v2.0                      |
| 96357df3d6694477b0ad17fef6091210 | ca-ymq-1 | neutron      | network         | True    | internal  | https://network-ca-ymq-1.vexxhost.net                            |
| a25efaf48347441a8d36ce302f31d527 | ca-ymq-1 | cinderv2     | volumev2        | True    | public    | https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s     |
| b073b767f10d44f895d9d14fbc3e3d6b | ca-ymq-1 | swift        | object-store    | True    | internal  | https://object-storage-ca-ymq-1.vexxhost.net/v1/$(tenant_id)s    |
| b132fe7bcf98440f8e72a142df76292d | ca-ymq-1 | sahara       | data-processing | True    | admin     | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s |
| b736338e3c94402a9b21b32b3d0bf1e5 | ca-ymq-1 | sahara       | data-processing | True    | public    | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s |
| c0dd9f5f8db248b093d6735b167e1af6 | ca-ymq-1 | keystone     | identity        | True    | public    | https://auth.vexxhost.net/v2.0                                   |
| c8505f07c349413aa7cd61d42337af99 | ca-ymq-1 | keystone     | identity        | True    | internal  | https://auth.vexxhost.net/v2.0                                   |
| da3d087e0c724338ba12c9a1168ef80c |

[openstack-dev] [Congress] Installation/Deployment Docs

2017-01-10 Thread Aimee Ukasick
Hi all. While looking at the installation docs in preparation for
scripting and testing Congress installation
(https://bugs.launchpad.net/congress/+bug/1651928), I noticed there are
installation instructions in two places:  1) For Users: Congress
Introduction and Installation; and 2) For Operators: Deployment. The
"For Users" section details Devstack as well as Standalone installation.

I would like to rearrange the content: 1) move README.rst/4.1
Devstack-install and 4.3 Debugging unit tests to the For
Developers/Contributing section; 2) move README.rst/4.2 Standalone
install and 4.4 Upgrade to the For Operators/Deployment section. I think
this would make it easier for end users to create an installation
script or validate an existing script.

Any objections or thoughts?

Thanks.

-- 

Aimee Ukasick, AT&T Open Source



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [heat] [ironic] Common sessions at PTG

2017-01-10 Thread Fox, Kevin M
I'll be there all week, so I can attend later sessions.

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Tuesday, January 10, 2017 12:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] [heat] [ironic] Common sessions at PTG

Hey Emilien,

Thanks for starting this, I think it makes a lot of sense for us to
sit in the same room for a few sessions.

So Kolla will have its sessions in the first half of the week, so
unfortunately there is no overlap, and hence I don't expect the Kolla
community to be well represented. I don't know yet if I personally will
stay that long, but maybe at least extend it to Wednesday? If you guys
could schedule the sessions you want us in on Wednesday, I think it
would make things easier for us.

Cheers,
Michal

On 10 January 2017 at 11:56, Emilien Macchi  wrote:
> Greetings folks,
>
> I've asked to TripleO folks to propose design sessions for next PTG in
> Atlanta and mention if some of them would need horizontal
> collaboration with some other projects.
> So far it has been the case for Heat, Ironic and Kolla:
> https://etherpad.openstack.org/p/tripleo-ptg-pike
>
> I just want to let you know that we might want to work together on
> finding common slots so our teams can work together on the topics.
> Please let us know if you have some schedule constraints so far. in
> TripleO, we plan to have sessions from Wednesday to Friday included
> (probably with less people on Friday, who would be travelling).
>
> Thanks for your collaboration!
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Which service is using port 8778?

2017-01-10 Thread Emilien Macchi
On Tue, Jan 10, 2017 at 6:00 AM, Andy McCrae  wrote:
> Sorry to resurrect a few weeks old thread, but I had a few questions.
>
>>
>> Yes, we should stop with the magic ports. Part of the reason of
>> switching over to apache was to alleviate all of that.
>>
>> -Sean
>
>
> Is this for devstack specifically?
> I can see the motivation for Devstack, since it reduces the concern for
> managing port allocations.
>
> Is the idea that we move away from ports and everything is on 80 with a
> VHost to differentiate between services/endpoints?
>
> It seems to me that it would still be good to have a "designated" (and
> unique - or as unique as possible at least within OpenStack) port for
> services. We may not have all services on the same hosts, for example, using
> a single VIP for load balancing. The issue then is that it becomes hard to
> differentiate the LB pool based on the request.
> I.e. how would I differentiate between Horizon requests and requests for any
> other service on port 80? The VIP is the same, but the backends may be
> completely different (so all requests aren't handled by the same Apache
> server).

Right, it causes conflicts when running architectures with HAProxy and
the APIs co-located.
In the case of HAProxy you might need to run ACLs, but that sounds like
adding a layer of complexity to current deployments that might not be
needed in some cases yet.

In TripleO, we decided to pick a port (8778) and deploy Placement API
on this port, so it's consistent with existing services already
deployed.

Regarding Sean's comment about switching to Apache, I agree it
simplifies a lot of things, but I don't remember us deciding to pick
Apache because of the magic-port issue. Though I do remember it was
also for the SSL configuration, which would be standard across all
services.

Any feedback on how our operators do this would be very welcome
(adding the operators mailing list), so we can make sure we're taking
the most realistic approach here.
So the question would be:

When deploying OpenStack APIs under WSGI, do you pick magic port (ex:
8774 for Nova Compute API) or do you use 80/443 + vhost path?

Thanks,

> Assuming, in that case, having a designated port is the only way (and if it
> isn't I'd love to discuss alternate, and simpler, methods of achieving this)
> it then seems that assigning a dedicated port for services in Devstack would
> make sense - it would ensure that there is no overlap, and in a way the
> error received when the ports overlapped is a genuine issue that would need
> to be addressed. Although if that is the case, perhaps there is a better way
> to manage that.
>
> Essentially it seems better to handle port conflicts (within the OpenStack
> ecosystem, at least) at source rather than pass that on to the deployer to
> randomly pick ports and avoid conflicts.
>
> Andy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Policy for deprecating metric names

2017-01-10 Thread Mario Villaplana
Hi all,

There was consensus at this week's IRC meeting that the best thing to
do here is simply to require a release note when metric names change.
[0]

I have a patch up to add this to the existing metrics documentation,
so feel free to voice any concerns there:
https://review.openstack.org/418589

Thanks,
Mario

[0] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2017-01-09.log.html#t2017-01-09T17:26:33

On Tue, Jan 3, 2017 at 5:11 PM, Mario Villaplana
 wrote:
> Hi all,
>
> Recently, Ruby found a patch that modifies the name of a metric
> emitted by ironic. [0] After some IRC discussion, we realized that
> there is no real deprecation policy regarding changing metric names.
> [1]
>
> For anyone not familiar with this feature, ironic has the capability
> to emit various metrics to supported backends using metrics support in
> ironic-lib. [2] Currently, the only supported backend is statsd. Most
> (all?) metrics currently in ironic are implemented as function
> decorators, like the following:
>
> @METRICS.timer('my.module.MyClass.my_method')
> def my_method(self):
> ...
>
> This will send a time series datapoint to statsd which will be stored
> with the current epoch timestamp, the name of the metric
> ('my.module.MyClass.my_method'), and the amount of time the method
> took to finish.
>
> The primary use case for this that I'm familiar with is generating
> graphs with Graphite/Grafana to get a granular look at performance
> over time. With Graphite/Grafana, operators can also create graphs
> with wildcard matches. For example, a graph that matches on
> ironic.conductor.*.* will contain metrics for all methods emitted by
> modules in the ironic/conductor subdirectory. Each metric will appear
> separately as a line on the same graph by default, if I remember
> correctly.
>
> I did some limited research into the way other OpenStack projects emit
> metrics to statsd. I was only able to find one example in a short
> amount of time - Swift. [3] Swift seems to document each metric
> emitted with a short description of what the metric represents, but it
> doesn't guarantee anything at all about the naming or semantics of
> metrics.
>
> I'd like to solicit the opinion of the community, especially operators
> who use this feature, for what a good deprecation policy for metric
> names should be.
>
> As a former operator who used a downstream implementation very similar
> to the upstream version in production, my recommendation is as
> follows:
>
> 1. Document the metric name as well as what the metric represents in
> the deploy docs, for each metric [2]
> 2. Guarantee to operators that the docs will be up to date, but don't
> guarantee that the metric name won't change without warning between
> deploys
> 3. Maybe document best practices for using metrics in a stable manner.
> Things like using wildcards instead of keying off of specific metric
> names, checking documentation for critical changes before deploys,
> etc.
>
> My reasoning for this is that it's hard to guarantee that a function
> name won't change (or be completely removed) in between releases.
> Since operators can use wildcards to match on metrics, it won't take
> too long to notice any changes, even without staying up-to-date on the
> documentation. One alternative that was suggested previously - keeping
> both prior and new metric names for some deprecation period - won't
> solve for the case where the function is removed. Additionally, that
> would unnecessarily increase the amount of storage required for
> metrics. In my experience, metric storage can be quite expensive.
> There's a calculator for storage requirements for Whisper, one of the
> storage backends used with Graphite, that can illustrate this. [4]
>
> I haven't yet scoped the documentation work, but I'm curious about
> feedback on this proposal or alternative suggestions from people who
> use the feature.
>
> Thank you!
>
> Mario
>
> [0] https://review.openstack.org/#/c/412339
> [1] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2016-12-19.log.html#t2016-12-19T16:16:11
> [2] http://docs.openstack.org/developer/ironic/deploy/metrics.html
> [3] 
> http://docs.openstack.org/developer/swift/admin_guide.html#reporting-metrics-to-statsd
> [4] http://m30m.github.io/whisper-calculator/
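For readers unfamiliar with the pattern, here is a self-contained sketch of such a timer decorator, with an in-memory backend standing in for statsd, plus the kind of wildcard matching operators use in Graphite/Grafana. The class and method names are illustrative, not the real ironic-lib API; also note that fnmatch wildcards cross dots, unlike Graphite's.

```python
import fnmatch
import time
from collections import defaultdict

class MetricLogger:
    """Toy stand-in for a statsd-backed metric logger."""

    def __init__(self):
        self.timings = defaultdict(list)  # metric name -> durations in ms

    def timer(self, name):
        def decorator(func):
            def wrapper(*args, **kwargs):
                start = time.monotonic()
                try:
                    return func(*args, **kwargs)
                finally:
                    elapsed_ms = (time.monotonic() - start) * 1000
                    self.timings[name].append(elapsed_ms)
            return wrapper
        return decorator

    def match(self, pattern):
        # approximate Graphite-style wildcard matching over metric names
        return sorted(n for n in self.timings if fnmatch.fnmatch(n, pattern))

METRICS = MetricLogger()

@METRICS.timer("ironic.conductor.manager.do_sync")
def do_sync():
    pass

do_sync()
print(METRICS.match("ironic.conductor.*.*"))
# ['ironic.conductor.manager.do_sync']
```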

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2017-01-10 15:28:04 -0500:
> On 10/01/17 14:17, Tim Bell wrote:
> >
> >> On 10 Jan 2017, at 17:41, Zane Bitter wrote:
> >>
> >> On 10/01/17 05:25, Flavio Percoco wrote:
> >>>
> >>>
>  I'd recommend Heat to not use locations, as that will require deployers
>  to either enable them for everyone or have a dedicated glance-api node
>  for Heat.
>  If we don't use location, do we have other options for the user? What
>  should the user do before creating a glance image using v2? Download the
>  image data? And then pass the image data to the glance API? I really don't
>  think that's a good way.
> 
> >>>
> >>> That *IS* how users create images. There used to be copy-from too (which
> >>> may or
> >>> may not come back).
> >>>
> >>> Heat's use case is different and I understand that but as I said in my
> >>> other
> >>> email, I do not think sticking to v1 is the right approach. I'd rather
> >>> move on
> >>> with a deprecation path or compatibility layer.
> >>
> >> "Backwards-compatibility" is a wide-ranging topic, so let's break this
> >> down into 3 more specific questions:
> >>
> >> 1) What is an interface that we could support with the v2 API?
> >>
> >> - If copy-from is not a thing then it sounds to me like the answer is
> >> "none"? We are not ever going to support uploading a multi-GB image
> >> file through Heat and from there to Glance.
> >> - We could have an Image resource that creates a Glance image from a
> >> volume. It's debatable how useful this would be in an orchestration
> >> setting (i.e. in most cases this would have to be part of a larger
> >> workflow anyway), but there are some conceivable uses I guess. Given
> >> that this is completely disjoint from what the current resource type
> >> does, we'd make it easier on everyone if we just gave it a new name.
> >>
> >> 2) How can we avoid breaking existing stacks that use Image resources?
> >>
> >> - If we're not replacing it with anything, then we can just mark the
> >> resource type as first Deprecated, and then Hidden and switch the back
> >> end to use the v2 API for things like deleting. As long as nobody
> >> attempts to replace the image then the rest of the stack should
> >> continue to work fine.
> >>
> >
> > Can we only deprecate the resources using the location function but
> > maintain backwards compatibility if the location function is not used?
> 
> location is a required property:
> 
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image
> 
> The resource type literally does not do anything else but expose a Heat 
> interface to a feature of Glance that no longer exists in v2. That's 
> fundamentally why "add v2 support" has been stalled for so long ;)
> 

I think most of this has been beating around the bush, and the statement
above is the heart of the issue.

The functionality was restricted and mostly removed from Glance for a
reason. Heat users will have to face that reality just like users of
other orchestration systems have to.

If a cloud has v1.. great.. take a location.. use it. If they have v2..
location explodes. If you want to get content into that image, well,
other systems have to deal with this too. Ansible's os_image will upload
a local file to glance, for instance. Terraform doesn't even include
image support.

So the way to go is likely to just make location optional, and start
to use v2 when the catalog says to. From there, Heat can probably help
make the v2 API better, and perhaps add support to the Heat API to
tell the user where they can upload blobs of data for Heat to then feed
into Glance.
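A sketch of the dispatch being proposed, purely illustrative and not Heat's actual code: honor location on v1, and on v2 create an empty image record and expect the data to be uploaded out of band.

```python
# Illustrative sketch only: choose an image-creation strategy based on
# the Glance API version advertised in the service catalog.

def image_create_plan(glance_version, properties):
    if "location" in properties:
        if glance_version >= 2:
            raise ValueError("location is not supported by the Glance v2 "
                             "API; upload the image data separately")
        return {"action": "v1-create-from-location",
                "location": properties["location"]}
    # No location: create an empty image record; the user uploads the
    # data themselves (the follow-up idea of the API feeding back
    # upload instructions would hook in here).
    return {"action": "v2-create", "upload_required": True}

print(image_create_plan(2, {"name": "fedora"}))
# {'action': 'v2-create', 'upload_required': True}
```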

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Thomas Herve
On Tue, Jan 10, 2017 at 9:28 PM, Zane Bitter  wrote:
> On 10/01/17 14:17, Tim Bell wrote:
>>
>>
>>> On 10 Jan 2017, at 17:41, Zane Bitter wrote:
>>>
>>> On 10/01/17 05:25, Flavio Percoco wrote:



> I'd recommend Heat to not use locations, as that will require deployers
> to either enable them for everyone or have a dedicated glance-api node
> for Heat.
> If we don't use location, do we have other options for the user? What
> should the user do before creating a glance image using v2? Download the
> image data? And then pass the image data to the glance API? I really don't
> think that's a good way.
>

 That *IS* how users create images. There used to be copy-from too (which
 may or
 may not come back).

 Heat's use case is different and I understand that but as I said in my
 other
 email, I do not think sticking to v1 is the right approach. I'd rather
 move on
 with a deprecation path or compatibility layer.
>>>
>>>
>>> "Backwards-compatibility" is a wide-ranging topic, so let's break this
>>> down into 3 more specific questions:
>>>
>>> 1) What is an interface that we could support with the v2 API?
>>>
>>> - If copy-from is not a thing then it sounds to me like the answer is
>>> "none"? We are not ever going to support uploading a multi-GB image
>>> file through Heat and from there to Glance.
>>> - We could have an Image resource that creates a Glance image from a
>>> volume. It's debatable how useful this would be in an orchestration
>>> setting (i.e. in most cases this would have to be part of a larger
>>> workflow anyway), but there are some conceivable uses I guess. Given
>>> that this is completely disjoint from what the current resource type
>>> does, we'd make it easier on everyone if we just gave it a new name.
>>>
>>> 2) How can we avoid breaking existing stacks that use Image resources?
>>>
>>> - If we're not replacing it with anything, then we can just mark the
>>> resource type as first Deprecated, and then Hidden and switch the back
>>> end to use the v2 API for things like deleting. As long as nobody
>>> attempts to replace the image then the rest of the stack should
>>> continue to work fine.
>>>
>>
>> Can we only deprecate the resources using the location function but
>> maintain backwards compatibility if the location function is not used?
>
>
> location is a required property:
>
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image
>
> The resource type literally does not do anything else but expose a Heat
> interface to a feature of Glance that no longer exists in v2. That's
> fundamentally why "add v2 support" has been stalled for so long ;)

Throwing stuff against the wall, but could we solve the issue in
heatclient? If we change it to handle the location property, upload
the image from the client, and pass the id to Heat, it could be
somewhat transparent to the user. We'd need to do it in Horizon
though. For the heatclient as a library it's not perfect, but it may
be good enough.
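As a rough illustration of that client-side approach, the client could pre-process a template before handing it to Heat, uploading any image data itself and substituting the resulting ID. This is only a sketch under assumed names: `preprocess_image_resources` and the `upload_fn` callback are hypothetical, not part of python-heatclient.

```python
# Sketch of the client-side idea: before sending the template to Heat,
# upload any OS::Glance::Image data via Glance v2 and rewrite the
# resource to reference the resulting image ID. Hypothetical helper,
# not actual heatclient code.

def preprocess_image_resources(template, upload_fn):
    """Replace location-based image resources with pre-uploaded IDs.

    `upload_fn(name, location)` stands in for a Glance v2 client call
    (image create + data upload) and must return the new image's ID.
    """
    for name, res in template.get('resources', {}).items():
        if res.get('type') != 'OS::Glance::Image':
            continue
        props = res.setdefault('properties', {})
        location = props.pop('location', None)
        if location is not None:
            # The image is created out-of-band; the template keeps only
            # a reference to the resulting ID.
            props['id'] = upload_fn(name, location)
    return template
```

Horizon would need the same pre-processing step, which is the main drawback Thomas notes: the behaviour lives in each client rather than in Heat itself.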

-- 
Thomas



Re: [openstack-dev] [kolla] [heat] [ironic] Common sessions at PTG

2017-01-10 Thread Michał Jastrzębski
Hey Emilien,

Thanks for starting this, I think it makes a lot of sense for us to
sit in the same room for a few sessions.

So Kolla will have its sessions in the first half of the week;
unfortunately there is no overlap, hence I don't expect the Kolla
community to be well represented. I don't know yet if I will personally
stay that long, but could we at least extend the overlap to Wednesday? If
you could schedule the sessions you want us in on Wednesday, I think it
would make things easier for us.

Cheers,
Michal

On 10 January 2017 at 11:56, Emilien Macchi  wrote:
> Greetings folks,
>
> I've asked to TripleO folks to propose design sessions for next PTG in
> Atlanta and mention if some of them would need horizontal
> collaboration with some other projects.
> So far it has been the case for Heat, Ironic and Kolla:
> https://etherpad.openstack.org/p/tripleo-ptg-pike
>
> I just want to let you know that we might want to work together on
> finding common slots so our teams can work together on the topics.
> Please let us know if you have any schedule constraints so far. In
> TripleO, we plan to have sessions from Wednesday to Friday inclusive
> (probably with fewer people on Friday, who would be travelling).
>
> Thanks for your collaboration!
> --
> Emilien Macchi
>



Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Zane Bitter

On 10/01/17 14:17, Tim Bell wrote:



On 10 Jan 2017, at 17:41, Zane Bitter wrote:

On 10/01/17 05:25, Flavio Percoco wrote:




I'd recommend that Heat not use locations, as that will require deployers
to either enable them for everyone or have a dedicated glance-api node
for Heat.
If we don't use location, what other options do users have? What
should a user do before creating a glance image using v2? Download the
image data? And then pass the image data to the glance API? I really don't
think that's a good way.



That *IS* how users create images. There used to be copy-from too (which
may or
may not come back).

Heat's use case is different and I understand that but as I said in my
other
email, I do not think sticking to v1 is the right approach. I'd rather
move on
with a deprecation path or compatibility layer.


"Backwards-compatibility" is a wide-ranging topic, so let's break this
down into 3 more specific questions:

1) What is an interface that we could support with the v2 API?

- If copy-from is not a thing then it sounds to me like the answer is
"none"? We are not ever going to support uploading a multi-GB image
file through Heat and from there to Glance.
- We could have an Image resource that creates a Glance image from a
volume. It's debatable how useful this would be in an orchestration
setting (i.e. in most cases this would have to be part of a larger
workflow anyway), but there are some conceivable uses I guess. Given
that this is completely disjoint from what the current resource type
does, we'd make it easier on everyone if we just gave it a new name.

2) How can we avoid breaking existing stacks that use Image resources?

- If we're not replacing it with anything, then we can just mark the
resource type as first Deprecated, and then Hidden and switch the back
end to use the v2 API for things like deleting. As long as nobody
attempts to replace the image then the rest of the stack should
continue to work fine.



Can we only deprecate the resources using the location function but
maintain backwards compatibility if the location function is not used?


location is a required property:

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Glance::Image

The resource type literally does not do anything else but expose a Heat 
interface to a feature of Glance that no longer exists in v2. That's 
fundamentally why "add v2 support" has been stalled for so long ;)



3) How do we handle existing templates in future?

- Again, if we're not replacing it with anything, the -> Deprecated ->
Hidden process is sufficient. (In theory "Hidden" should mean you
can't create new stacks containing that resource type any more, only
continue using existing stacks that contained it. In practice, we
didn't actually implement that and it just gets hidden from the
documentation. Obviously trying to create a new one using the location
field once only the v2 API is available will result in an error.)
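The Deprecated -> Hidden behaviour described above can be modelled in a few lines. This is a toy registry for illustration only, not Heat's actual implementation; the status names merely echo Heat's support-status concept.

```python
# Toy model of the Deprecated -> Hidden resource lifecycle: hidden
# types disappear from documentation, but existing stacks can still
# load them. Illustrative only, not Heat code.

SUPPORTED, DEPRECATED, HIDDEN = 'SUPPORTED', 'DEPRECATED', 'HIDDEN'


class ResourceRegistry:
    def __init__(self):
        self._types = {}

    def register(self, name, status=SUPPORTED):
        self._types[name] = status

    def documented_types(self):
        # Hidden types drop out of the documentation...
        return sorted(n for n, s in self._types.items() if s != HIDDEN)

    def is_loadable(self, name):
        # ...but the plug-in stays registered, so existing stacks that
        # contain the resource keep working.
        return name in self._types
```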



My worry is that portable heat templates like the Community App Catalog
( http://apps.openstack.org/#tab=heat-templates) would become much more
complex if we have to produce different resources for Glance V1 and V2
configurations. If, however, we are able to say that the following
definitions of image resources are compatible across the two
configurations, this can be more supportive of a catalog approach and
improve template portability.


Are any of those templates actually using OS::Glance::Image resources 
though? (I'd check myself but I can't find the source repo - 
openstack/app-catalog appears to contain just the catalog and not any of 
the apps?)


cheers,
Zane.


Tim



If we have a different answer to (1) then that could change the
answers to (2) and (3).

cheers,
Zane.











[openstack-dev] [kolla] [heat] [ironic] Common sessions at PTG

2017-01-10 Thread Emilien Macchi
Greetings folks,

I've asked to TripleO folks to propose design sessions for next PTG in
Atlanta and mention if some of them would need horizontal
collaboration with some other projects.
So far it has been the case for Heat, Ironic and Kolla:
https://etherpad.openstack.org/p/tripleo-ptg-pike

I just want to let you know that we might want to work together on
finding common slots so our teams can work together on the topics.
Please let us know if you have any schedule constraints so far. In
TripleO, we plan to have sessions from Wednesday to Friday inclusive
(probably with fewer people on Friday, who would be travelling).

Thanks for your collaboration!
-- 
Emilien Macchi



Re: [openstack-dev] [cinder] generic volume groups and consistency groups

2017-01-10 Thread yang, xing
Hi,

Just want to give an update on the work to migrate consistency groups to
generic volume groups.  The migration script and the code in the CG APIs that
handles migrating CGs to groups are all merged.  This has an impact on drivers
that already support consistency groups.

A group type named default_cgsnapshot_type will be created by the db migration
script once you've upgraded to the latest code.  The following command needs to
be run manually to migrate and copy data from the consistencygroups table to
groups and from cgsnapshots to group_snapshots.  Migrated consistencygroups and
cgsnapshots will be removed from the database:

cinder-manage db online_data_migrations
--max_count 
--ignore_state

One noticeable change is that the Create CG API will create an entry in the 
groups table, not in the consistencygroups table.  The default_cgsnapshot_type 
is reserved for migrating CGs.  Groups with default_cgsnapshot_type can only be 
operated by using the CG APIs.  After migration is complete and all CG tables 
are removed, we will allow default_cgsnapshot_type to be used by group APIs.

For driver maintainers who want to know more on how to add CG capability to 
generic volume groups, please read the following doc:

https://github.com/openstack/cinder/blob/master/doc/source/devref/groups.rst

Please remember the deadline for drivers already supporting CG to submit a 
patch to add CG capability to groups is Pike-1.  There’s an example on how to 
implement that in the above doc.
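In outline, the pattern that doc describes is: implement the generic group interfaces, handle the group when its type marks it as a CG, and raise NotImplementedError otherwise so Cinder's default implementation is used. The sketch below is illustrative only: the `is_cg_group` helper stands in for cinder's actual type-checking utility, whose name may differ.

```python
# Sketch of adding CG capability to the generic group interface: the
# driver only acts on groups whose type marks them as consistency
# groups, and defers everything else to the generic implementation by
# raising NotImplementedError. Illustrative, not actual cinder code.

class FakeGroup:
    """Stand-in for Cinder's Group object; only the type name matters here."""
    def __init__(self, group_type_name):
        self.group_type_name = group_type_name


def is_cg_group(group):
    # Cinder exposes a similar check against the reserved
    # 'default_cgsnapshot_type'; this stand-in just compares names.
    return group.group_type_name == 'default_cgsnapshot_type'


class ExampleDriver:
    def create_group(self, context, group):
        if not is_cg_group(group):
            # Raising NotImplementedError tells the volume manager to
            # use its generic (non-CG) implementation for this group.
            raise NotImplementedError()
        return self._create_consistencygroup(context, group)

    def _create_consistencygroup(self, context, group):
        # A real driver would create the CG on the backend here.
        return {'status': 'available'}
```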

Thanks,
Xing



From: yang, xing
Sent: Thursday, November 3, 2016 11:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] generic volume groups and consistency groups

Hi everyone,

Generic volume groups support was added in Cinder in Newton.  We are planning 
to migrate consistency groups to generic volume groups.  I have submitted a dev 
doc patch to explain how to add consistency groups support in generic volume 
groups in a driver.  Let me know if you have any questions or provide comments 
on the patch.

https://review.openstack.org/#/c/393570/

As discussed at the summit in Barcelona, drivers already supporting CG should 
add CG support in generic volume groups by Pike-1.  Drivers planning to 
introduce CG support should implement the driver interfaces for generic volume 
groups instead.  Drivers wanting generic volume groups but not CG do not need 
code changes because the default implementation should work for every driver.  
Please see details in the above patch.

Thanks,
Xing

IRC: xyang or xyang1



From: yang, xing [xing.y...@dell.com]
Sent: Tuesday, November 1, 2016 10:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Jason Dillaman
Subject: Re: [openstack-dev] [cinder] consistency groups in ceph

Hi Victor,

Please see my answers inline below.

In Newton, we added support for Generic Volume Groups.  See the doc below.  CGs
will be migrated to Generic Volume Groups gradually.  Drivers should not
implement CGs any more.  Instead, they can add CG support using the Generic
Volume Group interfaces.  I'm working on a dev doc to explain how to do this
and will send an email to the mailing list when I'm done.  The Generic Volume
Group interface is very similar to the CG interface, except that creating a
Generic Volume Group requires an additional group type parameter.  Using group
types, a CG can be a special type of Generic Volume Group.  Please feel free to
grab me on the Cinder IRC if you have any questions.  My IRC handle is xyang or
xyang1.

http://docs.openstack.org/admin-guide/blockstorage-groups.html

Thanks,
Xing



From: Victor Denisov [vdeni...@mirantis.com]
Sent: Monday, October 31, 2016 11:29 PM
To: openstack-dev@lists.openstack.org
Cc: Jason Dillaman
Subject: [openstack-dev] [cinder] consistency groups in ceph

Hi,

I'm working on consistency groups feature in ceph.
My question is about what kind of behavior Cinder expects from
storage backends.
I'm particularly interested in what happens to consistency groups
snapshots when I remove an image from the group:

Let's imagine I have a consistency group called CG. I have images in
the consistency group:
Im1, Im2, Im3, Im4.
Let's imagine we have snapshots of this consistency group:

CGSnap1
CGSnap2
CGSnap3

I will refer to snapshots of individual images within a consistency group
snapshot as, e.g., CGSnap2Im1: the snapshot of image 1 from consistency
group snapshot 2.

Question 1:
If consistency group CG has 4 images (Im1, Im2, Im3, Im4), can CGSnap1
have more images than that (Im1, Im2, Im3, Im4, Im5)?

Can CGSnap1 have fewer images than that (Im1, Im2, Im3)?

[Xing]  Once a snapshot is taken from a CG, it can no longer be changed.  It is 
a point-in-time copy.  CGSnap1 cannot be modified.

Question 2:
If we remove image2 from the consistency group, does it mean that
snapshots of this 

Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Tim Bell

On 10 Jan 2017, at 17:41, Zane Bitter wrote:

On 10/01/17 05:25, Flavio Percoco wrote:


I'd recommend that Heat not use locations, as that will require deployers
to either enable them for everyone or have a dedicated glance-api node
for Heat.
If we don't use location, what other options do users have? What
should a user do before creating a glance image using v2? Download the
image data? And then pass the image data to the glance API? I really don't
think that's a good way.


That *IS* how users create images. There used to be copy-from too (which
may or
may not come back).

Heat's use case is different and I understand that but as I said in my
other
email, I do not think sticking to v1 is the right approach. I'd rather
move on
with a deprecation path or compatibility layer.

"Backwards-compatibility" is a wide-ranging topic, so let's break this down 
into 3 more specific questions:

1) What is an interface that we could support with the v2 API?

- If copy-from is not a thing then it sounds to me like the answer is "none"? 
We are not ever going to support uploading a multi-GB image file through Heat 
and from there to Glance.
- We could have an Image resource that creates a Glance image from a volume. 
It's debatable how useful this would be in an orchestration setting (i.e. in 
most cases this would have to be part of a larger workflow anyway), but there 
are some conceivable uses I guess. Given that this is completely disjoint from 
what the current resource type does, we'd make it easier on everyone if we just 
gave it a new name.

2) How can we avoid breaking existing stacks that use Image resources?

- If we're not replacing it with anything, then we can just mark the resource 
type as first Deprecated, and then Hidden and switch the back end to use the v2 
API for things like deleting. As long as nobody attempts to replace the image 
then the rest of the stack should continue to work fine.


Can we only deprecate the resources using the location function but maintain 
backwards compatibility if the location function is not used?

3) How do we handle existing templates in future?

- Again, if we're not replacing it with anything, the -> Deprecated -> Hidden 
process is sufficient. (In theory "Hidden" should mean you can't create new 
stacks containing that resource type any more, only continue using existing 
stacks that contained it. In practice, we didn't actually implement that and it 
just gets hidden from the documentation. Obviously trying to create a new one 
using the location field once only the v2 API is available will result in an 
error.)


My worry is that portable heat templates like the Community App Catalog ( 
http://apps.openstack.org/#tab=heat-templates) would become much more complex 
if we have to produce different resources for Glance V1 and V2 configurations. 
If, however, we are able to say that the following definitions of image 
resources are compatible across the two configurations, this can be more 
supportive of a catalog approach and improve template portability.

Tim


If we have a different answer to (1) then that could change the answers to (2) 
and (3).

cheers,
Zane.




Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-10 Thread Michał Jastrzębski
I created a CIVS poll with the options we discussed. Every core member should
have received a link to the poll; if that's not the case, please let me know.

On 5 January 2017 at 19:07, Britt Houser (bhouser) wrote:

> I think you’re giving a great example of my point that we’re not yet at
> the stage where we can say, “Any tool should be able to deploy kolla
> containers”.  Right?
>
>
>
> *From: *Pete Birley 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, January 5, 2017 at 9:06 PM
>
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [tc][kolla] Adding new deliverables
>
>
>
> I'll reply to Britts comments, and then duck out, unless explicitly asked
> back, as I don't want to (totally) railroad this conversation:
>
>
>
> The Kolla containers entry-point is a great example of how the field has
> moved on. While it was initially required, in the Kubernetes world the
> Kolla ABI is actually more of a hindrance than a help, as it makes the
> containers much more of a 'black-box' to use. In the other OpenStack on
> Kubernetes projects I contribute to, and my own independent work, we
> actually just define the entry point to the container directly in the k8s
> manifest and make no use of Kolla's entry point and config mechanisms,
> either running another 'init' container to build and bind-mount the
> configuration (Harbor), or using the Kubernetes ConfigMap object to achieve
> the same result (OpenStack Helm). It would be perfectly possible for Kolla
> Ansible (and indeed Salt) to take a similar approach - meaning that rather
> than maintaining an ABI that works for all platforms, Kolla would be free to
> just ensure that the required binaries were present in images.
>
>
>
> I agree that this cannot happen overnight, but think that when appropriate
> we should take stock of where we are and how to plot a course that lets all
> of our projects flourish without competing for resources, or being so
> entwined that we become technically paralyzed and overloaded.
>
> Sorry, Sam and Michal! You can have your thread back now :)
>
>
>
> On Fri, Jan 6, 2017 at 1:17 AM, Britt Houser (bhouser) wrote:
>
> I think both Pete and Steve make great points, and it should be our long
> term vision.  However, I lean more with Michael that we should make that a
> separate discussion, and it's probably better done further down the road.
> Yes, Kolla containers have come a long way, and the ABI has been stable for
> a while, but the vast majority of that "a while" was with a single
> deployment tool: ansible.  Now we have kolla-k8s and kolla-salt.  Neither
> one is yet as fully featured as ansible, which to me means I don't think we
> can say for sure that the ABI won't need to change as we try to support many
> deployment tools.  (Someone remind me, didn't kolla-mesos change the ABI?)
> Anyway, the point is I don't think we're at a point of maturity to be
> certain the ABI won't need changing.  When we have 2-3 deployment tools
> with enough feature parity to say, "Any tool should be able to deploy kolla
> containers", then I think it makes sense to have that discussion.  I just
> don't think we're there yet.  And until that point, changes to the ABI will
> be quite painful if each project is outside of the kolla umbrella, IMHO.
>
>
>
> Thx,
>
> britt
>
>
>
> *From: *Pete Birley 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, January 5, 2017 at 6:47 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [tc][kolla] Adding new deliverables
>
>
>
> Also coming from the perspective of a Kolla-Kubernetes contributor, I am
> worried about the scope that Kolla is extending itself to.
>
>
>
> Moving from a single repo to multiple repos has made the situation much
> better, but by operating under a single umbrella I feel that we may
> potentially be significantly limiting the potential of each deliverable.
> Alex Schultz, Steve and Sam raise some good points here.
>
>
>
> The interdependency between the projects is causing issues; the current
> reliance that Kolla-Kubernetes has on Kolla-Ansible is both undesirable and
> unsustainable in my opinion. This is both because it limits the flexibility
> that we have as Kolla-Kubernetes developers, and because it places
> burden and rigidity on Kolla-Ansible. This will ultimately prevent both
> projects from being able to take advantage of the capabilities offered to
> them by the deployment mechanism they use.
>
>
>
> Like Steve, I don't think the addition of Kolla-Salt should affect me, and
> as a result don't feel I should have any say in the project. That said, I'd
> really like to see 

Re: [openstack-dev] [tc][all] Feedback from driver maintainers about future of driver projects

2017-01-10 Thread Scott D'Angelo
Thanks for working on this stevemar.
In the future, could we find a way to send such a survey to a broader
audience? I'm not on a Cinder driver maintainer list, but I work closely
with our driver maintainers and the OpenStack community, so I might be able
to respond more reliably to surveys like this.
thanks,
scottda
scott.dang...@ibm.com

On Tue, Jan 10, 2017 at 12:33 AM, Steve Martinelli wrote:

> In preparation for the next TC meeting, a survey was sent out to driver
> maintainers, here is a summary of the feedback that was gathered.
>
> Major observations
>
> ==
>
> * Are drivers an important part of OpenStack? YES!
>
> * Discoverability of drivers needs to be fixed immediately.
>
> * It is important to have visibility in a central place of the status of
> each driver.
>
> * Both driver developers and high-level people at their companies should
> feel like they're part of something.
>
> * OpenStack should stop treating drivers like second-class citizens. They
> want access to the same resources (publish on docs.o.org, config guides,
> etc).
>
> * The initial wording about what constitutes a project was never intended
> for drivers. Drivers are a part of the project. Driver developers
> contribute to OpenStack by creating drivers.
>
> Discoverability
>
> ===
>
> * Consensus: It is currently all over the place. A common mechanism to
> view all supported drivers is needed.
>
> * Cinder list: http://docs.openstack.org/developer/cinder/drivers.html
>
> * Nova list: http://docs.openstack.org/developer/nova/support-matrix.html
>
> * Stackalytics list: http://stackalytics.openstack.org/report/driverlog
>
> * Opinion: If we intend to use the marketplace (or anywhere on
> openstack.org) to list in-tree and out-of-tree drivers, they should have
> CI results available as a requirement. A driver that fails CI is not just a
> vendor problem, it’s an OpenStack problem, it reflects poorly on OpenStack
> and the project.
>
> * Opinion: What constitutes a supported driver? Why not list all drivers?
>
> * Opinion: Fixing discoverability can be done independently of governance
> changes. We have the option of tabling the governance discussion until we
> get the discoverability properly fixed, and see then if we still need to do
> anything more.
>
> * Opinion: Between giving full access to vertical resources to driver
> teams, and making the marketplace *the* place for learning about OpenStack
> drivers, we would have solved at least the biggest portion of the problem
> we're facing.
>
> Driver projects - official or not?
>
> ==
>
> * Fact: There is desire from some out-of-tree vendors to become ‘official’
> OpenStack projects, and gain the benefits of that (access to horizontal
> teams).
>
> * Opinion: Let drivers projects become official, there should be no 3rd
> party CI requirement, that can be a tag.
>
> * Opinion: Do not allow drivers projects to become official, that doesn’t
> mean they shouldn’t easily be discoverable.
>
> * Opinion: We don't need to open the flood gates of allowing vendors to be
> teams in the OpenStack governance to make the vendors developers happy.
>
> * Fact: This implies being placed under the TC oversight. It is a
> significant move that could have unintended side-effects, it is hard to
> reverse (kicking out teams we accepted is worse than not including them in
> the first place), and our community is divided on the way forward. So we
> need to give that question our full attention and not rush the answer.
>
> * Opinion: Consider https://github.com/openstack/driverlog an official
> OpenStack project to be listed under governance with a PTL, weekly
> meetings, and all that it required to allow the team to be effective in
> their mission of keeping the marketplace a trustworthy resource for
> learning about OpenStack driver ecosystem.
>
> Driver developers
>
> =
>
> * Opinion: A driver developer that ONLY contributes to vendor specific
> driver code should not have the same influence as other OpenStack
> developers, voting for PTL, TC, and ATC status.
>
> * Opinion: PTLs should leverage the extra-atcs option in the governance
> repo
>
> In-tree vs Out-of-tree
>
> ==
>
> * Cinder has in-tree drivers, but also has out-of-tree drivers when their
> CI is not maintained or when minimum feature requirements are not met. They
> are marked as ‘not supported’ and have a single release to get things
> working before being moved out-of-tree.
>
> * Ironic has a single out-of-tree repo: https://github.com/openstack/
> ironic-staging-drivers -- But also in-tree https://github.com/openstack/
> ironic/tree/master/ironic/drivers
>
> * Neutron has all drivers out-of-tree, with project names like:
> ‘networking-cisco’.
>
> * Many opinions on the “stick-based” approach the cinder team took.
>
> * Opinion: The in-tree vs out-of-tree argument is developer focused.
> Out-of-tree drivers 

Re: [openstack-dev] [os-ansible-deployment] Periodic job in infra to test upgrades?

2017-01-10 Thread Sean M. Collins
Andy McCrae wrote:
> The work we can do now, which if you're able to help with would be really
> great, is:
> 
> Plan how we execute the actual job - at the moment the aio is run through a
> script (scripts/gate-check-commit.sh) - the upgrade test would have to hook
> into this (using the SCENARIO=upgrade) var, or we'd need to look into
> changing this method up entirely (role gates all use tox for example).
> 
> If the current method is sufficient we need to set the job up. Once we have
> a working scenario (regardless of time taken for the test), we can look
> into ways we can ensure this gets run, and what options we have.


Excellent. Thanks for all the info. I'm going to start poking around at
the gate-check-commit script and see if I can build up an AIO node, then
do the upgrade.

-- 
Sean M. Collins



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Emilien Macchi
On Wed, Jan 4, 2017 at 3:13 AM, Saravanan KR  wrote:
> Hello,
>
> The aim of this mail is to ease DPDK deployment with TripleO. I
> would like to see whether deriving THT parameters from introspection
> data, given only high-level input, would be feasible.
>
> Let me briefly describe the complexity of certain parameters, which are
> related to DPDK. The following parameters should be configured for a
> well-performing DPDK cluster:
> * NeutronDpdkCoreList (puppet-vswitch)
> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
> * NovaVcpuPinset (puppet-nova)
>
> * NeutronDpdkSocketMemory (puppet-vswitch)
> * NeutronDpdkMemoryChannels (puppet-vswitch)
> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
> * Interface to bind DPDK driver (network config templates)
>
> The complexity of deciding some of these parameters is explained in
> the blog [1]: the CPUs have to be chosen in accordance with the
> NUMA node associated with the interface. We are working on a spec [2] to
> collect the required details from the baremetal nodes via introspection.
> The proposal is to create a mistral workbook and actions
> (tripleo-common) which will take minimal inputs and decide the actual
> values of the parameters based on the introspection data. I have created
> a simple workbook [3] with what I have in mind (not final, only a
> wireframe). The expected output of this workflow is the list
> of inputs for "parameter_defaults", which will be used for the
> deployment. I would like to hear from the experts whether there are any
> drawbacks with this approach, or any better approach.
>
> This workflow will ease the TripleO UI integration of DPDK, as the UI
> (user) only has to choose the interface for DPDK [and, optionally, the
> number of CPUs required for PMD and host]. Of course, the
> introspection should be completed first; with that, it will be easy to
> deploy a DPDK cluster.
>
> There is added complexity if the cluster contains heterogeneous nodes,
> for example a cluster having HP and Dell machines with different CPU
> layouts; we would need to enhance the workflow to take actions based on
> roles/nodes, which brings in a requirement to localize the above
> mentioned variables per role. For now, consider this proposal for a
> homogeneous cluster; if there is value in this, I will work towards
> heterogeneous clusters too.
>
> Please share your thoughts.

Using Mistral workflows for this use-case seems valuable to me. I like
your step-by-step approach and also the fact it will ease TripleO UI
with this proposal.
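As a rough sketch of the derivation such a workflow could perform: pick PMD cores from the NUMA node local to the chosen DPDK NIC, and pin instances to the rest. The introspection field names and the simple two-way CPU split below are assumptions for illustration; the real derivation also has to reserve host CPUs and compute socket memory.

```python
# Illustrative derivation of DPDK parameter_defaults from (assumed)
# introspection data. Not the actual tripleo-common workflow.

def derive_dpdk_params(inspect_data, dpdk_nic, num_pmd_cpus):
    """Derive DPDK-related parameters from NUMA topology data.

    inspect_data is assumed to look like:
      {'nics': {'nic2': {'numa_node': 1}},
       'numa_cpus': {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}}
    """
    # PMD threads must run on CPUs local to the NIC's NUMA node.
    nic_numa = inspect_data['nics'][dpdk_nic]['numa_node']
    local_cpus = sorted(inspect_data['numa_cpus'][nic_numa])
    pmd_cpus = local_cpus[:num_pmd_cpus]

    # In this simplified split, everything else goes to instances.
    all_cpus = sorted(c for cpus in inspect_data['numa_cpus'].values()
                      for c in cpus)
    vcpu_pin = [c for c in all_cpus if c not in pmd_cpus]

    return {
        'NeutronDpdkCoreList': ','.join(map(str, pmd_cpus)),
        'NovaVcpuPinset': ','.join(map(str, vcpu_pin)),
    }
```

A heterogeneous cluster would need this run per role or per node profile, which is exactly the localization problem Saravanan raises.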

> Regards,
> Saravanan KR
>
>
> [1] https://krsacme.github.io/blog/post/dpdk-pmd-cpu-list/
> [2] https://review.openstack.org/#/c/396147/
> [3] https://gist.github.com/krsacme/c5be089d6fa216232d49c85082478419
> [4] 
> https://review.openstack.org/#/c/411797/6/extraconfig/pre_network/host_config_and_reboot.role.j2.yaml
>



-- 
Emilien Macchi



[openstack-dev] [architecture] Return of the Arch-WG meeting -- Two proposals up for discussion

2017-01-10 Thread Clint Byrum
https://wiki.openstack.org/wiki/Meetings/Arch-WG

With the US holidays over and people returning to their IRC clients,
we'll be holding another Architecture Working Group meeting this
Thursday at 2000 UTC. In preparation, I thought I'd put this out there
so people can come armed with information so we can fully discuss the
two proposals we have and begin to move them through the process toward
improving OpenStack.

To be clear, however, anything that can happen here, on openstack-dev,
should. So please do send your thoughts to the [architecture] tag so we
can use the time in the meeting to resolve the deeper conflicts in our
thinking.

http://git.openstack.org/cgit/openstack/arch-wg/tree/proposals/base-services.rst

- The "Base Services" proposal has been accepted as a proposal, and
  needs discussion to organize into work items for assignment. Please
  read the proposal if you are interested. Here is the introduction to
  whet your appetite:

  Components of OpenStack do not run in a vacuum. They leverage
  features present in a number of external services to run. Some of
  those dependencies are local (like a hypervisor on a compute node),
  while some of those are global (like a database). "Base services" are
  those global services that an OpenStack component can assume will be
  present in an "OpenStack" installation, and that they can therefore
  leverage to deliver features. Components of course do not *have to*
  use those, but they can.


https://review.openstack.org/#/c/411527/1/proposals/nova-compute-api.rst

- The "Nova Compute API" proposal got a lot of really amazing comments
  just in the proposal review. Thank you everyone who commented there.
  I expect there are still more opinions to gather, but it appears there
  is broad interest in documenting the situation and improving it, with
  some ideas for a future vision already taking shape.


If either of these interests you at all, I'd invite you to read the
proposal and either start a thread on the mailing list, or propose
patches to add background information into the proposal itself. The
goal here is to provide a plan of action that takes us toward better
understanding and improvement of OpenStack's architecture. To do that,
we definitely need your help.

And if you have something you think we should take a look at in addition
to these two, please propose them to the arch-wg repo and bring them to
our attention at the meeting.

Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [infra] drbdmanage is no more GPL2

2017-01-10 Thread Jeremy Stanley
On 2017-01-10 09:44:06 -0600 (-0600), Sean McGinnis wrote:
[...]
> It doesn't look like much has changed here. There has been one commit
> that only slightly modified the new license: [1]
> 
> IANAL, and I don't want to make assumptions about what can and can't be
> done, so I'm looking to other more informed folks. Do we need to remove this
> from the Jenkins-run CI tests?
> 
> Input would be appreciated.
[...]

Our chosen platform distributors aren't ever going to incorporate
software with such license terms so it's not something I would, from
a CI toolchain perspective, support installing on our test servers.
The only obvious solutions are to stick with testing an older
release upstream (which is only useful for so long) and/or switch to
third-party CI for newer releases (perhaps dropping official support
for the driver entirely if Cinder feels it's warranted).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Ocata upgrade procedure and problems when it's optional in Newton

2017-01-10 Thread Sylvain Bauza


Le 10/01/2017 14:49, Sylvain Bauza a écrit :
> Aloha folks,
> 
> Recently, I was discussing with TripleO folks. Disclaimer, I don't think
> it's only a TripleO related discussion but rather a larger one for all
> our deployers.
> 
> So, the question I was asked was about how to upgrade from Newton to
> Ocata for the Placement API when the deployer is not using yet the
> Placement API for Newton (because it was optional in Newton).
> 
> The quick answer was to say "easy, just upgrade the service and run the
> placement API *before* the scheduler upgrade". That's because we're
> working on a change for the scheduler calling the Placement API instead
> of getting all the compute nodes [1]
> 
> That said, I thought about something else: wait, the Newton compute
> nodes work with the Placement API, cool. But what if the Placement
> API is optional in Newton? Then, the Newton computes stop calling
> the Placement API because of a nice decorator [2] (okay with me)
> 
> Then, imagine the problem for the upgrade : given we don't have
> deployers running the Placement API in Newton, they would need to
> *first* deploy the (Newton or Ocata) Placement service, then SIGHUP all
> the Newton compute nodes to have them reporting the resources (and
> creating the inventories), then wait for some minutes that all the
> inventories are reported, and then upgrade all the services (but the
> compute nodes of course) to Ocata, including the scheduler service.
> 
> The above looks like a different upgrade policy, right?
>  - Either we say you need to run the Newton placement service *before*
> upgrading - and in that case, the Placement service is not optional for
> Newton, right?
>  - Or, we say you need to run the Ocata placement service and then
> restart the compute nodes *before* upgrading the services - and that's a
> very different situation than the current upgrade way.
> 
> For example, I know it's not a Nova stuff, but most of our deployers
> have what they say "controllers" vs. "compute" services, ie. all the
> Nova services but computes running on a single (or more) machine(s). In
> that case, the "controller" upgrade is monotonic and all the services
> are upgraded and restarted at the same stage. If so, that looks
> difficult for those deployers to just be asked to have a very different
> procedure.
> 
> Anyway, I think we need to carefully consider that, and probably find
> some solutions. For example, we could imagine (disclaimer #2, that's
> probably silly solutions, but that's the ones I'm thinking now) :
>  - a DB migration for creating the inventories and allocations before
> upgrading (ie. not asking the computes to register themselves to the
> placement API). That would be terrible because it's a data upgrade, I
> know...
>  - having the scheduler having a backwards compatible behaviour in [1],
> ie. trying to call the Placement API for getting the list of RPs or
> failback to calling all the ComputeNodes if that's not possible. But
> that would mean that the Placement API is still optional for Ocata :/
>  - merging the scheduler calling the Placement API [1] in a point
> release after we deliver Ocata (and still make the Placement API
> mandatory for Ocata) so that we would be sure that all computes are
> reporting their status to the Placement once we restart the scheduler in
> the point release.
> 

FWIW, a possible other solution has been discussed upstream in the
#openstack-nova channel, proposed by Dan Smith: we could remove the
try-once behaviour made in the decorator, backport it to Newton and do a
point release, which would allow the compute nodes to try to reconcile
with the Placement API in a self-healing manner.
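For illustration, the difference between the current try-once behaviour and the proposed self-healing retry can be sketched roughly like this (toy code, not Nova's actual decorator in [2]; all names here are invented):

```python
import functools

# Toy sketch of the "retry instead of try-once" idea: do not latch a
# permanent "placement is unavailable" flag after the first failure;
# instead, skip the current cycle and try again on the next periodic
# report, so computes reconcile whenever the Placement API appears.
def retry_on_placement_failure(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except ConnectionError:
            # Placement not reachable yet: give up for this cycle only.
            return None
    return wrapper

class FakeReporter:
    """Pretend compute-node reporter: placement is down on the first call."""
    def __init__(self):
        self.calls = 0

    @retry_on_placement_failure
    def report(self):
        self.calls += 1
        if self.calls == 1:
            raise ConnectionError("placement API not deployed yet")
        return "inventory reported"

reporter = FakeReporter()
print(reporter.report())  # first cycle: placement down, nothing latched
print(reporter.report())  # next cycle succeeds ("self-heals")
```

The point is only that the failure is per-cycle, not permanent, so no SIGHUP or ordering dance is needed once the Placement service comes up.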

That would mean that deployers would have to upgrade to the latest
Newton point release before upgrading to Ocata, which is I think the
best supported model.

I'll propose a patch for that in my series as a bottom change for [1].

-Sylvain



> 
> Thoughts ?
> -Sylvain
> 
> 
> [1] https://review.openstack.org/#/c/417961/
> 
> [2]
> https://github.com/openstack/nova/blob/180e6340a595ec047c59365465f36fed7a669ec3/nova/scheduler/client/report.py#L40-L67
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Virtual midcycle/sprint - Jan 11/12

2017-01-10 Thread Paul Belanger
On Tue, Jan 10, 2017 at 07:45:28AM -0700, Alex Schultz wrote:
> On Tue, Dec 6, 2016 at 12:32 PM, Alex Schultz  wrote:
> > Hey everyone,
> >
> > We're going to be running a small midcycle/sprint on January 11-12,
> > 2017 in #puppet-openstack on freenode.  We will be reviewing the work
> > items for Ocata and doing some bug triage. Feel free to add additional
> > topics to the etherpad[0].
> >
> > Thanks,
> > -Alex
> >
> > [0] https://etherpad.openstack.org/p/puppet-openstack-ocata-midcycle
> 
> 
> Just a reminder that we'll be doing this tomorrow and Thursday. So if
> you want to help out or have issues and questions, come see us in
> #puppet-openstack on freenode.
> 
> Thanks,
> -Alex
> 
There is also a wiki page dedicated for virtual sprints, if next time you want
to also update that; a dedicated channel too #openstack-sprint.

[1] https://wiki.openstack.org/wiki/VirtualSprints

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Zane Bitter

On 10/01/17 05:25, Flavio Percoco wrote:

I'd recommend Heat to not use locations as that will require deployers
to either enable them for everyone or have a dedicate glance-api node
for Heat.
If we don't use location, do we have other options for users? What
should a user do before creating a glance image using v2? Download the
image data? And then pass the image data to the glance API? I really don't
think that's a good way.



That *IS* how users create images. There used to be copy-from too (which
may or may not come back).

Heat's use case is different and I understand that, but as I said in my
other email, I do not think sticking to v1 is the right approach. I'd
rather move on with a deprecation path or compatibility layer.


"Backwards-compatibility" is a wide-ranging topic, so let's break this 
down into 3 more specific questions:


1) What is an interface that we could support with the v2 API?

- If copy-from is not a thing then it sounds to me like the answer is 
"none"? We are not ever going to support uploading a multi-GB image file 
through Heat and from there to Glance.
- We could have an Image resource that creates a Glance image from a 
volume. It's debatable how useful this would be in an orchestration 
setting (i.e. in most cases this would have to be part of a larger 
workflow anyway), but there are some conceivable uses I guess. Given 
that this is completely disjoint from what the current resource type 
does, we'd make it easier on everyone if we just gave it a new name.


2) How can we avoid breaking existing stacks that use Image resources?

- If we're not replacing it with anything, then we can just mark the 
resource type as first Deprecated, and then Hidden and switch the back 
end to use the v2 API for things like deleting. As long as nobody 
attempts to replace the image then the rest of the stack should continue 
to work fine.


3) How do we handle existing templates in future?

- Again, if we're not replacing it with anything, the -> Deprecated -> 
Hidden process is sufficient. (In theory "Hidden" should mean you can't 
create new stacks containing that resource type any more, only continue 
using existing stacks that contained it. In practice, we didn't actually 
implement that and it just gets hidden from the documentation. Obviously 
trying to create a new one using the location field once only the v2 API 
is available will result in an error.)
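As a side note on (2)/(3): deployers on a v2-only cloud can already effectively hide the resource type themselves today via the resource registry. A minimal environment sketch (this maps the type to the standard OS::Heat::None no-op resource, so templates using it stop managing a real image rather than erroring out):

```yaml
# environment.yaml -- illustrative snippet; disables OS::Glance::Image
# by mapping it to the no-op resource, for clouds without Glance v1.
resource_registry:
  "OS::Glance::Image": "OS::Heat::None"
```

Whether a no-op or a hard error is the better behaviour for such clouds is exactly the kind of policy question the Deprecated -> Hidden process would need to settle.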



If we have a different answer to (1) then that could change the answers 
to (2) and (3).


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ui] FYI, the tripleo-ui package is currently broken

2017-01-10 Thread Julie Pichon
On 9 January 2017 at 13:20, Julie Pichon  wrote:
> On 6 January 2017 at 14:52, Julie Pichon  wrote:
>> Hi folks,
>>
>> Just a heads-up that the DLRN "current"/dev package for the Tripleo UI
>> is broken in Ocata and will cause the UI to only show a blank page,
>> until we resolve some dependencies issues within the -deps package.
>>
>> If I understand correctly, we ended up with an incomplete package
>> because we were silently ignoring errors during builds [1] - many
>> thanks to Honza for the debugging work, and the patch!!
>
> The good news: the 'stop on error' patch merged, meaning we will catch
> such errors early in the future, and won't be able to merge patches
> until the dependencies are properly sorted out. A backport was also
> proposed at [1].
>
> The bad news: because currently we're already in a "missing
> dependencies" state due to patches that merged with silent errors and
> the older -deps package, no patch can merge on tripleo-ui until the
> -deps package gets updated. I'm not sure about the ETA for the new
> -deps package but the good folks on #rdo are looking into it (see
> also [2]).

Hi all,

The -deps package has been sorted out, so the CI jobs for tripleo-ui
are passing again. Feel free to recheck away! There is a patch going
through the gate as well [1], once that's merged I expect a new
tripleo-ui package will be available at [2] and updating your local dev
repos to the latest dlrn to get it should be sufficient to have a
working UI again.

Thank you for your patience, and many many thanks to apevec, honza and
number80 for resolving this!

Julie

[1] https://review.openstack.org/#/c/416261/
[2] http://trunk.rdoproject.org/centos7/current/

> Thanks,
>
> Julie
>
> [1] https://review.openstack.org/#/c/417866/
> [2] https://review.rdoproject.org/r/#/c/4215/
>
>> In the meantime, if you want to work with the UI package you should
>> get a version built before December 19th, e.g. [2], or you're probably
>> better off using the UI from source for the time being [3].
>>
>> I'll update this thread when this is resolved.
>>
>> Thanks,
>>
>> Julie
>>
>> [1] https://bugs.launchpad.net/tripleo/+bug/1654051
>> [2] 
>> https://trunk.rdoproject.org/centos7-master/04/15/0415ee80b5c8354124290ac933a34823f2567800_c211fbe8/openstack-tripleo-ui-2.0.0-0.20161212153814.2dfbb0b.el7.centos.noarch.rpm
>> [3] 
>> https://github.com/openstack/tripleo-ui/blob/master/README.md#install-tripleo-ui

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [infra] drbdmanage is no more GPL2

2017-01-10 Thread Sean McGinnis
On Mon, Dec 12, 2016 at 07:58:17AM +0100, Mehdi Abaakouk wrote:
> Hi,
> 
> I have recently seen that drbdmanage python library is no more GPL2 but
> need a end user license agreement [1].
> 
> Is this compatible with the driver policy of Cinder ?
> 
> [1] 
> http://git.drbd.org/drbdmanage.git/commitdiff/441dc6a96b0bc6a08d2469fa5a82d97fc08e8ec1
> 
> Regards
> 
> -- 
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 

It doesn't look like much has changed here. There has been one commit
that only slightly modified the new license: [1]

IANAL, and I don't want to make assumptions about what can and can't be
done, so I'm looking to other more informed folks. Do we need to remove this
from the Jenkins-run CI tests?

Input would be appreciated.

Sean

[1] 
http://git.drbd.org/drbdmanage.git/commit/66fb14feaf688276bd922fbd3a006285e79c9eb9

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-10 Thread Morales, Victor
That is sad news to start this year. First of all, grazie for being the
PTL of Neutron during these last releases. When I started contributing to
neutron, I noticed that this community is so vibrant and passionate that this
energy needs to be properly addressed, and you have demonstrated that you
possess the skills required for that. Thanks for that effort and enthusiasm.

Victor Morales

From: "Armando M."
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, January 9, 2017 at 8:11 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

Hi neutrinos,

The PTL nomination week is fast approaching [0], and as you might have guessed 
by the subject of this email, I am not planning to run for Pike. If I look back 
at [1], I would like to think that I was able to exercise the influence on the 
goals I set out with my first self-nomination [2].

That said, when it comes to a dynamic project like neutron one can never
claim to be *done done* and for this reason, I will continue to be part of the
neutron core team, and help the future PTL drive the next stage of the 
project's journey.

I must admit, I don't write this email lightly, however I feel that it is now 
the right moment for me to step down, and give someone else the opportunity to 
grow in the amazing role of neutron PTL! I have certainly loved every minute of 
it!

Cheers,
Armando

[0] https://releases.openstack.org/ocata/schedule.html
[1] 
https://review.openstack.org/#/q/project:openstack/election+owner:armando-migliaccio
[2] https://review.openstack.org/#/c/223764/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] User survey question

2017-01-10 Thread Jim Rollenhagen
On Fri, Jan 6, 2017 at 8:11 AM, Dmitry Tantsur wrote:

> On 01/04/2017 07:43 PM, Mario Villaplana wrote:
>
>> - What's been your most frustrating or difficult experience with ironic?
>
> I wanted to suggest this ^^^ question as well, thanks Mario.

FYI, this is the one I sent in. Thanks for the suggestions.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [mentoring] Discuss mentor/mentee experience in Boston?

2017-01-10 Thread Namrata Sitlani
Hello,

I am an OpenStack intern for the Outreachy program (December 2016 - March
2017). I would love to share my experience of learning and working with an
experienced and motivated mentor, and the communication we shared.

Regards,
Namrata Sitlani


From: Barrett, Carol L
Date: Mon, Jan 9, 2017 at 1:59 PM
Subject: Re: [Openstack] [mentoring] Discuss mentor/mentee experience in Boston?
To: Amrith Kumar, "Anne McCormick (amccormi)", "openst...@lists.openstack.org",
"Emily K Hugenbruch (ekhugenbr...@us.ibm.com)"


Sounds like a great proposal! If you want some help putting this
together, the Women of OpenStack Mentoring Program team meets Wednesday
Jan 11th at Noon Pacific on IRC in #openstack-meeting. Come join us!



Thanks
Carol



From: Amrith Kumar [mailto:amr...@tesora.com]
Sent: Monday, January 09, 2017 9:54 AM
To: Anne McCormick (amccormi) ;
openst...@lists.openstack.org
Subject: Re: [Openstack] [mentoring] Discuss mentor/mentee experience in Boston?



I’d love to … I’ve been very lucky to have worked with a wonderful
mentee and (won’t put him on the spot) it may also be nice to have
some mentees on the panel. I have no idea what he’ll say about me so
if you are interested I can ask him if he’d like to join a panel.



Which reminds me, I’d asked him to blog about it, but I haven’t seen
anything … you-know-who … blog post coming?



-amrith



From: Anne McCormick (amccormi) [mailto:amcco...@cisco.com]
Sent: Monday, January 9, 2017 12:03 PM
To: openst...@lists.openstack.org
Subject: [Openstack] [mentoring] Discuss mentor/mentee experience in Boston?



Hello,



I have been working as an OpenStack mentor, and would like to submit a
talk proposal about the mentor/mentee experience at the Boston Summit.
Would anyone else like to share their experiences as well?  Please let
me know, and maybe we could do this as a panel.



Thanks!



- Anne McCormick
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-10 Thread Tim Bell

On 10 Jan 2017, at 15:21, gordon chung wrote:



On 10/01/17 07:27 AM, Julien Danjou wrote:
On Mon, Jan 09 2017, William M Edmonds wrote:

I started the conversation on IRC [5], but wanted to send this to the
mailing list and see if others have thoughts/concerns here and figure out
what we should do about this going forward.

Nothing? The code has not been removed, it has been moved to a new
project. Ocata will be the second release for Panko, so if user did not
switch already during Newton, they'll have to do it for Ocata. That's a
lot of overlap. Two cycles to switch to a "new" service should be enough.

well it's not actually two. it'd just be the one cycle in Newton since
it's gone in Ocata. :P

that said, for me, the move to remove it is to avoid any needless
additional work of maintaining two active codebases. we're a small team
so it's work we don't have time for.

as i mentioned in chat, i'm ok with reverting the patch and leaving it for
Ocata but if the transition is clean (similar to how aodh was split)
i'd rather not waste resources on maintaining residual 'dead' code.

cheers,
--
gord


What’s also good is that Panko has equivalent functionality for Puppet and RPMs:

- https://github.com/openstack/puppet-panko
- https://www.rdoproject.org/documentation/package-list/

In the past, these equivalent functions have sometimes lagged behind the code
release, so it’s great to see the additional functionality already there.

Tim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Nova]Making gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial non voting

2017-01-10 Thread Jordan Pittier
Hi,
I don't know if you've noticed but
the gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial job has a
high rate of false negative. I've queried Gerrit and analysed all the
"Verified -2" messages left by Jenkins (i.e Gate failure) for the last 30
days. (script is here [1]).

On project openstack/nova: For the last 58 times where
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial ran AND jenkins
left a 'Verified -2' message, the job failed 48 times and succeeded 10
times.

On project openstack/tempest: For the last 25 times where
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial ran AND jenkins
left a 'Verified -2' message, the job failed 14 times and succeeded 11
times.

In other words, when there's a gate failure,
gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial is the main
culprit, by a significant margin.

I am a Tempest core reviewer and this bugs me because it slows down the
development of a project I care about, for reasons that I don't really
understand. I am going to propose a change to make this job non-voting on
openstack/tempest.

Jordan

[1] :
https://github.com/JordanP/openstack-snippets/blob/master/analyse-gate-failures/analyse_gate_failures.py
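The counting step of the analysis above can be sketched roughly as follows (this is not the linked script; the Jenkins comment format assumed here — "- <job> <log-url> : FAILURE in 1h 02m" — is illustrative):

```python
import re

# Given the bodies of Jenkins "Verified -2" Gerrit comments, tally how
# often a given job reported FAILURE vs SUCCESS.
def tally_job(messages, job):
    failed = succeeded = 0
    pattern = re.compile(r"- %s\s+\S+\s+:\s+(SUCCESS|FAILURE)" % re.escape(job))
    for body in messages:
        match = pattern.search(body)
        if match is None:
            continue  # this job did not run on that change
        if match.group(1) == "FAILURE":
            failed += 1
        else:
            succeeded += 1
    return failed, succeeded

msgs = [
    "- gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial "
    "http://logs.example.org/1 : FAILURE in 1h 02m",
    "- gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial "
    "http://logs.example.org/2 : SUCCESS in 58m 12s",
]
print(tally_job(msgs, "gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial"))
# prints: (1, 1)
```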

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack-gate][all]support multi-region gate environment

2017-01-10 Thread Sean M. Collins
joehuang wrote:
> In a multi-region environment (for example, two regions RegionOne and
> RegionTwo), Keystone will be shared between RegionOne and RegionTwo, so the
> primary node and subnode should use different roles: one role enables
> keystone, while the other uses the keystone from another node. A single
> role to support a multi-region setup seems not to be possible. The flag
> "MULTI_REGION" is to make the subnode play the role where no keystone
> will run. If we don't use that flag, maybe use DEVSTACK_GATE_MULTI_REGION?
>
> This is my first patch in devstack-gate, so any help or guidance will be
> appreciated.


Basically, it's more of a note for myself at this point.

We don't directly expose a way to define a role in
devstack-gate[1][2][3][4]. We do a lot of heuristics to detect whether
or not the node is a primary node or a subnode.

Ideally, we should really have an environment variable ($ROLE?) that
can be set by projects in project-config, and just call the test matrix
script[5] with the role that is set in the environment variable.

Because otherwise we end up with more if/else checks on random variables,
as in your patch and the MULTI_KEYSTONE[6] patch, and eventually it
becomes very difficult to maintain and extend.

Does this make sense?


[1]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L230
[2]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L259

[3]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L642

[4]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L121

[5]: 
https://github.com/openstack-infra/devstack-gate/blob/8740b6075b53e3c9bfda76d022fcc53904594e9c/devstack-vm-gate.sh#L265

[6]: https://review.openstack.org/#/c/394895/
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] etherpad for ptg

2017-01-10 Thread Steve Martinelli
keystoners,

here is the link to the etherpad for the ptg:
https://etherpad.openstack.org/p/keystone-pike-ptg
here is a link to other project etherpads:
https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

I'll announce this in our meeting today also.

- steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Virtual midcycle/sprint - Jan 11/12

2017-01-10 Thread Alex Schultz
On Tue, Dec 6, 2016 at 12:32 PM, Alex Schultz  wrote:
> Hey everyone,
>
> We're going to be running a small midcycle/sprint on January 11-12,
> 2017 in #puppet-openstack on freenode.  We will be reviewing the work
> items for Ocata and doing some bug triage. Feel free to add additional
> topics to the etherpad[0].
>
> Thanks,
> -Alex
>
> [0] https://etherpad.openstack.org/p/puppet-openstack-ocata-midcycle


Just a reminder that we'll be doing this tomorrow and Thursday. So if
you want to help out or have issues and questions, come see us in
#puppet-openstack on freenode.

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-10 Thread gordon chung


On 10/01/17 07:27 AM, Julien Danjou wrote:
> On Mon, Jan 09 2017, William M Edmonds wrote:
>
>> I started the conversation on IRC [5], but wanted to send this to the
>> mailing list and see if others have thoughts/concerns here and figure out
>> what we should do about this going forward.
>
> Nothing? The code has not been removed, it has been moved to a new
> project. Ocata will be the second release for Panko, so if user did not
> switch already during Newton, they'll have to do it for Ocata. That's a
> lot of overlap. Two cycles to switch to a "new" service should be enough.

well it's not actually two. it'd just be the one cycle in Newton since 
it's gone in Ocata. :P

that said, for me, the move to remove it is to avoid any needless 
additional work of maintaining two active codebases. we're a small team 
so it's work we don't have time for.

as i mentioned in chat, i'm ok with reverting the patch and leaving it for
Ocata but if the transition is clean (similar to how aodh was split)
i'd rather not waste resources on maintaining residual 'dead' code.

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-10 Thread Afek, Ifat (Nokia - IL)
Hi Yinliyin,

At first I thought that changing "deduced" to be a property on the alarm
might help in solving your use case. But now I think most of the problems will
remain the same:


  *   It won’t solve the general problem of two different monitors that raise 
the same alarm
  *   It won’t solve possible conflicts of timestamp and severity between 
different monitors
  *   It will make the decision of when to delete the alarm more complex 
(delete it when the deduced alarm is deleted? When Nagios alarm is deleted? 
both? And how to change the timestamp and severity in these cases?)
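To make the timestamp/severity conflict concrete, here is a toy sketch (plain Python with invented field names, not Vitrage code) of merging the same alarm as seen by a deduced rule and by a monitor — any merge policy is an arbitrary choice that one of the sources will disagree with:

```python
# Ranking used to pick a winner when severities conflict.
SEVERITY_ORDER = {"WARNING": 1, "MINOR": 2, "MAJOR": 3, "CRITICAL": 4}

def merge_alarms(deduced, monitored):
    merged = dict(monitored)  # start from the monitor's view
    # Conflict 1: which timestamp is "the" raise time? Here: the earliest.
    merged["timestamp"] = min(deduced["timestamp"], monitored["timestamp"])
    # Conflict 2: which severity wins? Here: the more severe one.
    merged["severity"] = max(deduced["severity"], monitored["severity"],
                             key=SEVERITY_ORDER.get)
    # Deletion stays ambiguous: removing the merged vertex when only one
    # source clears its alarm would hide the other source's view.
    merged["sources"] = ["deduced", "monitored"]
    return merged

example = merge_alarms(
    {"timestamp": "2017-01-10T10:00:00", "severity": "WARNING"},
    {"timestamp": "2017-01-10T10:02:30", "severity": "CRITICAL"})
print(example["timestamp"], example["severity"])
# prints: 2017-01-10T10:00:00 CRITICAL
```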

So I don’t think that making this change is beneficial.
What do you think?

Best Regards,
Ifat.


From: "yinli...@zte.com.cn" 
Date: Monday, 9 January 2017 at 05:29
To: "Afek, Ifat (Nokia - IL)" 
Cc: "openstack-dev@lists.openstack.org" , 
"han.jin...@zte.com.cn" , "wang.we...@zte.com.cn" 
, "zhang.yuj...@zte.com.cn" , 
"jia.peiy...@zte.com.cn" , "gong.yah...@zte.com.cn" 

Subject: Re: [openstack-dev] [Vitrage] About alarms reported by datasource and 
the alarms generated by vitrage evaluator




Hi Ifat,

 I think there is a situation in which all the alarms are reported by the 
monitored system. We use Vitrage to:

1.  Find the relationships between the alarms, and find the root cause.

2.  Deduce an alarm before it really occurs. This comprises two 
aspects:

 1) A causes B:  when A occurred, we deduce that B would occur.

 2) B is caused by A:  when B occurred, we deduce that A must have 
occurred.

In "2", we do expect Vitrage to raise the alarm before the alarm 
is reported, because the alarm might be lost or delayed for some reason.  So 
we would write "raise alarm" actions in the scenarios of the template.  I think 
that whether the alarm is reported or deduced should be a state property of the 
alarm. The reported vertex and the deduced vertex of the same alarm should be 
merged into one vertex.
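The suggestion in the last paragraph can be sketched as follows. This is a toy model only; the field names are illustrative and not Vitrage's actual graph schema:

```python
def merge_alarm_vertices(reported, deduced):
    """Collapse a monitor-reported alarm and a Vitrage-deduced alarm for
    the same condition into a single vertex, tracking 'deduced' as a
    property rather than as a separate vertex."""
    merged = dict(deduced)
    merged.update(reported)       # prefer the monitor's timestamp/severity
    merged['is_deduced'] = False  # a real monitor has now confirmed it
    return merged

deduced = {'name': 'host_down', 'severity': 'warning', 'is_deduced': True}
reported = {'name': 'host_down', 'severity': 'critical', 'source': 'nagios'}
merged = merge_alarm_vertices(reported, deduced)
```

The merged vertex carries the monitor's severity, which is where the conflict questions above (which timestamp? when to delete?) would have to be answered.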



 Best Regards,

 Yinliyin.

殷力殷 YinLiYin



项目经理   Project Manager
虚拟化上海五部/无线研究院/无线产品经营部 NIV Shanghai Dept. V/Wireless Product R&D 
Institute/Wireless Product Operation


D502, ZTE Corporation R Center, 889# Bibo Road,
Zhangjiang Hi-tech Park, Shanghai, P.R.China, 201203
T: +86 21 68896229
M: +86 13641895907
E: yinli...@zte.com.cn
www.zte.com.cn

Original mail
From: <ifat.a...@nokia.com>;
To: <openstack-dev@lists.openstack.org>;
Cc: 韩静6838; 王维雅00042110; 章宇军10200531; 贾培源10101785; 龚亚辉6092001895;
Date: 7 January 2017 02:18
Subject: Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the 
alarms generated by vitrage evaluator


Hi YinLiYin,

This is an interesting question. Let me divide my answer into two parts.

First, the case that you described with Nagios and Vitrage. This problem 
depends on the specific Nagios tests that you configure in your system, as well 
as on the Vitrage templates that you use. For example, you can use 
Nagios/Zabbix to monitor the physical layer, and Vitrage to raise deduced 
alarms on the virtual and application layers. This way you will never have 
duplicated alarms. If you want to use Nagios to monitor the other layers as 
well, you can simply modify the Vitrage templates so they don’t raise the deduced 
alarms that Nagios may generate, and use the templates to show RCA between 
different Nagios alarms.

Now let’s talk about the more general case. Vitrage can receive alarms from 
different monitors, including Nagios, Zabbix, collectd and Aodh. If you are 
using more than one monitor, it is possible that the same alarm (maybe with a 
different name) will be raised twice. We need to create a mechanism to identify 
such cases and create a single alarm with the properties of both monitors. This 
has not been designed in detail yet, so if you have any suggestions we will be 
happy to hear them.

Best Regards,
Ifat.


From: "yinli...@zte.com.cn" <yinli...@zte.com.cn>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Friday, 6 January 2017 at 03:27
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Cc: "gong.yah...@zte.com.cn" <gong.yah...@zte.com.cn>, "han.jin...@zte.com.cn" 
<han.jin...@zte.com.cn>, "wang.we...@zte.com.cn" <wang.we...@zte.com.cn>, 
"jia.peiy...@zte.com.cn" <jia.peiy...@zte.com.cn>, "zhang.yuj...@zte.com.cn" 
<zhang.yuj...@zte.com.cn>
Subject: [openstack-dev] [Vitrage] About alarms reported by datasource and the 
alarms generated by vitrage evaluator


Hi all,

   Vitrage generates alarms according to the templates. All the alarms raised by 
vitrage have the type "vitrage". Suppose Nagios has 

Re: [openstack-dev] [tc][goals] Microversions in OpenStack projects

2017-01-10 Thread Doug Hellmann
Excerpts from LAM, TIN's message of 2017-01-09 21:43:55 +:
> As noted in [1], "Add microversion to REST APIs" is one of the cross-project
> community goals.  Given the scope, and the fact that there is still discussion on
> what "microversion" means to each project and the exact technical 
> implementation, what are your thoughts on the direction the community should
> take with regard to this goal?
> 
> Aside from adopting microversions, what other consistent API behaviors should
> the community be aiming for in future releases? Also, is this goal a potential
> target for the Q or R release?
> 
> [1] https://etherpad.openstack.org/p/community-goals
> 
> 
> Tin Lam
> AT Integrated Cloud Engineer
> 

In general, goals need someone ready to lead the work of writing
the definition and a small team of guides ready to help with
implementation details. I don't know if we have anyone committed
to fill either of those roles for the microversion goal. For now,
it is listed on the backlog to encourage discussion (like this
thread).

If there are other API consistencies we want to introduce, I think
those would need to be defined as separate goals so we can be clear
about the impact and the desired end state, so we can clearly define
what "done" means. As Ed said in his response, if you have suggestions
for those sorts of changes, talking about them with the API working
group would be a good first step.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] Ocata upgrade procedure and problems when it's optional in Newton

2017-01-10 Thread Sylvain Bauza
Aloha folks,

Recently, I was discussing with TripleO folks. Disclaimer: I don't think
it's only a TripleO-related discussion, but rather a larger one for all
our deployers.

So, the question I was asked was how to upgrade from Newton to
Ocata for the Placement API when the deployer is not yet using the
Placement API in Newton (because it was optional in Newton).

The quick answer was to say "easy, just upgrade the service and run the
placement API *before* the scheduler upgrade". That's because we're
working on a change for the scheduler calling the Placement API instead
of getting all the compute nodes [1]

That said, I thought about something else: wait, the Newton compute
nodes work with the Placement API. Cool, but what if the Placement
API is optional in Newton? Then the Newton computes stop
calling the Placement API because of a nice decorator [2] (okay with me)
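The effect of that decorator can be sketched as a toy model. The names here are illustrative, not Nova's actual report-client code; the point is only that when placement is unreachable, the call is silently skipped and nothing gets registered:

```python
import functools

def safe_connect(f):
    """Toy model of the decorator referenced in [2]: if the placement
    endpoint is unavailable, swallow the call instead of failing, so the
    compute node simply stops reporting its resources."""
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        if not self.placement_available:
            return None  # call silently skipped, no inventory is created
        return f(self, *args, **kwargs)
    return wrapper

class ReportClient:
    def __init__(self, placement_available):
        self.placement_available = placement_available

    @safe_connect
    def update_inventory(self, node):
        return 'inventory recorded for %s' % node
```

With no placement service deployed, `update_inventory` is a no-op, which is exactly why the computes have nothing registered when the upgrade starts.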

Then, imagine the problem for the upgrade: given we don't have
deployers running the Placement API in Newton, they would need to
*first* deploy the (Newton or Ocata) Placement service, then SIGHUP all
the Newton compute nodes to have them report the resources (and
create the inventories), then wait some minutes until all the
inventories are reported, and then upgrade all the services (except the
compute nodes of course) to Ocata, including the scheduler service.

The above looks like a different upgrade policy, right?
 - Either we say you need to run the Newton placement service *before*
upgrading - and in that case, the Placement service is not optional for
Newton, right?
 - Or, we say you need to run the Ocata placement service and then
restart the compute nodes *before* upgrading the services - and that's a
very different situation than the current upgrade way.

For example, I know it's not a Nova thing, but most of our deployers
have what they call "controller" vs. "compute" services, i.e. all the
Nova services but the computes running on a single (or more) machine(s). In
that case, the "controller" upgrade is monolithic and all the services
are upgraded and restarted at the same stage. If so, it looks
difficult for those deployers to be asked to follow a very different
procedure.

Anyway, I think we need to carefully consider that, and probably find
some solutions. For example, we could imagine (disclaimer #2: these are
probably silly solutions, but they're the ones I'm thinking of now):
 - a DB migration for creating the inventories and allocations before
upgrading (ie. not asking the computes to register themselves to the
placement API). That would be terrible because it's a data upgrade, I
know...
 - having the scheduler having a backwards compatible behaviour in [1],
ie. trying to call the Placement API for getting the list of RPs or
failback to calling all the ComputeNodes if that's not possible. But
that would mean that the Placement API is still optional for Ocata :/
 - merging the scheduler calling the Placement API [1] in a point
release after we deliver Ocata (and still make the Placement API
mandatory for Ocata) so that we would be sure that all computes are
reporting their status to the Placement once we restart the scheduler in
the point release.
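The second option above (a backwards-compatible scheduler) can be sketched like this. Both arguments are stand-ins for the real calls, not Nova's actual API:

```python
def get_candidate_nodes(query_placement, list_compute_nodes):
    """Try the Placement API first; fall back to listing every compute
    node when placement is missing or has nothing registered yet."""
    try:
        providers = query_placement()
    except ConnectionError:
        providers = None  # no placement endpoint in the catalog
    return providers if providers else list_compute_nodes()

def broken_placement():
    raise ConnectionError('no placement endpoint in the catalog')
```

As noted, the downside is that this keeps the Placement API effectively optional for Ocata.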


Thoughts ?
-Sylvain


[1] https://review.openstack.org/#/c/417961/

[2]
https://github.com/openstack/nova/blob/180e6340a595ec047c59365465f36fed7a669ec3/nova/scheduler/client/report.py#L40-L67

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Feedback from driver maintainers about future of driver projects

2017-01-10 Thread Steve Martinelli
Hey Neil, it's entirely possible I missed some folks. Feel free to reply to
this thread.

I'll include some of the preamble and questions from my original note.

-

The TC has come up with two strategies for clarification: (1) Make sure
external drivers are properly listed and discoverable on OpenStack
websites, and (2) Establish a new "driver team" concept (see
https://review.openstack.org/#/c/403829/).

As maintainers for neutron, cinder and ironic drivers, we’re asking you for
your opinion on this matter. We need to know if we are solving the right
problem here.

Are driver teams looking for recognition? I.e. receiving ATC status, seeing
their work recognized as an official part of “OpenStack”?
Are driver teams looking for discoverability? I.e. having their drivers
listed together with all the other available drivers on the OpenStack
Marketplace or DriverLog websites?
Are driver teams looking for documentation visibility? I.e. having an
official place to host configuration guides, on docs.o.org?
Are driver teams looking for independence? I.e. the ability to manage and
nominate their own core members?
Anything else we missed?

On Jan 10, 2017 5:55 AM, "Neil Jerram"  wrote:

> Hi Steve,
>
> On Tue, Jan 10, 2017 at 7:35 AM Steve Martinelli 
> wrote:
>
>> In preparation for the next TC meeting, a survey was sent out to driver
>> maintainers, here is a summary of the feedback that was gathered.
>>
>
> Did you send that survey to me, for networking-calico?  I'm afraid I don't
> recall it - so worried either that I missed, or that networking-calico was
> missed for some reason.
>
> Thanks,
>  Neil
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-sfc] Limitation on port chains + flow classifiers

2017-01-10 Thread Duarte Cardoso, Igor
Hi networking-sfc,

While working on the SFC Graphs patch, I observed the following limitation when 
creating port-chains: http://paste.openstack.org/show/594387/.

My objective was to have 2 port-chains acting on the same classification of 
traffic but from different logical source ports - my expectation was that there 
wouldn't be any clash here.

However, the flow classifiers clash when they are associated with those 2 
different port-chains.

The exception is raised in [1], and the attributes of the flow classifier being 
checked are in [2], where neither the logical source port nor the logical destination 
port is specified.
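The behaviour can be sketched like this. This is a much-simplified model of the overlap check in [1]/[2] - the real code does range overlaps rather than plain equality - but it shows why adding the logical source port to the compared attributes would lift the limitation:

```python
def classifiers_conflict(fc1, fc2, compared_keys):
    """Two flow classifiers clash when every compared attribute matches
    (simplified: the real check tests range overlap, not equality)."""
    return all(fc1.get(k) == fc2.get(k) for k in compared_keys)

fc_a = {'protocol': 'tcp', 'destination_port': 80, 'logical_source_port': 'port-1'}
fc_b = {'protocol': 'tcp', 'destination_port': 80, 'logical_source_port': 'port-2'}

# Today's behaviour: the source port is not compared, so they clash.
clash_now = classifiers_conflict(fc_a, fc_b, ['protocol', 'destination_port'])
# Proposed behaviour: compare the source port too, and the clash goes away.
clash_proposed = classifiers_conflict(
    fc_a, fc_b, ['protocol', 'destination_port', 'logical_source_port'])
```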

Is there a specific reasoning behind this or can it be considered a bug? For 
the SFC Graphs work, it's important that this limitation be lifted - I'm happy 
to submit a patch to correct it.
Let me know your thoughts.

[1] 
https://github.com/openstack/networking-sfc/blob/9b4a918177768a036c192a62fa473841c333b644/networking_sfc/db/sfc_db.py#L259
[2] 
https://github.com/openstack/networking-sfc/blob/b8dd1151343fef826043f408cd3027c5133fde30/networking_sfc/db/flowclassifier_db.py#L159

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ovsdpdk mitaka

2017-01-10 Thread Mooney, Sean K
Hi,
In Mitaka all support for OVS-DPDK was merged upstream into the standard 
neutron Open vSwitch agent and ML2 driver.
The networking-ovs-dpdk-agent was removed in Liberty and the ML2 driver was 
removed in Mitaka.
For Mitaka+, networking-ovs-dpdk primarily provides a devstack plugin to compile 
and install OVS and DPDK from source, a Puppet module (now deprecated) to do the 
same, and a learn-action based firewall driver.

The stable/mitaka branch of networking-ovs-dpdk has only been tested with 
Ubuntu 14.04, CentOS 7, and I believe Fedora 22.
It may work on 16.04, but I am not sure whether there are systemd patches from Newton 
that have not been backported.
Regards
sean

From: Shaughnessy, David [mailto:david.shaughne...@intel.com]
Sent: Tuesday, January 10, 2017 12:04 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] ovsdpdk mitaka

Hi Santosh.
There is a getting started guide in the networking-ovs-dpdk project that should 
be of some help.[1]
It’s written for Ubuntu 14.04, but the master branch / stable Newton is for 
Ubuntu 16.04.
Regards.
David.


[1] 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/mitaka/doc/source/getstarted/devstack/ubuntu.rst


From: Santosh S [mailto:santoshsethu2...@gmail.com]
Sent: Tuesday, January 3, 2017 10:45 AM
To: openstack-dev@lists.openstack.org; Santosh S
Subject: [openstack-dev] ovsdpdk mitaka


Hello folks,

I am a learner in OpenStack, trying to understand cloud computing.
Here, I am attempting to install the networking-ovs-dpdk-agent in the OpenStack
Mitaka release on a controller and compute node setup with Ubuntu 16.04.

Could you please tell me which steps I need to follow to bring OVS-DPDK up in
this 2-node setup?

It would be great if you could help me with this.

Thank you
Santosh

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-10 Thread Julien Danjou
On Mon, Jan 09 2017, William M Edmonds wrote:

> I started the conversation on IRC [5], but wanted to send this to the
> mailing list and see if others have thoughts/concerns here and figure out
> what we should do about this going forward.

Nothing? The code has not been removed, it has been moved to a new
project. Ocata will be the second release for Panko, so if users did not
switch already during Newton, they'll have to do it for Ocata. That's a
lot of overlap. Two cycles to switch to a "new" service should be enough.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ovsdpdk mitaka

2017-01-10 Thread Shaughnessy, David
Hi Santosh.
There is a getting started guide in the networking-ovs-dpdk project that should 
be of some help.[1]
It’s written for Ubuntu 14.04, but the master branch / stable Newton is for 
Ubuntu 16.04.
Regards.
David.


[1] 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/mitaka/doc/source/getstarted/devstack/ubuntu.rst


From: Santosh S [mailto:santoshsethu2...@gmail.com]
Sent: Tuesday, January 3, 2017 10:45 AM
To: openstack-dev@lists.openstack.org; Santosh S 
Subject: [openstack-dev] ovsdpdk mitaka


Hello folks,

I am a learner in OpenStack, trying to understand cloud computing.
Here, I am attempting to install the networking-ovs-dpdk-agent in the OpenStack
Mitaka release on a controller and compute node setup with Ubuntu 16.04.

Could you please tell me which steps I need to follow to bring OVS-DPDK up in
this 2-node setup?

It would be great if you could help me with this.

Thank you
Santosh
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-api-metadata managing firewall

2017-01-10 Thread Jens Rosenboom
2017-01-10 4:33 GMT+01:00 Sam Morrison :
> Hi nova-devs,
>
> I raised a bug about nova-api-metadata messing with iptables on a host
>
> https://bugs.launchpad.net/nova/+bug/1648643
>
> It got closed as won’t fix but I think it could do with a little more
> discussion.
>
> Currently nova-api-metadata will create an iptable rule and also delete
> other rules on the host. This was needed for back in the nova-network days
> as there was some trickery going on there.
> Now with neutron and neutron-metadata-proxy nova-api-metadata is little more
> that a web server much like nova-api.
>
> I may be missing some use case but I don’t think nova-api-metadata needs to
> care about firewall rules (much like nova-api doesn’t care about firewall
> rules)

I agree with Sam on this. Looking a bit into the code, the mangling part of the
iptables rules is only called from nova/network/l3.py, which seems to happen only
when nova-network is being used. The installation of the global nova iptables
setup, however, happens unconditionally in nova/api/manager.py as soon as the
nova-api-metadata service is started, which doesn't make much sense in a
Neutron environment. So I would propose to either make this setup happen
only when nova-network is used, or at least allow a deployer to turn it off via
a config option.
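The config-option approach could look roughly like this. The option name `metadata_manage_iptables` is hypothetical (the real fix would use oslo.config in nova), and `setup_iptables` stands in for the real routine:

```python
def start_metadata_service(conf, setup_iptables):
    """Gate the host iptables setup behind an explicit config option
    instead of running it unconditionally on service start."""
    if conf.get('metadata_manage_iptables', False):
        setup_iptables()
        return True   # nova-network style deployment
    return False      # Neutron deployment: leave the host firewall alone

calls = []
touched = start_metadata_service({'metadata_manage_iptables': False},
                                 lambda: calls.append('iptables'))
```

Defaulting the option to off would match the Neutron case; nova-network deployments would opt in.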

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Inconsistency of parameter type in Nova API Reference

2017-01-10 Thread Sean Dague
On 01/10/2017 01:39 AM, Takashi Natsume wrote:
> Hi Nova developers.
> 
> In Nova API Reference(*1),
> the following parameters' values are 'null' in HTTP request body samples.
> And their parameter types are defined as 'string'.
> 
> * 'confirmResize' parameter in "Confirm Resized Server (confirmResize
> Action)"
> * 'lock' parameter in "Lock Server (lock Action)"
> * 'pause' parameter in "Pause Server (pause Action)"
> * 'resume' parameter in "Resume Suspended Server (resume Action)"
> * 'revertResize' parameter in "Revert Resized Server (revertResize Action)"
> * 'os-start' parameter in "Start Server (os-start Action)"
> * 'os-stop' parameter in "Stop Server (os-stop Action)"
> * 'suspend' parameter in "Suspend Server (suspend Action)"
> 
> On the other hand,
> the following parameter's value is 'null' in the HTTP request body sample.
> But the parameter type is defined as 'none'.
> 
> * 'trigger_crash_dump' in "Trigger Crash Dump In Server"
> 
> IMO, there is inconsistency of parameter types.
> Should they be unified as 'none'?

+1. 'none' seems appropriate here, and doesn't confuse this with strings.
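For concreteness, the action bodies under discussion are literal JSON nulls, not strings:

```python
import json

# The server-action request bodies listed above look like this on the wire:
pause_request = json.dumps({"pause": None})  # -> '{"pause": null}'
crash_request = json.dumps({"trigger_crash_dump": None})

# Both values deserialize back to Python's None, which is why documenting
# the parameter type as 'none' is the consistent choice.
```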

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] PTG planning etherpad

2017-01-10 Thread Dulko, Michal
Hi,

PTG planning etherpad wasn't advertised on the list, so I'm linking it
below. It's still pretty empty, so I guess it's time to start filling
it up.

https://etherpad.openstack.org/p/ATL-cinder-ptg-planning

Thanks,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-10 Thread Neil Jerram
I very much agree with Nate's comment here, and have particularly
appreciated our discussions in connection with networking-calico.  I look
forward to your continuing involvement and help, even if not as PTL.

Thanks Armando!


On Mon, Jan 9, 2017 at 4:02 PM Nate Johnston 
wrote:

> Thank you Armando for your leadership in transforming the Stadium, for
> always
> being helpful and patient shepherding a gaggle of small projects.
> You
> set an example for how to deal with difficult Layer 8 and Layer 9
> issues[1].
> Thank you for your leadership and your service.
>
> --N.
>
> [1] https://en.wikipedia.org/wiki/Layer_8 if you've never heard the term
> before
>
> On Mon, Jan 09, 2017 at 03:11:01PM +0100, Armando M. wrote:
> > Hi neutrinos,
> >
> > The PTL nomination week is fast approaching [0], and as you might have
> > guessed by the subject of this email, I am not planning to run for Pike.
> If
> > I look back at [1], I would like to think that I was able to exercise the
> > influence on the goals I set out with my first self-nomination [2].
> >
> > That said, when it comes to a dynamic project like neutron one can
> never
> > claim to be *done done* and for this reason, I will continue to be part
> of
> > the neutron core team, and help the future PTL drive the next stage of
> the
> > project's journey.
> >
> > I must admit, I don't write this email lightly, however I feel that it is
> > now the right moment for me to step down, and give someone else the
> > opportunity to grow in the amazing role of neutron PTL! I have certainly
> > loved every minute of it!
> >
> > Cheers,
> > Armando
> >
> > [0] https://releases.openstack.org/ocata/schedule.html
> > [1] https://review.openstack.org/#/q/project:openstack/elect
> > ion+owner:armando-migliaccio
> > [2] https://review.openstack.org/#/c/223764/
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Which service is using port 8778?

2017-01-10 Thread Andy McCrae
Sorry to resurrect a few weeks old thread, but I had a few questions.


> Yes, we should stop with the magic ports. Part of the reason of
> switching over to apache was to alleviate all of that.
>
> -Sean
>

Is this for devstack specifically?
I can see the motivation for Devstack, since it reduces the concern for
managing port allocations.

Is the idea that we move away from ports and everything is on 80 with a
VHost to differentiate between services/endpoints?

It seems to me that it would still be good to have a "designated" (and
unique - or as unique as possible at least within OpenStack) port for
services. We may not have all services on the same hosts, for example,
using a single VIP for load balancing. The issue then is that it becomes
hard to differentiate the LB pool based on the request.
I.e., how would I differentiate between Horizon requests and requests for
any other service on port 80? The VIP is the same, but the backends may be
completely different (so all requests aren't handled by the same Apache
server).

Assuming, in that case, that having a designated port is the only way (and if it
isn't, I'd love to discuss alternate, and simpler, methods of achieving
this), it then seems that assigning a dedicated port for services in
Devstack would make sense - it would ensure that there is no overlap, and
in a way the error received when the ports overlapped is a genuine issue
that would need to be addressed. Although if that is the case, perhaps
there is a better way to manage it.

Essentially it seems better to handle port conflicts (within the OpenStack
ecosystem, at least) at source rather than pass that on to the deployer to
randomly pick ports and avoid conflicts.

Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Feedback from driver maintainers about future of driver projects

2017-01-10 Thread Neil Jerram
Hi Steve,

On Tue, Jan 10, 2017 at 7:35 AM Steve Martinelli 
wrote:

> In preparation for the next TC meeting, a survey was sent out to driver
> maintainers, here is a summary of the feedback that was gathered.
>

Did you send that survey to me, for networking-calico?  I'm afraid I don't
recall it - so I'm worried either that I missed it, or that networking-calico was
missed for some reason.

Thanks,
 Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Flavio Percoco

On 10/01/17 08:59 +, Huangtianhua wrote:



-----Original Message-----
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: 10 January 2017 15:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] glance v2 support?

On 10/01/17 12:35 +0530, Rabi Mishra wrote:

On Mon, Jan 9, 2017 at 4:45 PM, Flavio Percoco  wrote:


On 06/01/17 09:34 +0530, Rabi Mishra wrote:


On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi 
wrote:

Greetings Heat folks!


My question is simple:
When do you plan to support Glance v2?
https://review.openstack.org/#/c/240450/

The spec looks staled while Glance v1 was deprecated in Newton (and
v2 was started in Kilo!).


Hi Emilien,


I think we've not been able to move to v2 due to v1/v2
incompatibility[1] with respect to the location[2] property. Moving
to v2 would break all existing templates using that property.

I've seen several discussions around that without any conclusion.  I
think we can support a separate v2 image resource and deprecate the
current one, unless there is a better path available.



Hi Rabi,

Could you elaborate on why Heat depends on the location attribute?
I'm not familiar with Heat and knowing this might help me to propose
something (or at least understand the difficulties).

I don't think putting this on hold will be of any help. V1 ain't
coming back and the improvements for v2 are still under heavy coding.
I'd probably recommend moving to v2 with a proper deprecation path
rather than sticking to v1.



Hi Flavio,

As much as we would like to move to v2, I think we still don't have an
acceptable solution for the question below. There is an earlier ML
thread[1], where it was discussed in detail.

- What's the migration path for images created with v1 that use the
location attribute pointing to an external location?


Moving to Glance v2 shouldn't break this. As in, Glance will still be able to 
pull the images from external locations.

Also, to be more precise, you actually *can* use locations in v2.
Glance's node needs to have 2 settings enabled. The first is 
`show_multiple_locations` and the second one is a policy config[0]. It's however 
not recommended to expose that to end users, which is why it was shielded 
behind policies.
---As you said, we can't use location in v2 by default. IMO, if glance v2 is 
compatible with v1, the option should be enabled by default.



I don't think this will happen. It's no news, tbh. The Glance community has been
communicating this since the Kilo release. It was written in different times,
under different assumptions, and the price is still being paid.


I'd recommend Heat not use locations, as that would require deployers to 
either enable them for everyone or have a dedicated glance-api node for Heat.
If we don't use location, do we have other options for the user? What should a user 
do before creating a glance image using v2? Download the image data? And then 
pass the image data to the glance API? I really don't think that's a good way.



That *IS* how users create images. There used to be copy-from too (which may or
may not come back).

Heat's use case is different and I understand that but as I said in my other
email, I do not think sticking to v1 is the right approach. I'd rather move on
with a deprecation path or compatibility layer.


Flavio


All that being said, switching to v2 won't prevent Glance from reading images 
from external locations if the image records exist already.
Yes, but how to create a new glance image?

[0] https://github.com/openstack/glance/blob/master/etc/policy.json#L16-L18


While answering the above we've to keep in mind the following constraint.

- Any change in the image id(new image) would potentially result in
nova servers using them in the template being rebuilt/replaced, and we
would like to avoid it.

There was a suggestion to allow the 'copy-from'  with v2, which would
possibly make it easier for us. Is that still an option?


Maybe, in the distant future. The improvements for v2 are still under heavy 
development.


I assume we can probably use the glance upload API to upload the image
data (after getting it from the external location) for an existing image?
Last time I tried to do it, it seemed to not be allowed for an 'active'
image. It's possible I'm missing something here. We don't have a way
at present for a user to upload an image to the heat engine (not sure if
we would like to do it either), or for the heat engine to download the image
from an 'external location' and then upload it to glance while
creating/updating an image resource.


Downloading the image locally and uploading it is a workaround, yes. Not ideal 
but it's simple. However, you won't need it for the migration to v2, I believe, 
since you can re-use existing images. Heat won't be able to create new images 
and have them point to external locations, though, unless the settings I 
mentioned above have been enabled.
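The download-then-upload workaround can be sketched as follows. With python-glanceclient's v2 API the uploader would be something like `images.upload(image_id, fileobj)`; here `upload` is a stand-in callable so the sketch stays self-contained:

```python
import urllib.request

def copy_external_image(url, upload, chunk_size=1 << 20):
    """Stream the image bytes from the external location and hand them
    to an uploader, one chunk at a time, so nothing huge sits in memory."""
    with urllib.request.urlopen(url) as src:
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            upload(data)

chunks = []
copy_external_image('data:,hello', chunks.append)  # data: URL keeps the demo offline
```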


Also, glance location api could 

[openstack-dev] [all] List of all Pike PTG Etherpads

2017-01-10 Thread Thierry Carrez
Hi everyone,

As suggested by notmyname on IRC, I created a wiki page listing all the
Atlanta PTG room planning etherpads, for easier reference:

https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

I tried to find all the ones mentioned on the ML so far, but I certainly
missed some, so feel free to directly add to the document.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Huangtianhua


-----Original Message-----
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: 10 January 2017 15:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] glance v2 support?

On 10/01/17 12:35 +0530, Rabi Mishra wrote:
>On Mon, Jan 9, 2017 at 4:45 PM, Flavio Percoco  wrote:
>
>> On 06/01/17 09:34 +0530, Rabi Mishra wrote:
>>
>>> On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi 
>>> wrote:
>>>
>>> Greetings Heat folks!

 My question is simple:
 When do you plan to support Glance v2?
 https://review.openstack.org/#/c/240450/

 The spec looks stalled, while Glance v1 was deprecated in Newton (and 
 v2 was started in Kilo!).


 Hi Emilien,
>>>
>>> I think we've not been able to move to v2 due to v1/v2 
>>> incompatibility[1] with respect to the location[2] property. Moving 
>>> to v2 would break all existing templates using that property.
>>>
>>> I've seen several discussions around that without any conclusion.  I 
>>> think we can support a separate v2 image resource and deprecate the 
>>> current one, unless there is a better path available.
>>>
>>
>> Hi Rabi,
>>
>> Could you elaborate on why Heat depends on the location attribute? 
>> I'm not familiar with Heat and knowing this might help me to propose 
>> something (or at least understand the difficulties).
>>
>> I don't think putting this on hold will be of any help. V1 ain't 
>> coming back and the improvements for v2 are still under heavy coding. 
>> I'd probably recommend moving to v2 with a proper deprecation path 
>> rather than sticking to v1.
>>
>>
>Hi Flavio,
>
>As much as we would like to move to v2, I think we still don't have an 
>acceptable solution for the question below. There is an earlier ML 
>thread[1], where it was discussed in detail.
>
>- What's the migration path for images created with v1 that use the 
>location attribute pointing to an external location?

Moving to Glance v2 shouldn't break this. As in, Glance will still be able to 
pull the images from external locations.

Also, to be more precise, you actually *can* use locations in v2.
The Glance node needs to have two settings enabled. The first is 
`show_multiple_locations` and the second is a policy config[0]. It's not 
recommended to expose that to end users, however, which is why it was shielded 
behind policies.
--- As you said, we can't use location in v2 by default. IMO, if glance v2 is 
compatible with v1, the option should be enabled by default.

I'd recommend Heat not use locations, as that will require deployers to 
either enable them for everyone or have a dedicated glance-api node for Heat.
If we don't use location, do we have other options for users? What should a 
user do before creating a glance image with v2? Download the image data and 
then pass it to the glance API? I really don't think that's a good way.
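For what it's worth, the v2 flow without locations comes down to two REST calls: create an empty image record, then stream the bytes to it. A rough sketch of just the payload and headers involved (values and token are made up for illustration, not from the thread):

```python
def v2_image_create_body(name, disk_format="qcow2", container_format="bare"):
    """Body for POST /v2/images: creates an image record in 'queued'
    state. No image data and no location are involved at this step."""
    return {
        "name": name,
        "disk_format": disk_format,
        "container_format": container_format,
    }


def v2_image_upload_headers(token):
    """Headers for PUT /v2/images/{image_id}/file: this call streams
    the raw bytes and moves the image from 'queued' to 'active'."""
    return {
        "X-Auth-Token": token,
        "Content-Type": "application/octet-stream",
    }
```

The create step is cheap; it is the second call that needs the image data, which is exactly the part Heat has no way to source from a template today.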

All that being said, switching to v2 won't prevent Glance from reading images 
from external locations if the image records exist already.
Yes, but how do we create a new glance image?

[0] https://github.com/openstack/glance/blob/master/etc/policy.json#L16-L18
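For reference, the two settings Flavio mentions would look roughly like this on the Glance node (option and policy names as of the Newton/Ocata timeframe; verify against your Glance release):

```ini
# glance-api.conf -- allow image locations to be shown/set via the v2 API
[DEFAULT]
show_multiple_locations = True
```

The other half is the policy side: the `get_image_location`, `set_image_location`, and `delete_image_location` rules in policy.json (the lines linked in [0]) control who may read or modify locations, so deployers can restrict them to admin roles rather than exposing them to all users.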

>While answering the above we've to keep in mind the following constraint.
>
>- Any change in the image id (a new image) would potentially result in 
>nova servers using it in the template being rebuilt/replaced, and we 
>would like to avoid that.
>
>There was a suggestion to allow 'copy-from' with v2, which would 
>possibly make it easier for us. Is that still an option?

Maybe, in the distant future. The improvements for v2 are still under heavy 
development.

>I assume we can probably use the glance upload API to upload the image 
>data (after getting it from the external location) for an existing image?
>Last time I tried, it seemed not to be allowed for an 'active' image.
>It's possible I'm missing something here. We don't have a way at present 
>for a user to upload an image to heat-engine (not sure we would want to 
>do that either), or for heat-engine to download the image from an 
>'external location' and then upload it to glance while 
>creating/updating an image resource.

Downloading the image locally and uploading it is a workaround, yes. Not ideal 
but it's simple. However, you won't need it for the migration to v2, I believe, 
since you can re-use existing images. Heat won't be able to create new images 
and have them point to external locations, though, unless the settings I 
mentioned above have been enabled.
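The download-and-reupload workaround is essentially chunked streaming from the external URL into the v2 upload call. The plumbing reduces to something like this generic helper (a hypothetical sketch, not actual Heat code):

```python
def stream_copy(chunks, write):
    """Feed an iterable of byte chunks into a writer callable and
    return the total number of bytes transferred. Streaming chunk by
    chunk avoids holding a multi-GB image in memory at once."""
    total = 0
    for chunk in chunks:
        if not chunk:  # skip keep-alive/empty chunks
            continue
        write(chunk)
        total += len(chunk)
    return total
```

In practice `chunks` would come from something like `requests.get(url, stream=True).iter_content(65536)` and `write` would be whatever feeds the v2 upload call.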

>Also, glance location api could probably have been useful here. 
>However, we were advised in the earlier thread not to use it, as 
>exposing the location to the end user is perceived as a security risk.

++

Flavio

>
>[1]  
>http://lists.openstack.org/pipermail/openstack-dev/2016-May/094598.html
>
>
>Cheers,
>> Flavio
>>
>>
>>> [1] 
>>> https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
>>> [2] 

Re: [openstack-dev] [heat] glance v2 support?

2017-01-10 Thread Rabi Mishra
On Tue, Jan 10, 2017 at 1:03 PM, Flavio Percoco  wrote:

> On 10/01/17 12:35 +0530, Rabi Mishra wrote:
>
>> On Mon, Jan 9, 2017 at 4:45 PM, Flavio Percoco  wrote:
>>
>> On 06/01/17 09:34 +0530, Rabi Mishra wrote:
>>>
>>> On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi 
 wrote:

 Greetings Heat folks!

>
> My question is simple:
> When do you plan to support Glance v2?
> https://review.openstack.org/#/c/240450/
>
> The spec looks stalled, while Glance v1 was deprecated in Newton (and v2
> was started in Kilo!).
>
>
> Hi Emilien,
>

 I think we've not been able to move to v2 due to v1/v2
 incompatibility[1]
 with respect to the location[2] property. Moving to v2 would break all
 existing templates using that property.

 I've seen several discussions around that without any conclusion.  I
 think
 we can support a separate v2 image resource and deprecate the current
 one,
 unless there is a better path available.


>>> Hi Rabi,
>>>
>>> Could you elaborate on why Heat depends on the location attribute? I'm
>>> not
>>> familiar with Heat and knowing this might help me to propose something
>>> (or
>>> at
>>> least understand the difficulties).
>>>
>>> I don't think putting this on hold will be of any help. V1 ain't coming
>>> back and
>>> the improvements for v2 are still under heavy coding. I'd probably
>>> recommend
>>> moving to v2 with a proper deprecation path rather than sticking to v1.
>>>
>>>
>>> Hi Flavio,
>>
>> As much as we would like to move to v2, I think we still don't have an
>> acceptable solution for the question below. There is an earlier ML
>> thread[1], where it was discussed in detail.
>>
>> - What's the migration path for images created with v1 that use the
>> location attribute pointing to an external location?
>>
>
> Moving to Glance v2 shouldn't break this. As in, Glance will still be able
> to
> pull the images from external locations.
>
> Also, to be more precise, you actually *can* use locations in v2.
> Glance's node needs to have 2 settings enabled. The first is
> `show_multiple_locations` and the second one is a policy config[0]. It's
> however
> not recommended to expose that to end users but that's why it was shielded
> behind policies.
>
> I'd recommend Heat not use locations, as that will require deployers to
> either
> enable them for everyone or have a dedicated glance-api node for Heat.
>
> All that being said, switching to v2 won't prevent Glance from reading
> images
> from external locations if the image records exist already.
>
> [0] https://github.com/openstack/glance/blob/master/etc/policy.json#L16-L18
>
> While answering the above we've to keep in mind the following constraint.
>>
>> - Any change in the image id (a new image) would potentially result in nova
>> servers using it in the template being rebuilt/replaced, and we would
>> like to avoid that.
>>
>> There was a suggestion to allow the 'copy-from'  with v2, which would
>> possibly make it easier for us. Is that still an option?
>>
>
> Maybe, in the distant future. The improvements for v2 are still under heavy
> development.
>
>> I assume we can probably use the glance upload API to upload the image
>> data (after getting it from the external location) for an existing image?
>> Last time I tried, it seemed not to be allowed for an 'active' image.
>> It's possible I'm missing something here. We don't have a way at present
>> for a user to upload an image to heat-engine (not sure we would want to
>> do that either), or for heat-engine to download the image from an
>> 'external location' and then upload it to glance while
>> creating/updating
>> an image resource.
>>
>
> Downloading the image locally and uploading it is a workaround, yes. Not
> ideal
> but it's simple. However, you won't need it for the migration to v2, I
> believe,
> since you can re-use existing images.


AFAIK, we can't do without it, unless 'copy-from' is made available soon,
for two reasons:

- Image files are always 'external' to heat-engine, unless heatclient is
hacked to push them to the engine along with the templates/environments,
which is not something we would do.

- To make the existing templates(with location) usable with glance v2
(newer heat versions)

> Heat won't be able to create new images
> and have them point to external locations, though, unless the settings I
> mentioned above have been enabled.
>
> Also, glance location api could probably have been useful here. However, we
>> were advised in the earlier thread not to use it, as exposing the location
>> to the end user is perceived as a security risk.
>>
>
> ++
>
> Flavio
>
>
>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094598.html
>>
>>
>> Cheers,
>>
>>> Flavio
>>>
>>>
>>> [1] https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
 [2] 

Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2017-01-10 Thread Flavio Percoco

On 09/01/17 19:39 +0200, Andrey Kurilin wrote:

On Mon, Jan 9, 2017 at 6:37 PM, Flavio Percoco  wrote:


On 09/01/17 17:55 +0200, Andrey Kurilin wrote:


Hi, Flavio!

Is it possible to create badges per project release?



mmh, it's currently not possible.

Could you elaborate on why you need a specific tag per release? Is it to tag
the releases that were done after the inclusion in the big tent?



OpenStack has a bunch of different tags. "The inclusion in the big tent" is
one tag that should not apply to all of a project's releases once the project
earns it. Another one is "single-vendor", and so on. I think most tags should
be applied per project release.


Why do you think it shouldn't match all the releases? Tags like single-vendor
and official impact all the releases regardless of what the status was when the
release was cut.


One more "feature request": creation of custom badges (I'm interested in
"tested on %(operating_system)s").



mmh, this will have to be implemented elsewhere since these badges are specific
for governance tags[0]. I'd recommend using shields[1] for this.

Flavio

[0] http://governance.openstack.org/reference/tags/index.html
[1] http://shields.io/
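As a concrete example of [1], a static shields.io badge is just an image URL; a small helper to build one (the label and value below are arbitrary examples):

```python
from urllib.parse import quote


def shields_badge_url(label, value, color="blue"):
    """Build a static shields.io badge URL. Dashes separate the URL
    segments, so shields.io escapes a literal '-' as '--' and a
    literal '_' as '__' before percent-encoding the rest."""
    def esc(text):
        return quote(text.replace("-", "--").replace("_", "__"))
    return "https://img.shields.io/badge/{}-{}-{}.svg".format(
        esc(label), esc(value), color)
```

The resulting URL can then be dropped into a README as an ordinary image.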




Flavio


On Mon, Jan 9, 2017 at 3:23 PM, Flavio Percoco  wrote:


Just a heads up!


There are still unmerged patches on this effort. If you have a couple of spare
brain cycles, it'd be awesome to get the patches for the projects where you
have +2 votes merged in.

I'll proceed to abandon remaining patches in 2 weeks from now assuming
that
projects are not interested in having them. As I mentioned in previous
emails,
you're free to update the patches to match your project needs.

Here's the link to see the remaining patches:
https://review.openstack.org/#/q/status:open+topic:project-badges

Thanks a lot,
Flavio


On 12/10/16 14:50 +0200, Flavio Percoco wrote:

Greetings,


One of the common complaints about the existing project organization in
the big
tent is that it's difficult to wrap our heads around the many projects
there
are, their current state (in/out the big tent), their tags, etc.

This information is available on the governance website[0]. Each
official
project team has a page there containing the information related to the
deliverables managed by that team. Unfortunately, I don't think this
page
is
checked often enough and I believe it's not known by everyone.

In the hope that we can make this information clearer to people browsing
the
many repos (most likely on github), I'd like to propose that we include
the
information of each deliverable in the readme file. This information
would be
rendered along with the rest of the readme (at least on Github, which
might not
be our main repo but it's the place most humans go to to check our
projects).

Rather than duplicating this information, I'd like to find a way to just
"include it" in the Readme file. As far as showing the "official" badge
goes, I
believe it'd be quite simple. We can do it the same way CI tags are
exposed when
using travis (just include an image). As for the rest of the tags, it
might
require some extra hacking.
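Including such a badge image in an RST README would then be a one-liner along these lines (the URL shown is illustrative of what governance-hosted badges could look like):

```rst
.. image:: https://governance.openstack.org/badges/example-project.svg
    :target: https://governance.openstack.org/reference/tags/index.html
    :alt: Team and repository tags
```

GitHub renders the image inline with the rest of the README, which is the same mechanism Travis CI badges use.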

So, before I start digging more into this, I wanted to get other
opinions/ideas
on this topic and how we can make this information more evident to the
rest of
the community (and people not as familiar with our processes as some of
us are).

Thanks in advance,
Flavio

[0] http://governance.openstack.org/reference/projects/index.html

--
@flaper87
Flavio Percoco





--
@flaper87
Flavio Percoco







--
Best regards,
Andrey Kurilin.







--
@flaper87
Flavio Percoco






--
Best regards,
Andrey Kurilin.






--
@flaper87
Flavio Percoco



Re: [openstack-dev] [tc][all] Feedback from driver maintainers about future of driver projects

2017-01-10 Thread Flavio Percoco

On 10/01/17 02:33 -0500, Steve Martinelli wrote:

In preparation for the next TC meeting, a survey was sent out to driver
maintainers, here is a summary of the feedback that was gathered.


Thanks for sending this out, Steve.

I've posted a link to this summary on the currently open reviews on this topic
and I'll comment further there.

Cheers,
Flavio


Major observations

==

* Are drivers an important part of OpenStack? YES!

* Discoverability of drivers needs to be fixed immediately.

* It is important to have visibility in a central place of the status of
each driver.

* Both a driver developer and a high-level person at the company
should feel like they're part of something.
* OpenStack should stop treating drivers like second-class citizens. They
want access to the same resources (publish on docs.o.org, config guides,
etc).

* The initial wording about what constitutes a project was never intended
for drivers. Drivers are a part of the project. Driver developers
contribute to OpenStack by creating drivers.

Discoverability

===

* Consensus: It is currently all over the place. A common mechanism to view
all supported drivers is needed.

* Cinder list: http://docs.openstack.org/developer/cinder/drivers.html

* Nova list: http://docs.openstack.org/developer/nova/support-matrix.html

* Stackalytics list: http://stackalytics.openstack.org/report/driverlog

* Opinion: If we intend to use the marketplace (or anywhere on openstack.org)
to list in-tree and out-of-tree drivers, they should have CI results
available as a requirement. A driver that fails CI is not just a vendor
problem, it’s an OpenStack problem, it reflects poorly on OpenStack and the
project.

* Opinion: What constitutes a supported driver, why not list all drivers?

* Opinion: Fixing discoverability can be done independently of governance
changes. We have the option of tabling the governance discussion until we
get the discoverability properly fixed, and see then if we still need to do
anything more.

* Opinion: Between giving full access to vertical resources to driver
teams, and making the marketplace *the* place for learning about OpenStack
drivers, we would have solved at least the biggest portion of the problem
we're facing.

Driver projects - official or not?

==

* Fact: There is desire from some out-of-tree vendors to become ‘official’
OpenStack projects, and gain the benefits of that (access to horizontal
teams).

* Opinion: Let driver projects become official; there should be no 3rd
party CI requirement, that can be a tag.

* Opinion: Do not allow driver projects to become official; that doesn’t
mean they shouldn’t easily be discoverable.

* Opinion: We don't need to open the flood gates of allowing vendors to be
teams in the OpenStack governance to make the vendors' developers happy.

* Fact: This implies being placed under the TC oversight. It is a
significant move that could have unintended side-effects, it is hard to
reverse (kicking out teams we accepted is worse than not including them in
the first place), and our community is divided on the way forward. So we
need to give that question our full attention and not rush the answer.

* Opinion: Consider https://github.com/openstack/driverlog an official
OpenStack project to be listed under governance with a PTL, weekly
meetings, and all that it required to allow the team to be effective in
their mission of keeping the marketplace a trustworthy resource for
learning about OpenStack driver ecosystem.

Driver developers

=

* Opinion: A driver developer that ONLY contributes to vendor specific
driver code should not have the same influence as other OpenStack
developers, voting for PTL, TC, and ATC status.

* Opinion: PTLs should leverage the extra-atcs option in the governance repo

In-tree vs Out-of-tree

==

* Cinder has in-tree drivers, but also has out-of-tree drivers when their
CI is not maintained or when minimum feature requirements are not met. They
are marked as ‘not supported’ and have a single release to get things
working before being moved out-of-tree.

* Ironic has a single out-of-tree repo:
https://github.com/openstack/ironic-staging-drivers -- But also in-tree
https://github.com/openstack/ironic/tree/master/ironic/drivers

* Neutron has all drivers out-of-tree, with project names like:
‘networking-cisco’.

* Many opinions on the “stick-based” approach the cinder team took.

* Opinion: The in-tree vs out-of-tree argument is developer focused.
Out-of-tree drivers have obvious benefits (develop quickly, maintain their
own team, no need for a core to review the patch). But a vendor that is
looking to make sure a driver is supported will not be searching git repos
(goes back to discoverability).

* Opinion: May be worth handling the projects that keep supported drivers
in-tree differently than we handle projects that have everything
out-of-tree.

thanks for