Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-30 Thread Zhenguo Niu
On Wed, May 31, 2017 at 12:20 PM, Ed Leafe  wrote:

> On May 30, 2017, at 9:36 PM, Zhenguo Niu  wrote:
>
> > As placement has not been split out from Nova yet, and there will be
> > users who only want a baremetal cloud, we don't add resources to
> > placement yet; but it would be easy for us to switch to placement to
> > match node types with Mogan flavors.
>
> Placement is a separate service, independent of Nova. It tracks Ironic
> nodes as individual resources, not as a "pretend" VM. The Nova integration
> for selecting an Ironic node as a resource is still being developed, as we
> need to update our view of the mess that is "flavors", but the goal is to
> have a single flavor for each Ironic machine type, rather than the current
> state of flavors pretending that an Ironic node is a VM with certain
> RAM/CPU/disk quantities.
>

Yes, I understand the current efforts to improve baremetal node
scheduling. They don't conflict with Mogan's goal, and once that work is
done, we can share the same scheduling strategy via placement :)

Mogan is a service for a specific group of users who really want a
baremetal resource instead of a generic compute resource. On the API
side, we can expose RAID, advanced partitioning, NIC bonding, firmware
management, and other baremetal-specific capabilities to users. And
unlike Nova's host-based availability zones, host aggregates, and server
groups (Ironic nodes all share the same host), Mogan makes it possible to
divide baremetal nodes into such groups and to make scheduling rack-aware
for affinity and anti-affinity.
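To make the scheduling idea above concrete, here is an illustrative sketch (not Mogan's actual code; all names are hypothetical) of matching a baremetal flavor to Ironic nodes by resource class, with rack-level anti-affinity layered on top:

```python
# Illustrative only: a toy scheduler that matches a flavor's resource
# class against Ironic-style nodes and honors rack anti-affinity.

def schedule(nodes, flavor_resource_class, affinity_group=None):
    """Pick an available node whose resource class matches the flavor.

    With rack metadata on each node, the same loop can implement the
    rack-aware (anti-)affinity Mogan describes.
    """
    for node in nodes:
        if node["resource_class"] != flavor_resource_class:
            continue
        if not node["available"]:
            continue
        if affinity_group and node["rack"] in affinity_group["excluded_racks"]:
            continue  # anti-affinity: skip racks already used by the group
        return node
    return None

nodes = [
    {"name": "node-1", "resource_class": "CUSTOM_GOLD", "rack": "r1", "available": False},
    {"name": "node-2", "resource_class": "CUSTOM_GOLD", "rack": "r1", "available": True},
    {"name": "node-3", "resource_class": "CUSTOM_GOLD", "rack": "r2", "available": True},
]

# Anti-affinity: the server group already has an instance in rack r1.
group = {"excluded_racks": {"r1"}}
chosen = schedule(nodes, "CUSTOM_GOLD", affinity_group=group)
print(chosen["name"])  # node-3
```

This is only a sketch of the matching rule; a real scheduler would also weigh and filter candidates.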


>
> -- Ed Leafe
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-30 Thread Ed Leafe
On May 30, 2017, at 9:36 PM, Zhenguo Niu  wrote:

> As placement has not been split out from Nova yet, and there will be users
> who only want a baremetal cloud, we don't add resources to placement yet;
> but it would be easy for us to switch to placement to match node types
> with Mogan flavors.

Placement is a separate service, independent of Nova. It tracks Ironic nodes as 
individual resources, not as a "pretend" VM. The Nova integration for selecting 
an Ironic node as a resource is still being developed, as we need to update our 
view of the mess that is "flavors", but the goal is to have a single flavor for 
each Ironic machine type, rather than the current state of flavors pretending 
that an Ironic node is a VM with certain RAM/CPU/disk quantities.

-- Ed Leafe









Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-30 Thread Zhenguo Niu
On Wed, May 31, 2017 at 10:20 AM, Ed Leafe  wrote:

> On May 30, 2017, at 9:08 PM, Zhenguo Niu  wrote:
>
> > There would be a collision if Nova and Mogan consumed the same Ironic
> > node cluster, as both of them would see all the available node resources.
> > So if someone wants to choose Mogan for baremetal compute management, the
> > recommended deployment is Mogan+Ironic for baremetal and Nova+libvirt for
> > VMs; this way we treat baremetal and VMs as different compute resources.
> > In a cloud with both VMs and baremetal, it's clearer to have different
> > sets of APIs to manage them if users really care about which resources
> > they get rather than just the performance. We also created a Mogan
> > Horizon plugin which adds a separate baremetal servers panel[1].
>
> So Mogan does not use the placement service for tracking resources?
>

As placement has not been split out from Nova yet, and there will be users
who only want a baremetal cloud, we don't add resources to placement yet;
but it would be easy for us to switch to placement to match node types
with Mogan flavors.


>
>
> -- Ed Leafe
>


-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-30 Thread Ed Leafe
On May 30, 2017, at 9:08 PM, Zhenguo Niu  wrote:

> There would be a collision if Nova and Mogan consumed the same Ironic node
> cluster, as both of them would see all the available node resources. So if
> someone wants to choose Mogan for baremetal compute management, the
> recommended deployment is Mogan+Ironic for baremetal and Nova+libvirt for
> VMs; this way we treat baremetal and VMs as different compute resources. In
> a cloud with both VMs and baremetal, it's clearer to have different sets
> of APIs to manage them if users really care about which resources they get
> rather than just the performance. We also created a Mogan Horizon plugin
> which adds a separate baremetal servers panel[1].

So Mogan does not use the placement service for tracking resources?


-- Ed Leafe









Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-05-30 Thread Zhenguo Niu
Thanks Ruby for bringing this up!

There would be a collision if Nova and Mogan consumed the same Ironic node
cluster, as both of them would see all the available node resources. So if
someone wants to choose Mogan for baremetal compute management, the
recommended deployment is Mogan+Ironic for baremetal and Nova+libvirt for
VMs; this way we treat baremetal and VMs as different compute resources.
In a cloud with both VMs and baremetal, it's clearer to have different
sets of APIs to manage them if users really care about which resources
they get rather than just the performance. We also created a Mogan Horizon
plugin which adds a separate baremetal servers panel[1].

But for users who don't care whether they get a VM or a baremetal server,
and just want to ask OpenStack for a specific flavor of compute resource
to run their workloads, there is definitely no need to deploy Mogan to
split baremetal out into a separate compute resource type in order to
expose full baremetal capabilities.


[1] https://pasteboard.co/cJ889Y7IA.png


On Mon, May 29, 2017 at 9:55 PM, Loo, Ruby  wrote:

> Hi Zhenguo (and others),
>
>
>
> Is there a description/email thread/documentation about how Mogan and Nova
> co-exist in the same cloud? In particular, will it be possible for Mogan
> and Nova (with the Ironic driver) to run together? Is this something that
> we will recommend, not recommend, or not mention? Because I don't see how
> the end user will know to issue a Mogan command to get a baremetal server
> vs. a nova boot command to get a baremetal server. And/or does anyone
> envision that Horizon will hide all that from the user somehow?
>
>
>
> --ruby
>
>
>
> *From: *Zhenguo Niu 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, May 25, 2017 at 10:41 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [OSC][ironic][mogan] Can we share the same
> keyword 'baremetal'?
>
>
>
> 
>
>
>
> As I understand it, a baremetal instance in Nova is a "special virtual
> machine" (with raw performance). Users claim the instance by specifying a
> flavor with 'vcpus', 'memory', and 'root_gb' rather than real hardware
> specs (CPU model/cores, hard drive type/count, NIC type/count), and then
> they get an instance with properties like 'vm_state' and other "virtual"
> attributes. Since baremetal in Nova uses the same model and the same set
> of APIs designed for VMs, even end users can't easily tell which instance
> is a baremetal server, so maybe it's fair to call that baremetal server a
> special VM instance.
>
>
>
> So, yes, the end user does actually know that there is a difference
> between getting a baremetal instance via Mogan versus via Nova :)
>
>
>


-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-30 Thread Jay Pipes

On 05/30/2017 05:07 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2017-05-30 14:52:01 -0400:

Sorry for the delay in getting back on this... comments inline.

On 05/18/2017 06:13 PM, Adrian Turjak wrote:

Hello fellow OpenStackers,

For the last while I've been looking at options for multi-region
multi-master Keystone, as well as multi-master for other services I've
been developing and one thing that always came up was there aren't many
truly good options for a true multi-master backend.


Not sure whether you've looked into Galera? We had a geo-distributed
12-site Galera cluster servicing our Keystone assignment/identity
information WAN-replicated. Worked a charm for us at AT&T. Much easier
to administer than master-slave replication topologies and the
performance (yes, even over WAN links) of the ws-rep replication was
excellent. And yes, I'm aware Galera doesn't have complete snapshot
isolation support, but for Keystone's workloads (heavy, heavy read, very
little write) it is indeed ideal.



This has not been my experience.

We had a 3 site, 9 node global cluster and it was _extremely_ sensitive
to latency. We'd lose even read ability whenever we had a latency storm
due to quorum problems.

Our sites were London, Dallas, and Sydney, so it was pretty common for
there to be latency between any of them.

I lost track of it after some reorgs, but I believe the solution was
to just have a single site 3-node galera for writes, and then use async
replication for reads. We even helped land patches in Keystone to allow
split read/write host configuration.


Interesting, thanks for the info. Can I ask: were you using the Galera 
cluster for read-heavy data like Keystone identity/assignment storage, 
or did you have write-heavy data mixed in (like Keystone's old UUID 
token storage)?


It should be noted that CockroachDB's documentation specifically calls 
out that it is extremely sensitive to latency due to the way it measures 
clock skew... so might not be suitable for WAN-separated clusters?
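For reference, that latency sensitivity is tunable but bounded: CockroachDB exposes a maximum tolerated clock offset on startup. A hedged illustration (flag names per CockroachDB's docs at the time; paths and join targets are made up):

```shell
# --max-offset bounds the clock skew a node will tolerate relative to
# its peers; a node that drifts beyond it shuts itself down, which is
# why WAN-separated clusters with loose time sync are risky.
cockroach start --max-offset=500ms --store=/data/crdb --join=node1,node2,node3
```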


Best,
-jay



[openstack-dev] [kuryr]

2017-05-30 Thread Zainal Abidin
Dear OpenStack Dev Kuryr Team,

I'm Zainal Abidin from Jakarta, Indonesia. We have questions regarding
connectivity from Kubernetes to OpenStack. Our environment is VMware
vSphere 5.5, on which we established CoreOS + Kubernetes as our
microservices Docker deployment. We use CoreOS etcd v2.3.7 and k8s v1.5.4,
and we deploy Percona XtraDB Cluster with ProxySQL in Kubernetes. Percona
XtraDB Cluster uses port 3306 and is registered in ProxySQL via ports 3306
and 6032. The only way we can avoid MySQL table locking is to connect to
ProxySQL with something like
kubectl exec -it [proxysql-pod-name] -- mysql -h [k8s cluster-ip] -P 3306 -u root -p
but that is only possible from inside the Kubernetes cluster. We have many
apps external to Kubernetes which can't use the k8s cluster IP, so we
deployed OpenStack Liberty with 3 nodes on Ubuntu 14.04 LTS: node1
(keystone, neutron), node2 (storage) and node3 (nova). Our questions:

1. We need your help to connect CoreOS + k8s to OpenStack Liberty.
2. How do we enable Kuryr in OpenStack Liberty?
3. We need an external load balancer besides the k8s cluster IP for
external access to Percona XtraDB/ProxySQL, to load-balance incoming TCP
DML or DDL and avoid MySQL table locking across the Percona XtraDB
database instances.
4. Some documentation mentions that we should use the cloud-provider
parameter for kube-api.service and kubelet.service, but we are not sure
how to do that.

Very sorry if our questions are not clear or not relevant; we would be
greatly thankful if your team could help us in any way possible. Thank you
for your time; looking forward to your reply.

Thank you,

Best regards,

Zainal abidin


[openstack-dev] [tc][kolla][stable][security][infra][all] guidelines for managing releases of binary artifacts

2017-05-30 Thread Doug Hellmann
Based on two other recent threads [1][2] and some discussions on
IRC, I have written up some guidelines [3] that try to address the
concerns I have with us publishing binary artifacts while still
allowing the kolla team and others to move ahead with the work they
are trying to do.

I would appreciate feedback about whether these would complicate
builds or make them impossible, as well as whether folks think they
go far enough to mitigate the risks described in those email threads.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116677.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117282.html
[3] https://review.openstack.org/#/c/469265/



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2017-05-30 16:11:41 -0500:
> On 05/30/2017 04:08 PM, Emilien Macchi wrote:
> > On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
> >  wrote:
> >> We have a problem in requirements that projects that don't have the
> >> cycle-with-intermediary release model (most of the cycle-with-milestones
> >> model) don't get integrated with requirements until the cycle is fully
> >> done.  This causes a few problems.
> >>
> >> * These projects don't produce a consumable release for requirements
> >> until end of cycle (which does not accept beta releases).
> >>
> >> * The former causes old requirements to be kept in place, meaning caps,
> >> exclusions, etc. are being kept, which can cause conflicts.
> >>
> >> * Keeping the old version in requirements means that cross dependencies
> >> are not tested with updated versions.
> >>
> >> This has hit us with the mistral and tripleo projects particularly
> >> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
> >> mistral sqlalchemy updates.
> >>
> >> [mistral]
> >> mistral - blocking sqlalchemy - milestones
> >>
> >> [tripleo]
> >> os-refresh-config - blocking pbr - milestones
> >> os-apply-config - blocking pbr - milestones
> >> os-collect-config - blocking pbr - milestones
> > 
> > These are cycle-with-milestones, like os-net-config for example,
> > which wasn't mentioned in this email. They have the same releases as
> > os-net-config, so I'm confused about why these 3 cause an issue; I
> > probably missed something.
> > 
> > Anyway, I'm happy to change os-*-config (from TripleO) to be
> > cycle-with-intermediary. Quick question though, which tag would you
> > like to see, regarding what we already did for pike-1?
> > 
> > Thanks,
> > 
> 
> Pike is fine, as it's just master that has this issue.  The problem is
> that the latest release blocks the pbr version in upper-constraints from
> being co-installable.

The issue is that even with the beta releases we publish at
milestones, new versions of these projects won't be installed in
gate jobs, because we would have to give pip special instructions to
allow pre-releases and we, as a rule, do not give it those instructions.
The result is that we need anything that is going to be installed
via pip to be releasable at any point in the cycle, to address
dependency issues like the one we're dealing with here, and that means
changing the model back to cycle-with-intermediary.
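The pip behavior Doug describes can be sketched with a toy resolver (this is not pip's real code; it only illustrates that PEP 440 pre-release versions are skipped unless explicitly allowed):

```python
# Toy illustration: by default pip ignores pre-release versions
# (PEP 440 a/b/rc suffixes) unless told to allow them (pip's --pre).
import re

def latest(versions, allow_prerelease=False):
    """Return the newest version string, skipping pre-releases by default."""
    def is_prerelease(v):
        # crude PEP 440 check: ends in aN / bN / rcN
        return re.search(r"(a|b|rc)\d*$", v) is not None

    candidates = [v for v in versions if allow_prerelease or not is_prerelease(v)]
    # naive numeric sort on the digit segments; fine for this toy data
    def key(v):
        return [int(p) for p in re.findall(r"\d+", v)]
    return max(candidates, key=key) if candidates else None

published = ["2.9.0", "3.0.0b1"]  # last full release plus a milestone beta
print(latest(published))                         # 2.9.0 -- the beta is skipped
print(latest(published, allow_prerelease=True))  # 3.0.0b1
```

So a milestone beta never satisfies the gate's dependency resolution, which is why only full releases help here.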

This isn't something I foresaw when we talked about making all of
the TripleO components use a consistent model in the past. I'm sorry
for the oversight.

Doug



[openstack-dev] [nova][vmware] NSX CI seems to be at 100% fail

2017-05-30 Thread Matt Riedemann

I've reported a bug here:

https://bugs.launchpad.net/nova/+bug/1694543

--

Thanks,

Matt



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Matthew Thode
On 05/30/2017 04:08 PM, Emilien Macchi wrote:
> On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
>  wrote:
>> We have a problem in requirements that projects that don't have the
>> cycle-with-intermediary release model (most of the cycle-with-milestones
>> model) don't get integrated with requirements until the cycle is fully
>> done.  This causes a few problems.
>>
>> * These projects don't produce a consumable release for requirements
>> until end of cycle (which does not accept beta releases).
>>
>> * The former causes old requirements to be kept in place, meaning caps,
>> exclusions, etc. are being kept, which can cause conflicts.
>>
>> * Keeping the old version in requirements means that cross dependencies
>> are not tested with updated versions.
>>
>> This has hit us with the mistral and tripleo projects particularly
>> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
>> mistral sqlalchemy updates.
>>
>> [mistral]
>> mistral - blocking sqlalchemy - milestones
>>
>> [tripleo]
>> os-refresh-config - blocking pbr - milestones
>> os-apply-config - blocking pbr - milestones
>> os-collect-config - blocking pbr - milestones
> 
> These are cycle-with-milestones, like os-net-config for example,
> which wasn't mentioned in this email. They have the same releases as
> os-net-config, so I'm confused about why these 3 cause an issue; I
> probably missed something.
> 
> Anyway, I'm happy to change os-*-config (from TripleO) to be
> cycle-with-intermediary. Quick question though, which tag would you
> like to see, regarding what we already did for pike-1?
> 
> Thanks,
> 

Pike is fine, as it's just master that has this issue.  The problem is
that the latest release blocks the pbr version in upper-constraints from
being co-installable.
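The co-installability problem above can be sketched as a version check (a hedged simplification of real PEP 440/constraint handling, not the requirements tooling itself):

```python
# Hedged sketch of the conflict: a project's own requirement cap
# (e.g. pbr<3.0.0 in its last release) collides with the single
# version pinned in upper-constraints, so the two can't be installed
# together.
def conflicts(requirement_cap, pinned):
    """True if the pinned (constrained) version violates a '<' cap.

    Versions are compared as dotted integer tuples -- a simplification
    of full PEP 440 semantics.
    """
    def as_tuple(v):
        return tuple(int(p) for p in v.split("."))
    return not (as_tuple(pinned) < as_tuple(requirement_cap))

# e.g. a release capping pbr<3.0.0 while upper-constraints pins pbr 3.0.0:
print(conflicts("3.0.0", "3.0.0"))  # True  -> not co-installable
print(conflicts("3.0.0", "2.1.0"))  # False -> co-installable
```

Until the project cuts a new release without the cap, the constraint pin and the cap cannot both be satisfied.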

>> [nova]
>> os-vif - blocking pbr - intermediary
>>
>> [horizon]
>> django-openstack-auth - blocking django - intermediary
>>
>>
>> So, here's what needs doing.
>>
>> Those projects that are already using the cycle-with-intermediary model
>> should just do a release.
>>
>> For those that are using cycle-with-milestones, you will need to change
>> to the cycle-with-intermediary model, and do a full release, both can be
>> done at the same time.
>>
>> If anyone has any questions or wants clarifications this thread is good,
>> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>>
>> --
>> Matthew Thode (prometheanfire)
>>
>>
>>
> 
> 
> 


-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Emilien Macchi
On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
 wrote:
> We have a problem in requirements that projects that don't have the
> cycle-with-intermediary release model (most of the cycle-with-milestones
> model) don't get integrated with requirements until the cycle is fully
> done.  This causes a few problems.
>
> * These projects don't produce a consumable release for requirements
> until end of cycle (which does not accept beta releases).
>
> * The former causes old requirements to be kept in place, meaning caps,
> exclusions, etc. are being kept, which can cause conflicts.
>
> * Keeping the old version in requirements means that cross dependencies
> are not tested with updated versions.
>
> This has hit us with the mistral and tripleo projects particularly
> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
> mistral sqlalchemy updates.
>
> [mistral]
> mistral - blocking sqlalchemy - milestones
>
> [tripleo]
> os-refresh-config - blocking pbr - milestones
> os-apply-config - blocking pbr - milestones
> os-collect-config - blocking pbr - milestones

These are cycle-with-milestones, like os-net-config for example,
which wasn't mentioned in this email. They have the same releases as
os-net-config, so I'm confused about why these 3 cause an issue; I
probably missed something.

Anyway, I'm happy to change os-*-config (from TripleO) to be
cycle-with-intermediary. Quick question though, which tag would you
like to see, regarding what we already did for pike-1?

Thanks,

> [nova]
> os-vif - blocking pbr - intermediary
>
> [horizon]
> django-openstack-auth - blocking django - intermediary
>
>
> So, here's what needs doing.
>
> Those projects that are already using the cycle-with-intermediary model
> should just do a release.
>
> For those that are using cycle-with-milestones, you will need to change
> to the cycle-with-intermediary model, and do a full release, both can be
> done at the same time.
>
> If anyone has any questions or wants clarifications this thread is good,
> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>
> --
> Matthew Thode (prometheanfire)
>
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-30 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2017-05-30 14:52:01 -0400:
> Sorry for the delay in getting back on this... comments inline.
> 
> On 05/18/2017 06:13 PM, Adrian Turjak wrote:
> > Hello fellow OpenStackers,
> > 
> > For the last while I've been looking at options for multi-region
> > multi-master Keystone, as well as multi-master for other services I've
> > been developing and one thing that always came up was there aren't many
> > truly good options for a true multi-master backend.
> 
> Not sure whether you've looked into Galera? We had a geo-distributed 
> 12-site Galera cluster servicing our Keystone assignment/identity 
> information WAN-replicated. Worked a charm for us at AT&T. Much easier 
> to administer than master-slave replication topologies and the 
> performance (yes, even over WAN links) of the ws-rep replication was 
> excellent. And yes, I'm aware Galera doesn't have complete snapshot 
> isolation support, but for Keystone's workloads (heavy, heavy read, very 
> little write) it is indeed ideal.
> 

This has not been my experience.

We had a 3 site, 9 node global cluster and it was _extremely_ sensitive
to latency. We'd lose even read ability whenever we had a latency storm
due to quorum problems.

Our sites were London, Dallas, and Sydney, so it was pretty common for
there to be latency between any of them.

I lost track of it after some reorgs, but I believe the solution was
to just have a single site 3-node galera for writes, and then use async
replication for reads. We even helped land patches in Keystone to allow
split read/write host configuration.
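The split read/write setup Clint mentions maps onto oslo.db's `connection` / `slave_connection` options. An illustrative keystone.conf fragment (hostnames and credentials are made up):

```ini
# Writes go to the single-site Galera cluster; reads go to a local
# async replica, via oslo.db's split host configuration.
[database]
connection = mysql+pymysql://keystone:secret@galera-writer.dallas/keystone
slave_connection = mysql+pymysql://keystone:secret@replica-local/keystone
```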



Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 21) - Devmode OVB, RDO Cloud and config management

2017-05-30 Thread Emilien Macchi
On Fri, May 26, 2017 at 4:58 PM, Attila Darazs  wrote:
> If the topics below interest you and you want to contribute to the
> discussion, feel free to join the next meeting:
>
> Time: Thursdays, 14:30-15:30 UTC
> Place: https://bluejeans.com/4113567798/
>
> Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
>
> = Periodic & Promotion OVB jobs Quickstart transition =
>
> We had a lively technical discussion this week. Gabriele's work on
> transitioning the periodic & promotion jobs is nearly complete, only needs
> reviews at this point. We won't set a transition date for these as it is not
> really impacting folks long term if these jobs are failing for a few days at
> this point. We'll transition when everything is ready.
>
> = RDO Cloud & Devmode OVB =
>
> We continued planning the introduction of RDO Cloud for the upstream OVB
> jobs. We're still at the point of account setup.
>
> The new OVB based devmode seems to be working fine. If you have access to
> RDO Cloud, and haven't tried it already, give it a go. It can set up a full
> master branch based deployment within 2 hours, including any pending changes
> baked into the under & overcloud.
>
> When you have your account info sourced, all it takes is
>
> $ ./devmode.sh --ovb
>
> from your tripleo-quickstart repo! See here[1] for more info.
>
> = Container jobs on nodepool multinode =
>
> Gabriele is stuck with these new Quickstart jobs. We would need a deep dive
> into debugging and using the container based TripleO deployments. Let us
> know if you can do one!

I've pinged some folks around, let's see if someone volunteers to make it.

> = How to handle Quickstart configuration =
>
> This is a never-ending topic, on which we managed to spend a good chunk of time
> this week as well. Where should we put various configs? Should we duplicate
> a bunch of variables or cut them into small files?
>
> For now it seems we can agree on 3 levels of configuration:
>
> * nodes config (i.e. how many nodes we want for the deployment)
> * environment + provisioner settings (i.e. you want to run on rdocloud with
> ovb, or on a local machine with libvirt)
> * featureset (a certain set of features enabled/disabled for the jobs, like
> pacemaker and ssl)
>
> This seems rather straightforward until we encounter exceptions. We're going
> to figure out the edge cases and rework the current configs to stick to the
> rules.
>
>
> That's it for this week. Thank you for reading the summary.
>
> Best regards,
> Attila
>
> [1] http://docs.openstack.org/developer/tripleo-quickstart/devmode-ovb.html
>



-- 
Emilien Macchi



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Matthew Thode
On 05/30/2017 02:51 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2017-05-30 13:36:02 -0500:
>> We have a problem in requirements that projects that don't have the
>> cycle-with-intermediary release model (most of the cycle-with-milestones
>> model) don't get integrated with requirements until the cycle is fully
>> done.  This causes a few problems.
>>
>> * These projects don't produce a consumable release for requirements
>> until end of cycle (which does not accept beta releases).
>>
>> * The former causes old requirements to be kept in place, meaning caps,
>> exclusions, etc. are being kept, which can cause conflicts.
>>
>> * Keeping the old version in requirements means that cross dependencies
>> are not tested with updated versions.
>>
>> This has hit us with the mistral and tripleo projects particularly
>> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
>> mistral sqlalchemy updates.
>>
>> [mistral]
>> mistral - blocking sqlalchemy - milestones
>>
>> [tripleo]
>> os-refresh-config - blocking pbr - milestones
>> os-apply-config - blocking pbr - milestones
>> os-collect-config - blocking pbr - milestones
>>
>> [nova]
>> os-vif - blocking pbr - intermediary
>>
>> [horizon]
>> django-openstack-auth - blocking django - intermediary
>>
>>
>> So, here's what needs doing.
>>
>> Those projects that are already using the cycle-with-intermediary model
>> should just do a release.
>>
>> For those that are using cycle-with-milestones, you will need to change
>> to the cycle-with-intermediary model, and do a full release, both can be
>> done at the same time.
>>
>> If anyone has any questions or wants clarifications this thread is good,
>> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>>
> 
> We probably want to add a review criteria to the requirements list that
> projects using the cycle-with-milestone model are not added to the list
> to avoid this issue in the future.
> 
> Doug
> 
> 

Good idea, added in https://review.openstack.org/469234

-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2017-05-30 13:36:02 -0500:
> We have a problem in requirements: projects that don't have the
> cycle-with-intermediary release model (mostly those on the
> cycle-with-milestones model) don't get integrated with requirements
> until the cycle is fully done. This causes a few problems.
> 
> * These projects don't produce a consumable release for requirements
> until the end of the cycle (and requirements does not accept beta
> releases).
> 
> * The former causes old requirements to be kept in place, meaning caps,
> exclusions, etc. are being kept, which can cause conflicts.
> 
> * Keeping the old version in requirements means that cross dependencies
> are not tested with updated versions.
> 
> This has hit us particularly with the mistral and tripleo projects
> (tagged in the title). They disallow pbr-3.0.0 and, in the case of
> mistral, sqlalchemy updates.
> 
> [mistral]
> mistral - blocking sqlalchemy - milestones
> 
> [tripleo]
> os-refresh-config - blocking pbr - milestones
> os-apply-config - blocking pbr - milestones
> os-collect-config - blocking pbr - milestones
> 
> [nova]
> os-vif - blocking pbr - intermediary
> 
> [horizon]
> django-openstack-auth - blocking django - intermediary
> 
> 
> So, here's what needs doing.
> 
> Those projects that are already using the cycle-with-intermediary model
> should just do a release.
> 
> For those that are using cycle-with-milestones, you will need to change
> to the cycle-with-intermediary model and do a full release; both can be
> done at the same time.
> 
> If anyone has any questions or wants clarifications this thread is good,
> or I'm on irc as prometheanfire in the #openstack-requirements channel.
> 

We probably want to add a review criterion to the requirements list
stating that projects using the cycle-with-milestone model are not
added to the list, to avoid this issue in the future.

Doug



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Jay Pipes

On 05/30/2017 02:36 PM, Matthew Thode wrote:

[nova]
os-vif - blocking pbr - intermediary


Sorry for the delay. We'll fix this up today. We'll need to cut a new 
release of os-traits too given a bug we ran into today...


Thanks for keeping us honest!

Best,
-jay



Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-30 Thread Jay Pipes

Sorry for the delay in getting back on this... comments inline.

On 05/18/2017 06:13 PM, Adrian Turjak wrote:

Hello fellow OpenStackers,

For the last while I've been looking at options for multi-region
multi-master Keystone, as well as multi-master for other services I've
been developing and one thing that always came up was there aren't many
truly good options for a true multi-master backend.


Not sure whether you've looked into Galera? We had a geo-distributed 
12-site Galera cluster WAN-replicating our Keystone assignment/identity 
information. Worked a charm for us at AT&T. Much easier 
to administer than master-slave replication topologies, and the 
performance (yes, even over WAN links) of the wsrep replication was 
excellent. And yes, I'm aware Galera doesn't have complete snapshot 
isolation support, but for Keystone's workloads (heavy, heavy read, very 
little write) it is indeed ideal.


(BTW, the cluster setup and node-join operations for CockroachDB and 
Galera are virtually identical...)


Recently I've been looking at Cockroachdb and while I haven't had the chance to do any
testing I'm curious if anyone else has looked into it. It sounds like
the perfect solution, and if it can be proved to be stable enough it
could solve a lot of problems.

So, specifically in the realm of Keystone, since we are using sqlalchemy
we already have Postgresql support, and since Cockroachdb does talk
Postgres it shouldn't be too hard to back Keystone with it.


OK, now I understand why you didn't consider Galera :) Sounds like 
you're pinned to PostgreSQL for your RDBMS needs...


For the record, CockroachDB doesn't "support PostgreSQL". It supports 
the binary pgsql client/server protocol and, oddly, a view of the 
internal system information in PostgreSQL's pg_catalog schema (though it 
also publishes the standard information_schema schema).


The actual *SQL* that CockroachDB uses is definitely not PostgreSQL's 
variant of SQL. CockroachDB's version of SQL is actually pretty close to 
MySQL's version of SQL in a number of ways:


 * EXPLAIN
 * SHOW (TABLES, COLUMNS, CREATE TABLE, DATABASES, etc)
 * RENAME (TABLE, DATABASE, COLUMN, etc)

In other ways, CockroachDB's version of SQL is more like PostgreSQL's 
including:


 * UPSERT (MySQL uses the INSERT ... ON DUPLICATE KEY UPDATE construct)
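To make the dialect comparison concrete, here is a small runnable sketch of "upsert" semantics, using the stdlib sqlite3 module as a stand-in (SQLite spells it with the PostgreSQL-style ON CONFLICT clause; CockroachDB spells the same idea UPSERT, and MySQL uses INSERT ... ON DUPLICATE KEY UPDATE). The table and values here are made up for illustration:

```python
import sqlite3

# Stand-in demo of upsert behaviour; needs SQLite >= 3.24 for the
# ON CONFLICT clause (bundled with most current Python builds).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO project VALUES ('abc', 'old-name')")

# A second write with the same primary key updates the row instead of
# raising a uniqueness error -- the behaviour that UPSERT,
# ON DUPLICATE KEY UPDATE, and ON CONFLICT all provide.
conn.execute(
    "INSERT INTO project VALUES ('abc', 'new-name') "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name"
)

name = conn.execute(
    "SELECT name FROM project WHERE id = 'abc'"
).fetchone()[0]
print(name)  # new-name
```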

At that stage you have a Keystone DB that could be multi-region, multi-master,
consistent, and mostly impervious to disaster. Is that not the holy
grail for a service like Keystone? Combine that with fernet tokens and
suddenly Keystone becomes a service you can't really kill, and can
mostly forget about.

I'm welcome to being called mad, but I am curious if anyone has looked
at this. I'm likely to do some tests at some stage regarding this,
because I'm hoping this is the solution I've been hoping to find for
quite a long time.

Further reading:
https://www.cockroachlabs.com/
https://github.com/cockroachdb/cockroach
https://www.cockroachlabs.com/docs/build-a-python-app-with-cockroachdb-sqlalchemy.html


Another link for folks to read:

https://jepsen.io/analyses/cockroachdb-beta-20160829

I think it's worth investigating and thoroughly testing CockroachDB. 
But, it's pretty new, frankly, and I wouldn't bet a production system on 
it. Also, please do follow up on the performance of CockroachDB, which 
as aphyr notes in the jepsen link above, was far, far below other RDBMS 
that have been tested.


Best,
-jay



[openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-05-30 Thread Matthew Thode
We have a problem in requirements: projects that don't have the
cycle-with-intermediary release model (mostly those on the
cycle-with-milestones model) don't get integrated with requirements
until the cycle is fully done. This causes a few problems.

* These projects don't produce a consumable release for requirements
until the end of the cycle (and requirements does not accept beta
releases).

* The former causes old requirements to be kept in place, meaning caps,
exclusions, etc. are being kept, which can cause conflicts.

* Keeping the old version in requirements means that cross dependencies
are not tested with updated versions.

This has hit us particularly with the mistral and tripleo projects
(tagged in the title). They disallow pbr-3.0.0 and, in the case of
mistral, sqlalchemy updates.

[mistral]
mistral - blocking sqlalchemy - milestones

[tripleo]
os-refresh-config - blocking pbr - milestones
os-apply-config - blocking pbr - milestones
os-collect-config - blocking pbr - milestones

[nova]
os-vif - blocking pbr - intermediary

[horizon]
django-openstack-auth - blocking django - intermediary


So, here's what needs doing.

Those projects that are already using the cycle-with-intermediary model
should just do a release.

For those that are using cycle-with-milestones, you will need to change
to the cycle-with-intermediary model and do a full release; both can be
done at the same time.

If anyone has any questions or wants clarifications this thread is good,
or I'm on irc as prometheanfire in the #openstack-requirements channel.

-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [tc] [all] TC Report 22

2017-05-30 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:
> 
> There's no TC meeting this week. Thierry did a second weekly status
> report[^1]. There will be a TC meeting next week (Tuesday, 6th June
> at 20:00 UTC) with the intention of discussing the proposals about
> postgreSQL (of which more below). Here are my comments on pending TC
> activity that either seems relevant or needs additional input.
> 
> [^1]: 
> 
> 
> # Pending Stuff
> 
> ## Queens Community Goals
> 
> Proposals for community-wide goals[^2] for the Queens cycle have started
> coming in. These are changes which, if approved, all projects are
> expected to satisfy. In Pike the goals are:
> 
> * [all control plane APIs deployable as WSGI 
> apps](https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
> * [supporting Python 
> 3.5](https://governance.openstack.org/tc/goals/pike/python35.html)
> 
> The full suite of goals for Queens has not yet been decided.
> Identifying goals is a community-wide process. Your ideas are
> wanted.
> 
> ### Split Tempest Plugins into Separate Repos
> 
> This goal for Queens is already approved. Any project which manages
> its tempest tests as a plugin should move those tests into a
> separate repo. The goal is at[^3]. The review for it[^4] has further
> discussion on why it is a good idea.
> 
> The original goal did not provide instructions on how to do it.
> There is a proposal in progress[^5] to add a link to an etherpad[^6]
> with instructions.
> 
> Note that this goal only applies to tempest _plugins_. Projects
> which have their tests in the core of tempest have nothing to do. I
> wonder if it wouldn't be more fair for all projects to use plugins
> for their tempest tests?

All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.

[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html

> 
> ### Two Proposals on Improving Version Discovery
> 
> Monty has been writing API-WG guidelines about how to properly use
> the service catalog and do version discovery[^7]. Building from that
> he's proposed two new goals:
> 
> * [Add Queens goal to add collection 
> links](https://review.openstack.org/#/c/468436/)
> * [Add Queens goal for full discovery 
> alignment](https://review.openstack.org/#/c/468437/)
> 
> The first is a small step in the direction of improving version
> discovery, the second is all the steps to getting all projects
> supporting proper version discovery, in case we are feeling extra
> capable.
> 
> Both of these need review from project contributors, first to see if there
> is agreement on the strategies, second to see if they are
> achievable.
> 
> [^2]: 
> [^3]: 
> 
> [^4]: 
> [^5]: 
> [^6]: 
> [^7]: 
> 
> ## etcd as a base service
> 
> etcd has been proposed as a base service[^8]. A "base" service is
> one that can be expected to be present in any OpenStack
> deployment. The hope is that by declaring this we can finally
> bootstrap the distributed locking, group membership and service
> liveness functionality that we've been talking about for years. If
> you want this please say so on the review. You want this.
> 
> If for some reason you _don't_ want this, then you'll want to
> register your reasons as soon as possible. The review will merge
> soon.
> 
> [^8]: 
> 
> ## openstack-tc IRC channel
> 
> With the decrease in the number of TC meetings on IRC there's a plan
> to have [office hours](https://review.openstack.org/#/c/467256/)
> where some significant chunk of the TC will be available. Initially
> this was going to be in the `#openstack-dev` channel but in the
> hopes of making the logs readable after the fact, a [new channel is
> proposed](https://review.openstack.org/#/c/467386/).
> 
> This is likely to pass soon, unless objections are raised. If you
> have some, please raise them on the review.
> 
> ## postgreSQL
> 
> The discussions around postgreSQL have yet to resolve. See [last week's
> report](https://anticdent.org/tc-report-21.html) for additional
> information. Because things are blocked and there have been some
> expressions of review fatigue there will be, as mentioned above, a
> TC meeting next week on 6th June, 20:00 UTC. Show up if you have an
> opinion if or how postgreSQL should or should not have a continuing presence in OpenStack.

[openstack-dev] [ironic] this week's priorities and subteam reports

2017-05-30 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. booting from volume:
1.1. the next patch: https://review.openstack.org/#/c/406290
2. driver composition documentation:
2.1. explaining the defaults: https://review.openstack.org/466741
2.2. ipmi docs update: https://review.openstack.org/466734
3. OSC commands for ironic driver-related commands
3.1. finish and review the spec: https://review.openstack.org/#/c/439907/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 15 May 2017 and 22 May 2017)
- Ironic: 243 bugs (-9) + 251 wishlist items. 24 new (+3), 191 in progress 
(-9), 0 critical, 26 high and 32 incomplete
- Inspector: 12 bugs + 28 wishlist items. 3 new (+2), 12 in progress (-2), 0 
critical, 1 high (-1) and 3 incomplete
- Nova bugs with Ironic tag: 12 (+1). 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- mjturek is working on getting together devstack config updates/script 
changes in order to support this configuration. No updates.
- hshiina uploaded some devstack patches [see etherpad]
- hshiina is looking into Nova-side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic patches:
- https://review.openstack.org/#/c/406290 - wiring in attach/detach operations
- https://review.openstack.org/#/c/413324 - iPXE template
- https://review.openstack.org/#/c/454243/ - WIP logic changes for the 
deployment process; tenant network separation introduced some additional 
complexity, quick conceptual feedback requested.
- https://review.openstack.org/#/c/214586/ - volume connection information 
REST API change
Additional patches exist for python-ironicclient and one for nova; links 
are in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- Based on feedback from vdrok and jlvillal, rloo is rethinking/reworking 
the next patch to see if it can be simplified: 'Add version column' 
https://review.openstack.org/#/c/412397/
- Testing work: done as per spec, but rloo wants to ask vasyl whether we 
can improve. grenade test will do upgrade so we have old API sending requests 
to old and/or new conductor, but rloo doesn't think there is anything to 
control -which- conductor handles the request, so what if old conductor handles 
all the requests?

Reference architecture guide (jroll, dtantsur)
--
- no updates, dtantsur plans to start working on some text for the 
install-guide this week

Python 3.5 compatibility (Nisha, Ankit)
---
- Topic: 
https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
- this include all projects, not only ironic
- please tag all reviews with topic "goal-python35"
- Nisha will be taking over this work
- Status as on May 18.
- Raised a patch in devstack for enabling swift and ironic for python3.5. 
https://review.openstack.org/#/c/464932/
- Swift is not completely compatible with python3.5. Getting error while 
installing devstack. Raised a bug in swift 
https://bugs.launchpad.net/swift/+bug/1691090
- Status as on 

[openstack-dev] [tc] [all] TC Report 22

2017-05-30 Thread Chris Dent


There's no TC meeting this week. Thierry did a second weekly status
report[^1]. There will be a TC meeting next week (Tuesday, 6th June
at 20:00 UTC) with the intention of discussing the proposals about
postgreSQL (of which more below). Here are my comments on pending TC
activity that either seems relevant or needs additional input.

[^1]: 

# Pending Stuff

## Queens Community Goals

Proposals for community-wide goals[^2] for the Queens cycle have started
coming in. These are changes which, if approved, all projects are
expected to satisfy. In Pike the goals are:

* [all control plane APIs deployable as WSGI 
apps](https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
* [supporting Python 
3.5](https://governance.openstack.org/tc/goals/pike/python35.html)

The full suite of goals for Queens has not yet been decided.
Identifying goals is a community-wide process. Your ideas are
wanted.

### Split Tempest Plugins into Separate Repos

This goal for Queens is already approved. Any project which manages
its tempest tests as a plugin should move those tests into a
separate repo. The goal is at[^3]. The review for it[^4] has further
discussion on why it is a good idea.

The original goal did not provide instructions on how to do it.
There is a proposal in progress[^5] to add a link to an etherpad[^6]
with instructions.

Note that this goal only applies to tempest _plugins_. Projects
which have their tests in the core of tempest have nothing to do. I
wonder if it wouldn't be more fair for all projects to use plugins
for their tempest tests?

### Two Proposals on Improving Version Discovery

Monty has been writing API-WG guidelines about how to properly use
the service catalog and do version discovery[^7]. Building from that
he's proposed two new goals:

* [Add Queens goal to add collection 
links](https://review.openstack.org/#/c/468436/)
* [Add Queens goal for full discovery 
alignment](https://review.openstack.org/#/c/468437/)

The first is a small step in the direction of improving version
discovery, the second is all the steps to getting all projects
supporting proper version discovery, in case we are feeling extra
capable.

Both of these need review from project contributors, first to see if there
is agreement on the strategies, second to see if they are
achievable.

[^2]: 
[^3]: 

[^4]: 
[^5]: 
[^6]: 
[^7]: 

## etcd as a base service

etcd has been proposed as a base service[^8]. A "base" service is
one that can be expected to be present in any OpenStack
deployment. The hope is that by declaring this we can finally
bootstrap the distributed locking, group membership and service
liveness functionality that we've been talking about for years. If
you want this please say so on the review. You want this.

If for some reason you _don't_ want this, then you'll want to
register your reasons as soon as possible. The review will merge
soon.

[^8]: 

## openstack-tc IRC channel

With the decrease in the number of TC meetings on IRC there's a plan
to have [office hours](https://review.openstack.org/#/c/467256/)
where some significant chunk of the TC will be available. Initially
this was going to be in the `#openstack-dev` channel but in the
hopes of making the logs readable after the fact, a [new channel is
proposed](https://review.openstack.org/#/c/467386/).

This is likely to pass soon, unless objections are raised. If you
have some, please raise them on the review.

## postgreSQL

The discussions around postgreSQL have yet to resolve. See [last week's
report](https://anticdent.org/tc-report-21.html) for additional
information. Because things are blocked and there have been some
expressions of review fatigue there will be, as mentioned above, a
TC meeting next week on 6th June, 20:00 UTC. Show up if you have an
opinion if or how postgreSQL should or should not have a continuing
presence in OpenStack. Some links:

* [original proposal documenting the lack of community attention to
  postgreSQL](https://review.openstack.org/#/c/427880/)
* [a shorter, less MySQL-oriented 
version](https://review.openstack.org/#/c/465589/)
* [related email
  
thread](http://lists.openstack.org/pipermail/openstack-dev/2017-May/116642.html)
* [active vs external approaches to RDBMS
  
management](http://lists.openstack.org/pipermail/openstack-dev/2017-May/117148.html)

## Draft Vision for the TC

johnthetubaguy, dtroyer and I (cdent) continue to work on digesting
the feedback[^9] to the TC Vision document[^10]. We've made a bit of
progress but there's more work to do. If you have new 

Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] osprofiler in paste deploy files

2017-05-30 Thread Matthieu Simonin


- Original Message -
> From: "Lance Bragstad"
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Sent: Tuesday, May 30, 2017 16:33:17
> Subject: Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] 
> osprofiler in paste deploy files
> 
> On Mon, May 29, 2017 at 4:08 AM, Matthieu Simonin wrote:
> 
> > Hello,
> >
> > I'd like to have more insight on OSProfiler support in paste-deploy files
> > as it seems not similar across projects.
> > As a result, the way you can enable it on Kolla side differs. Here are
> > some examples:
> >
> > a) Nova paste.ini already contains OSProfiler middleware[1].
> >
> > b) Keystone paste.ini doesn't contain OSProfiler but the file is exposed
> > in Kolla-ansible.
> > Thus it can be overwritten[2] by providing an alternate paste file using a
> > node_custom_config directory.
> >
> 
> I'm looking through keystone's sample paste file we keep in the project and
> we do have osprofiler in our v2 and v3 pipelines [0] [1]. It looks like it
> has been in keystone's sample paste file since Mitaka [2]

My bad, Kolla is maintaining a copy (without osprofiler) of the file which will 
replace the one shipped with Keystone (with osprofiler).
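For anyone diffing those two copies of the file, the difference is typically just a filter declaration plus its slot in the pipeline. A rough, illustrative paste-deploy fragment follows; the factory path, section names, and pipeline contents vary per project and release, so check the file a given project actually ships:

```ini
# Illustrative only -- not the exact contents of any project's file.
[pipeline:public_api]
pipeline = request_id osprofiler public_service

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
```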

> 
> 
> [0]
> https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L43-L44
> [1]
> https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L68
> [2]
> https://github.com/openstack/keystone/commit/639e36adbfa0f58ce2c3f31856b4343e9197aa0e
> 
> 
> >
> > c) Neutron paste.ini doesn't contain OSProfiler middleware[3]. For
> > devstack, a hook can reconfigure the file at deploy time[4].
> > For Kolla, it seems that the only solution right now is to rebuild the
> > whole docker image.
> >
> > As a user of Kolla and OSprofiler a) is the most convenient thing.
> >
> > Regarding b) and c), is it a deliberate choice to ship the paste deploy
> > files without OSProfiler middleware?
> >
> > Do you think we could converge ? ideally having a) for every API services ?
> >
> > Best,
> >
> > Matt
> >
> > [1]: https://github.com/openstack/nova/blob/0d31fb303e07b7ed9f55b9c823b43e
> > 6db5153ee6/etc/nova/api-paste.ini#L29-L37
> > [2]: https://github.com/openstack/kolla-ansible/blob/
> > fe61612ec6db469cccf2d2b4f0bd404ad4ced112/ansible/roles/
> > keystone/tasks/config.yml#L119
> > [3]: https://github.com/openstack/neutron/blob/
> > e4557a7793fbf3461bfae36ead41ee4d349920ab/neutron/tests/
> > contrib/hooks/osprofiler
> > [4]: https://github.com/openstack/neutron/blob/
> > e4557a7793fbf3461bfae36ead41ee4d349920ab/etc/api-paste.ini#L6-L9
> >
> >
> 
> 



Re: [openstack-dev] Security bug in diskimage-builder

2017-05-30 Thread Emilien Macchi
On Tue, May 30, 2017 at 3:43 PM, Ben Nemec wrote:
>
>
> On 05/30/2017 08:00 AM, Emilien Macchi wrote:
>>
>> On Mon, May 29, 2017 at 9:02 PM, Jeremy Stanley wrote:
>>>
>>> On 2017-05-29 15:43:43 +0200 (+0200), Emilien Macchi wrote:

 On Wed, May 24, 2017 at 7:45 PM, Ben Nemec wrote:
>>>
>>> [...]
>
> Emilien, I think we should create a tripleo-coresec group in
> launchpad that can be used for this. We have had
> tripleo-affecting security bugs in the past and I imagine we
> will again. I'm happy to help out with that, although I will
> admit my launchpad-fu is kind of weak so I don't know off the
> top of my head how to do it.


 That or re-use an existing Launchpad group used by OpenStack VMT?
>>>
>>>
>>> The OpenStack VMT doesn't triage bugs for deliverables aside from
>>> those tagged with vulnerability:managed in governance. For those we
>>> recommend private security bugs only be automatically shared with
>>> the openstack-vuln-mgmt team in LP, and then we manually subscribe
>>> something-coresec to the report once we're sure it was reported
>>> against the correct project. For deliverables without VMT oversight,
>>> it makes sense to have private security bugs automatically shared
>>> with those something-coresec teams directly.
>>>
>>>
>>> https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html
>>
>>
>> I created https://launchpad.net/~tripleo-coresec
>>
>> With me (Pacific Time soon), shardy (Europe), bnemec (East coast) and
>
>
> If by "coast" you mean the Great Lakes then yes, but I'm in the central time
> zone. ;-)

lol.
I added James to cover (real) East coast, so we cover most of our TZs.

Thanks,

> Thanks for getting this set up guys.
>
>
>> fungi (East coast) for now. If we feel like we need more people we'll
>> think about it.
>> I'll explore Launchpad to see how we can use this group to handle Security
>> bugs.
>>
>> Thanks,
>>
>>> --
>>> Jeremy Stanley
>>>
>>>
>>
>>
>>
>



-- 
Emilien Macchi



Re: [openstack-dev] Security bug in diskimage-builder

2017-05-30 Thread Emilien Macchi
On Tue, May 30, 2017 at 3:10 PM, Jeremy Stanley wrote:
> On 2017-05-30 15:00:11 +0200 (+0200), Emilien Macchi wrote:
> [...]
>> I'll explore Launchpad to see how we can use this group to handle
>> Security bugs.
>
> I'll save you some time! ;)

Many thanks, indeed it helped.

> Go to https://launchpad.net/tripleo/+sharing (repeat for any other
> projects the TripleO team has on LP) and add a row to that table for
> the new LP team you've created with sharing set to "Private
> Security: All". Also make sure the "Private Security: All" sharing
> option is removed from other teams.
>
> You may also see some rows in that table for individuals or other
> groups who are subscribed to specific private bugs. These show up
> with a sharing setting like "Private Security: Some" and can be
> safely ignored.
>
> Note that access to the sharing settings requires you to be in
> either the Maintainer or Driver group for the project in question (I
> don't remember which).
> --
> Jeremy Stanley
>



-- 
Emilien Macchi



Re: [openstack-dev] [murano] Meeting time

2017-05-30 Thread MONTEIRO, FELIPE C
I've introduced a patch [0] for updating the Murano meeting time, but the 
proposed time will have to be adjusted to account for vastly different 
time zones (America and Asia, for example).

Tentatively, ~12:00 UTC might be feasible, but more feedback is needed 
before settling on a new meeting time.

[0] https://review.openstack.org/#/c/468182/ 

-Original Message-
From: Paul Bourke [mailto:paul.bou...@oracle.com] 
Sent: Wednesday, May 24, 2017 10:46 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [murano] Meeting time

Hi Felipe,

From our end I think one hour earlier would be great.

 > I can create a patch in infra and add you and others to it to
 > allow for people to effectively vote for what times you prefer.

Sure thing that sounds good!

-Paul

On 24/05/17 03:56, Felipe Monteiro wrote:
> Hi Paul,
>
> I'm open to changing the meeting time, although I'd like some input from
> Murano cores, too. What times work for you and your colleagues? I can
> create a patch in infra and add you and others to it to allow for people
> to effectively vote for what times you prefer.
>
> Felipe
>
> On Tue, May 23, 2017 at 12:08 PM, Paul Bourke wrote:
>
> Hi Felipe / Murano community,
>
> I was wondering how would people feel about revising the time for
> the Murano weekly meeting?
>
> Personally the current time is difficult for me to attend as it
> falls at the end of a work day, I also have some colleagues that
> would like to attend but can't at the current time.
>
> Given recent low attendance, would another time suit people better?
>
> Thanks,
> -Paul
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all][logging] logging debugging improvement work status

2017-05-30 Thread Doug Hellmann
The oslo.log changes to include exception details are in version
3.27.0.

Doug

Excerpts from ChangBo Guo's message of 2017-05-27 16:22:27 +0800:
> Thanks Doug, I will release it on next Monday.
> 
> 2017-05-25 22:15 GMT+08:00 Doug Hellmann :
> 
> > One outcome from the forum session about improving logging debugging
> > was agreement on the proposal to add more details about exceptions
> > to the logs. The spec [1] was updated and has been approved, and
> > the patches to implement the work in oslo.log have also been approved
> > [2].
> >
> > The changes should be included in the Oslo releases next week. I
> > think it makes sense to hold off until then, given the holiday
> > weekend for many of the Oslo team members. As soon as the constraints
> > are updated to allow the new version of oslo.log, the log output
> > produced by devstack will change so that any log message emitted
> > in the context of handling an exception will include that exception
> > detail at the end of the log message (see the spec for details about
> > configuring that behavior).
> >
> > After we start seeing this run in the gate for a bit, we can evaluate
> > if we need to tweak the format or skip any other of Python's built-in
> > exception types.
> >
> > Thanks to Dims, Flavio, gcb, and Eric Fried for their help with
> > code reviews, and to the rest of the Oslo team and everyone who
> > participated in the discussion of the spec online and in Boston.
> >
> > Doug
> >
> > [1] http://specs.openstack.org/openstack/oslo-specs/specs/pike/improving-logging-debugging.html
> > [2] https://review.openstack.org/#/q/topic:improve-logging-debugging
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
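The effect described above (any log message emitted while an exception is being handled gains an appended exception summary) can be sketched with the plain stdlib logging module. This is only an illustrative stand-in, not the oslo.log implementation; the filter name below is hypothetical, and the real option names are in the spec.

```python
import logging
import sys

class ErrorSummaryFilter(logging.Filter):
    """Hypothetical stand-in for the oslo.log behavior: if a record is
    emitted while an exception is being handled, append a short summary
    of that exception to the message."""

    def filter(self, record):
        exc_type, exc, _tb = sys.exc_info()
        if exc is not None:
            # Append "ExceptionType: message" to the log line.
            record.msg = "%s: %s: %s" % (record.msg, exc_type.__name__, exc)
        return True

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.addFilter(ErrorSummaryFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

try:
    {}["missing"]
except KeyError:
    # No explicit exc_info needed; the filter appends the detail:
    # "failed to look up key: KeyError: 'missing'"
    logger.info("failed to look up key")
```

The point of the design is that callers do not have to remember to pass the exception along; any message logged inside an exception handler picks up the context automatically.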
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread John Griffith
On Tue, May 30, 2017 at 5:47 AM, Spyros Trigazis  wrote:

> FYI, there is already a cinder volume driver for docker available, written
> in golang, from rexray [1].
>
> Our team recently contributed to libstorage [3], it could support manila
> too. Rexray
> also supports the popular cloud providers.
>
> Magnum's docker swarm cluster driver, already leverages rexray for cinder
> integration. [2]
>
> Cheers,
> Spyros
>
> [1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
> [2] http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata
> [3] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
>
> On 27 May 2017 at 12:15, zengchen  wrote:
>
>> Hi John & Ben:
>>  I have committed a patch[1] to add a new repository to OpenStack. Please
>> take a look at it. Thanks very much!
>>
>>  [1]: https://review.openstack.org/#/c/468635
>>
>> Best Wishes!
>> zengchen
>>
>>
>>
>>
>>
>> 在 2017-05-26 21:30:48,"John Griffith"  写道:
>>
>>
>>
>> On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:
>>
>>>
>>> Hi John:
>>> I have seen your updates on the bp. I agree with your plan on how to
>>> develop the code.
>>> However, there is one issue I should point out: at present, Fuxi can
>>> convert not only Cinder volumes for Docker, but also Manila file
>>> shares. So, do you plan to include the Manila part of the code
>>> in the new Fuxi-golang?
>>>
>> Agreed, that's a really good and important point.  Yes, I believe Ben
>> Swartzlander
>>
>> is interested, we can check with him and make sure but I certainly hope
>> that Manila would be interested.
>>
>>> Besides, IMO, it is better to create a new repository for Fuxi-golang,
>>> because Fuxi is an OpenStack project.
>>>
>> Yeah, that seems fine; I just didn't know if there needed to be any more
>> conversation with other folks on any of this before charging ahead on new
>> repos etc.  Doesn't matter much to me though.
>>
>>
>>>
>>>Thanks very much!
>>>
>>> Best Wishes!
>>> zengchen
>>>
>>>
>>>
>>>
>>> At 2017-05-25 22:47:29, "John Griffith" 
>>> wrote:
>>>
>>>
>>>
>>> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>>>
 Very sorry to forget attaching the link for the bp of rewriting Fuxi in the
 Go language.
 https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang


 At 2017-05-25 19:46:54, "zengchen"  wrote:

 Hi guys:
 hongbin had committed a bp of rewriting Fuxi in the Go language[1].
 My question is where to commit the code for it.
 We have two choices: 1. create a new repository, or 2. create a new
 branch.  IMO, the first one is much better, because there are many
 differences at the infrastructure layer, such as CI.
 What's your opinion? Thanks very much

 Best Wishes
 zengchen


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Hi Zengchen,
>>>
>>> For now I was thinking just use Github and PR's outside of the OpenStack
>>> projects to bootstrap things and see how far we can get.  I'll update the
>>> BP this morning with what I believe to be the key tasks to work through.
>>>
>>> Thanks,
>>> John
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Spyros,

Thanks for pointing that out; I actually wasn't aware that Magnum had
adopted RexRay, so that's good to know.  There are actually a number of
options out there: RexRay, the cinder-docker-driver, etc.  They're all
cool, and if they work for people that's great!

The only problem I've had with any of these other options is that most are
under ownership of a single storage vendor.  Even though the licensing is
agreeable, and in theory it shouldn't be a problem to contribute, 

Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] osprofiler in paste deploy files

2017-05-30 Thread Lance Bragstad
On Mon, May 29, 2017 at 4:08 AM, Matthieu Simonin  wrote:

> Hello,
>
> I'd like to have more insight on OSProfiler support in paste-deploy files
> as it seems not similar across projects.
> As a result, the way you can enable it on Kolla side differs. Here are
> some examples:
>
> a) Nova paste.ini already contains OSProfiler middleware[1].
>
> b) Keystone paste.ini doesn't contain OSProfiler but the file is exposed
> in Kolla-ansible.
> Thus it can be overwritten[2] by providing an alternate paste file using a
> node_custom_config directory.
>

I'm looking through the sample paste file we keep in the keystone project,
and we do have osprofiler in our v2 and v3 pipelines [0] [1]. It looks like
it has been in keystone's sample paste file since Mitaka [2].


[0]
https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L43-L44
[1]
https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L68
[2]
https://github.com/openstack/keystone/commit/639e36adbfa0f58ce2c3f31856b4343e9197aa0e
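For context, wiring OSProfiler into a paste-deploy file generally amounts to a filter section plus an entry in the pipeline, roughly as below. This is an abridged sketch only: the factory reference and the exact pipeline members should be checked against each project's sample paste file for the release in question.

```ini
# Abridged paste-deploy sketch (other pipeline members elided with "...").
[filter:osprofiler]
use = egg:osprofiler#osprofiler

[pipeline:api_v3]
pipeline = ... osprofiler url_normalize request_id ... service_v3
```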


>
> c) Neutron paste.ini doesn't contain OSProfiler middleware[3]. For
> devstack, a hook can reconfigure the file at deploy time[4].
> For Kolla, it seems that the only solution right now is to rebuild the
> whole docker image.
>
> As a user of Kolla and OSProfiler, a) is the most convenient option.
>
> Regarding b) and c), is it a deliberate choice to ship the paste deploy
> files without OSProfiler middleware?
>
> Do you think we could converge? Ideally, having a) for every API service?
>
> Best,
>
> Matt
>
> [1]: https://github.com/openstack/nova/blob/0d31fb303e07b7ed9f55b9c823b43e6db5153ee6/etc/nova/api-paste.ini#L29-L37
> [2]: https://github.com/openstack/kolla-ansible/blob/fe61612ec6db469cccf2d2b4f0bd404ad4ced112/ansible/roles/keystone/tasks/config.yml#L119
> [3]: https://github.com/openstack/neutron/blob/e4557a7793fbf3461bfae36ead41ee4d349920ab/etc/api-paste.ini#L6-L9
> [4]: https://github.com/openstack/neutron/blob/e4557a7793fbf3461bfae36ead41ee4d349920ab/neutron/tests/contrib/hooks/osprofiler
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Spyros Trigazis
On 30 May 2017 at 15:26, Hongbin Lu  wrote:

> Please consider leveraging Fuxi instead.
>

Is there functionality missing from rexray?


> The Kuryr/Fuxi team is working very hard to deliver the Docker
> network/storage plugins. I hope you will work with us to get them
> integrated with Magnum-provisioned clusters.
>

Patches are welcome to support fuxi as an *option* instead of rexray, so
users can choose.


> Currently, COE clusters provisioned by Magnum are far from
> enterprise-ready. I think the Magnum project will be better off if it can
> adopt Kuryr/Fuxi, which will give you a better OpenStack integration.
>
>
>
> Best regards,
>
> Hongbin
>

fuxi feature request: Add authentication using a trustee and a trustID.

Cheers,
Spyros


>
>
> *From:* Spyros Trigazis [mailto:strig...@gmail.com]
> *Sent:* May-30-17 7:47 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for
> Fuxi-golang
>
>
>
> FYI, there is already a cinder volume driver for docker available, written
>
> in golang, from rexray [1].
>
>
> Our team recently contributed to libstorage [3], it could support manila
> too. Rexray
> also supports the popular cloud providers.
>
> Magnum's docker swarm cluster driver, already leverages rexray for cinder
> integration. [2]
>
> Cheers,
> Spyros
>
>
>
> [1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
>
> [2] http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata
>
> [3] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
>
>
>
> On 27 May 2017 at 12:15, zengchen  wrote:
>
> Hi John & Ben:
>
>  I have committed a patch[1] to add a new repository to OpenStack. Please
> take a look at it. Thanks very much!
>
>
>
>  [1]: https://review.openstack.org/#/c/468635
>
>
>
> Best Wishes!
>
> zengchen
>
>
>
>
>
> 在 2017-05-26 21:30:48,"John Griffith"  写道:
>
>
>
>
>
> On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:
>
>
>
> Hi John:
>
> I have seen your updates on the bp. I agree with your plan on how to
> develop the code.
>
> However, there is one issue I should point out: at present, Fuxi can
> convert not only Cinder volumes for Docker, but also Manila file shares.
>
> So, do you plan to include the Manila part of the code
>
> in the new Fuxi-golang?
>
> Agreed, that's a really good and important point.  Yes, I believe Ben
> Swartzlander
>
>
>
> is interested, we can check with him and make sure but I certainly hope
> that Manila would be interested.
>
> Besides, IMO, it is better to create a new repository for Fuxi-golang,
>
> because Fuxi is an OpenStack project.
>
> Yeah, that seems fine; I just didn't know if there needed to be any more
> conversation with other folks on any of this before charging ahead on new
> repos etc.  Doesn't matter much to me though.
>
>
>
>
>
>Thanks very much!
>
>
>
> Best Wishes!
>
> zengchen
>
>
>
>
> At 2017-05-25 22:47:29, "John Griffith"  wrote:
>
>
>
>
>
> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>
> Very sorry to forget attaching the link for the bp of rewriting Fuxi in the
> Go language.
> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>
>
>
> At 2017-05-25 19:46:54, "zengchen"  wrote:
>
> Hi guys:
>
> hongbin had committed a bp of rewriting Fuxi in the Go language[1]. My
> question is where to commit the code for it.
>
> We have two choices: 1. create a new repository, or 2. create a new branch.
> IMO, the first one is much better, because
>
> there are many differences at the infrastructure layer, such as CI.
> What's your opinion? Thanks very much
>
>
>
> Best Wishes
>
> zengchen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Hi Zengchen,
>
>
>
> For now I was thinking just use Github and PR's outside of the OpenStack
> projects to bootstrap things and see how far we can get.  I'll update the
> BP this morning with what I believe to be the key tasks to work through.
>
>
>
> Thanks,
>
> John
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

[openstack-dev] [neutron][l2gw] OVS code currently broken

2017-05-30 Thread Gary Kotton
Hi,
Please note that the L2 GW code is currently broken due to the commit 
e6333593ae6005c4b0d73d9dfda5eb47f40dd8da
If someone has the cycles can they please take a look.
Thanks
gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-30 Thread Vikash Kumar
Thanks Sam, I will surely review it.

On Tue, 30 May 2017, 17:59 Sam P,  wrote:

> Hi Vikash,
>
>   Greg submitted the spec [1] for intrusive instance monitoring.
>   Your review will be highly appreciated..
>  [1] https://review.openstack.org/#/c/469070/
> --- Regards,
> Sampath
>
>
>
> On Sat, May 20, 2017 at 4:49 PM, Vikash Kumar
>  wrote:
> > Thanks Sam
> >
> >
> > On Sat, 20 May 2017, 06:51 Sam P,  wrote:
> >>
> >> Hi Vikash,
> >>  Great... I will add you as reviewer to this spec.
> >>  Thank you..
> >> --- Regards,
> >> Sampath
> >>
> >>
> >>
> >> On Fri, May 19, 2017 at 1:06 PM, Vikash Kumar
> >>  wrote:
> >> > Hi Greg,
> >> >
> >> > Please include my email in this spec also. We are also dealing
> with
> >> > HA
> >> > of Virtual Instances (especially for Vendors) and will participate.
> >> >
> >> > On Thu, May 18, 2017 at 11:33 PM, Waines, Greg
> >> > 
> >> > wrote:
> >> >>
> >> >> Yes I am good with writing spec for this in masakari-spec.
> >> >>
> >> >>
> >> >>
> >> >> Do you use gerrit for this git ?
> >> >>
> >> >> Do you have a template for your specs ?
> >> >>
> >> >>
> >> >>
> >> >> Greg.
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> From: Sam P 
> >> >> Reply-To: "openstack-dev@lists.openstack.org"
> >> >> 
> >> >> Date: Thursday, May 18, 2017 at 1:51 PM
> >> >> To: "openstack-dev@lists.openstack.org"
> >> >> 
> >> >> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM
> >> >> Heartbeat
> >> >> / Healthcheck Monitoring
> >> >>
> >> >>
> >> >>
> >> >> Hi Greg,
> >> >>
> >> >> Thank you Adam for followup.
> >> >>
> >> >> This is a new feature for masakari-monitors, and I think Masakari can
> >> >>
> >> >> accommodate this feature in masakari-monitors.
> >> >>
> >> >> From the implementation perspective, it is not that hard to do.
> >> >>
> >> >> However, as you can see in our Boston presentation, Masakari will
> >> >>
> >> >> replace its monitoring parts (which is masakari-monitors) with
> >> >>
> >> >> nova-host-alerter, **-process-alerter, and **-instance-alerter. (**
> >> >>
> >> >> part is not defined yet..:p)...
> >> >>
> >> >> Therefore, I would like to save these specifications, and make sure we
> >> >>
> >> >> will not miss anything in the transformation.
> >> >>
> >> >> Does it make sense to write a simple spec for this in masakari-spec
> [1]?
> >> >>
> >> >> So we can discuss the requirements and how to implement it.
> >> >>
> >> >>
> >> >>
> >> >> [1] https://github.com/openstack/masakari-specs
> >> >>
> >> >>
> >> >>
> >> >> --- Regards,
> >> >>
> >> >> Sampath
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On Thu, May 18, 2017 at 2:29 AM, Adam Spiers 
> wrote:
> >> >>
> >> >> I don't see any reason why masakari couldn't handle that, but you'd
> >> >>
> >> >> have to ask Sampath and the masakari team whether they would consider
> >> >>
> >> >> that in scope for their roadmap.
> >> >>
> >> >>
> >> >>
> >> >> Waines, Greg  wrote:
> >> >>
> >> >>
> >> >>
> >> >> Sure.  I can propose a new user story.
> >> >>
> >> >>
> >> >>
> >> >> And then are you thinking of including this user story in the scope
> of
> >> >>
> >> >> what masakari would be looking at ?
> >> >>
> >> >>
> >> >>
> >> >> Greg.
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> From: Adam Spiers 
> >> >>
> >> >> Reply-To: "openstack-dev@lists.openstack.org"
> >> >>
> >> >> 
> >> >>
> >> >> Date: Wednesday, May 17, 2017 at 10:08 AM
> >> >>
> >> >> To: "openstack-dev@lists.openstack.org"
> >> >>
> >> >> 
> >> >>
> >> >> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat /
> >> >>
> >> >> Healthcheck Monitoring
> >> >>
> >> >>
> >> >>
> >> >> Thanks for the clarification Greg.  This sounds like it has the
> >> >>
> >> >> potential to be a very useful capability.  May I suggest that you
> >> >>
> >> >> propose a new user story for it, along similar lines to this existing
> >> >>
> >> >> one?
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html
> >> >>
> >> >>
> >> >>
> >> >> Waines, Greg
> >> >> >
> >> >>
> >> >> wrote:
> >> >>
> >> >> Yes that’s correct.
> >> >>
> >> >> VM Heartbeating / Health-check Monitoring would introduce intrusive /
> >> >>
> >> >> white-box type monitoring of VMs / Instances.
> >> >>
> >> >>
> >> >>
> >> >> I realize this is somewhat in the gray-zone of what a cloud should be
> >> >>
> >> >> monitoring or not,
> >> >>
> >> >> but I believe it provides an alternative for Applications deployed in
> >> >> VMs
> >> >>
> >> >> 

Re: [openstack-dev] Security bug in diskimage-builder

2017-05-30 Thread Ben Nemec



On 05/30/2017 08:00 AM, Emilien Macchi wrote:

On Mon, May 29, 2017 at 9:02 PM, Jeremy Stanley  wrote:

On 2017-05-29 15:43:43 +0200 (+0200), Emilien Macchi wrote:

On Wed, May 24, 2017 at 7:45 PM, Ben Nemec  wrote:

[...]

Emilien, I think we should create a tripleo-coresec group in
launchpad that can be used for this. We have had
tripleo-affecting security bugs in the past and I imagine we
will again. I'm happy to help out with that, although I will
admit my launchpad-fu is kind of weak so I don't know off the
top of my head how to do it.


That or re-use an existing Launchpad group used by OpenStack VMT?


The OpenStack VMT doesn't triage bugs for deliverables aside from
those tagged with vulnerability:managed in governance. For those we
recommend private security bugs only be automatically shared with
the openstack-vuln-mgmt team in LP, and then we manually subscribe
something-coresec to the report once we're sure it was reported
against the correct project. For deliverables without VMT oversight,
it makes sense to have private security bugs automatically shared
with those something-coresec teams directly.

https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html


I created https://launchpad.net/~tripleo-coresec

With me (Pacific Time soon), shardy (Europe), bnemec (East coast) and


If by "coast" you mean the Great Lakes then yes, but I'm in the central 
time zone. ;-)


Thanks for getting this set up guys.


fungi (East coast) for now. If we feel like we need more people we'll
think about it.
I'll explore Launchpad to see how we can use this group to handle Security bugs.

Thanks,


--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [classifier] Common Classification Framework meeting

2017-05-30 Thread Duarte Cardoso, Igor
Hi all,

Friendly reminder that there will be a Common Classification Framework meeting 
in about half an hour at #openstack-meeting.

Today's agenda: 
https://wiki.openstack.org/wiki/Neutron/CommonClassificationFramework#Discussion_Topic_30_May_2017

The spec seems to have reached general agreement and an attempted final 
patchset has now been submitted: https://review.openstack.org/#/c/333993/

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Hongbin Lu
Please consider leveraging Fuxi instead. The Kuryr/Fuxi team is working very
hard to deliver the Docker network/storage plugins. I hope you will work with
us to get them integrated with Magnum-provisioned clusters. Currently, COE
clusters provisioned by Magnum are far from enterprise-ready. I think the
Magnum project will be better off if it can adopt Kuryr/Fuxi, which will give
you a better OpenStack integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [3], it could support manila too. 
Rexray
also supports the popular cloud providers.

Magnum's docker swarm cluster driver, already leverages rexray for cinder 
integration. [2]

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata
[3] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0

On 27 May 2017 at 12:15, zengchen 
> wrote:
Hi John & Ben:
 I have committed a patch[1] to add a new repository to OpenStack. Please take
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen




在 2017-05-26 21:30:48,"John Griffith" 
> 写道:



On Thu, May 25, 2017 at 10:01 PM, zengchen 
> wrote:

Hi John:
I have seen your updates on the bp. I agree with your plan on how to
develop the code.
However, there is one issue I should point out: at present, Fuxi can convert
not only Cinder volumes for Docker, but also Manila file shares. So, do you
plan to include the Manila part of the code in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben
Swartzlander

is interested; we can check with him and make sure, but I certainly hope that
Manila would be interested.
Besides, IMO, it is better to create a new repository for Fuxi-golang, because
Fuxi is an OpenStack project.
Yeah, that seems fine; I just didn't know if there needed to be any more
conversation with other folks on any of this before charging ahead on new
repos etc.  Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen



At 2017-05-25 22:47:29, "John Griffith" 
> wrote:



On Thu, May 25, 2017 at 5:50 AM, zengchen 
> wrote:
Very sorry to forget attaching the link for the bp of rewriting Fuxi in the Go
language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
> wrote:

Hi guys:
hongbin had committed a bp of rewriting Fuxi in the Go language[1]. My
question is where to commit the code for it.
We have two choices: 1. create a new repository, or 2. create a new branch.
IMO, the first one is much better, because there are many differences at the
infrastructure layer, such as CI.  What's your opinion? Thanks very much

Best Wishes
zengchen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Hi Zengchen,

For now I was thinking just use Github and PR's outside of the OpenStack 
projects to bootstrap things and see how far we can get.  I'll update the BP 
this morning with what I believe to be the key tasks to work through.

Thanks,
John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Tripleo] deploy software on Openstack controller on the Overcloud

2017-05-30 Thread Alex Schultz
On Mon, May 29, 2017 at 5:05 AM, Dnyaneshwar Pawar
 wrote:
> Hi,
>
> I am trying to deploy software on the OpenStack controller on the overcloud.
> One way to do this is by modifying the 'overcloud image' so that all packages
> of our software are added to the image, and then running overcloud deploy.
> The other way is to write a heat template and puppet module which will deploy
> the required packages.
>
> Question: Which of above two approaches is better?
>
> Note: Configuration part of the software will be done via separate heat
> template and puppet module.
>

Usually you do both.  Depending on how the end user is expected to
deploy, if they are using the TripleoPackages service[0] in their
role, the puppet installation of the package won't actually work (we
override the package provider to noop) so it needs to be in the
images.  That being said, usually there is also a bit of puppet that
needs to be written to configure the end service and as a best
practice (and for development purposes), it's a good idea to also
capture the package in the manifest.

Thanks,
-Alex

[0] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml
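As a hedged illustration of "capture the package in the manifest" (the class and package names below are hypothetical, not an actual TripleO module):

```puppet
# Sketch only: names are made up for illustration.
# Declaring the package keeps the manifest self-documenting and works in
# development, while under TripleO (with the TripleoPackages service in
# the role) the package provider is noop'd, so the same package must also
# be baked into the overcloud image.
class mysoftware (
  $package_name = 'mysoftware',
) {
  package { $package_name:
    ensure => installed,
  }
  # service/config resources for the software would follow here
}
```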

>
> Thanks and Regards,
> Dnyaneshwar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] neutron-lib impact: attribute functions and constants now in neutron-lib

2017-05-30 Thread Boden Russell
If your project uses neutron.api.v2.attributes please read on.

The bulk of neutron.api.v2.attributes has been rehomed into neutron-lib
[1][2], and we've begun consuming these changes in neutron and stadium
projects.

Today we are working to consume:
- The core resource/collection name constants [3] such as NETWORK,
NETWORKS, etc..
- Many of the "helper functions" from attributes [4].

Subsequent patches will work to consume the remainder of attributes,
including the global resource attribute map.


Suggested actions:
- If your project uses any of the core resource/collection constants
from attributes and is not included in [3], please move your imports
over and use neutron-lib.
- If your project uses any of the helper functions from attributes,
please move your code over to neutron-lib's implementation. Best I can
tell [4] covers all uses, but perhaps I missed something.

Feel free to catch me on #openstack-neutron as 'boden' if you have any
questions.

Thanks

[1] https://review.openstack.org/#/c/394244/
[2] https://review.openstack.org/#/c/449277/
[3]
https://review.openstack.org/#/q/message:%22use+core+resource+attribute+constants%22
[4] https://review.openstack.org/#/q/message:%22use+attribute+functions%22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security bug in diskimage-builder

2017-05-30 Thread Jeremy Stanley
On 2017-05-30 15:00:11 +0200 (+0200), Emilien Macchi wrote:
[...]
> I'll explore Launchpad to see how we can use this group to handle
> Security bugs.

I'll save you some time! ;)

Go to https://launchpad.net/tripleo/+sharing (repeat for any other
projects the TripleO team has on LP) and add a row to that table for
the new LP team you've created with sharing set to "Private
Security: All". Also make sure the "Private Security: All" sharing
option is removed from other teams.

You may also see some rows in that table for individuals or other
groups who are subscribed to specific private bugs. These show up
with a sharing setting like "Private Security: Some" and can be
safely ignored.

Note that access to the sharing settings requires you to be in
either the Maintainer or Driver group for the project in question (I
don't remember which).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security bug in diskimage-builder

2017-05-30 Thread Emilien Macchi
On Mon, May 29, 2017 at 9:02 PM, Jeremy Stanley  wrote:
> On 2017-05-29 15:43:43 +0200 (+0200), Emilien Macchi wrote:
>> On Wed, May 24, 2017 at 7:45 PM, Ben Nemec  wrote:
> [...]
>> > Emilien, I think we should create a tripleo-coresec group in
>> > launchpad that can be used for this. We have had
>> > tripleo-affecting security bugs in the past and I imagine we
>> > will again. I'm happy to help out with that, although I will
>> > admit my launchpad-fu is kind of weak so I don't know off the
>> > top of my head how to do it.
>>
>> That or re-use an existing Launchpad group used by OpenStack VMT?
>
> The OpenStack VMT doesn't triage bugs for deliverables aside from
> those tagged with vulnerability:managed in governance. For those we
> recommend private security bugs only be automatically shared with
> the openstack-vuln-mgmt team in LP, and then we manually subscribe
> something-coresec to the report once we're sure it was reported
> against the correct project. For deliverables without VMT oversight,
> it makes sense to have private security bugs automatically shared
> with those something-coresec teams directly.
>
> https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html

I created https://launchpad.net/~tripleo-coresec

With me (Pacific Time soon), shardy (Europe), bnemec (East coast) and
fungi (East coast) for now. If we feel like we need more people we'll
think about it.
I'll explore Launchpad to see how we can use this group to handle Security bugs.

Thanks,

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi



Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

2017-05-30 Thread Jeremy Stanley
On 2017-05-30 12:04:54 + (+), Waines, Greg wrote:
> Thanks Jeremy ... the remote gerrit setting was my problem ... I
> had it set to Vitrage because I am also doing some work there.
> 
> I switched it to masakari for this work and was able to submit my
> spec.

Was it set globally somewhere/somehow? Normally git-review will
create the gerrit remote based on the data it finds in the repo's
.gitreview file (which correctly set the project name to
openstack/masakari, I double-checked that at least). If you have a
project set in a [gitreview] section of your ~/.gitconfig or
something like that, you'll definitely want to remove it and let
git-review's default per-repo behavior take over.
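A minimal sketch of checking for (and dropping) such a stray global setting, assuming it lives in the [gitreview] section as described above, might be:

```shell
# Hypothetical sketch: list any gitreview settings configured globally.
git config --global --get-regexp '^gitreview\.' \
  || echo "no global gitreview settings -- per-repo .gitreview wins"

# If something like gitreview.project shows up, remove it so git-review
# falls back to the .gitreview file in each repository:
git config --global --unset gitreview.project
```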
-- 
Jeremy Stanley




Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM Heartbeat / Healthcheck Monitoring

2017-05-30 Thread Sam P
Hi Vikash,

  Greg submitted the spec [1] for intrusive instance monitoring.
  Your review would be highly appreciated.
 [1] https://review.openstack.org/#/c/469070/
--- Regards,
Sampath



On Sat, May 20, 2017 at 4:49 PM, Vikash Kumar
 wrote:
> Thanks Sam
>
>
> On Sat, 20 May 2017, 06:51 Sam P,  wrote:
>>
>> Hi Vikash,
>>  Great... I will add you as reviewer to this spec.
>>  Thank you..
>> --- Regards,
>> Sampath
>>
>>
>>
>> On Fri, May 19, 2017 at 1:06 PM, Vikash Kumar
>>  wrote:
>> > Hi Greg,
>> >
>> > Please include my email in this spec also. We are also dealing with
>> > HA
>> > of Virtual Instances (especially for Vendors) and will participate.
>> >
>> > On Thu, May 18, 2017 at 11:33 PM, Waines, Greg
>> > 
>> > wrote:
>> >>
>> >> Yes I am good with writing spec for this in masakari-spec.
>> >>
>> >>
>> >>
>> >> Do you use gerrit for this git ?
>> >>
>> >> Do you have a template for your specs ?
>> >>
>> >>
>> >>
>> >> Greg.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> From: Sam P 
>> >> Reply-To: "openstack-dev@lists.openstack.org"
>> >> 
>> >> Date: Thursday, May 18, 2017 at 1:51 PM
>> >> To: "openstack-dev@lists.openstack.org"
>> >> 
>> >> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] [masakari] VM
>> >> Heartbeat
>> >> / Healthcheck Monitoring
>> >>
>> >>
>> >>
>> >> Hi Greg,
>> >>
>> >> Thank you Adam for followup.
>> >>
>> >> This is a new feature for masakari-monitors, and I think Masakari can
>> >>
>> >> accommodate it in masakari-monitors.
>> >>
>> >> From the implementation perspective, it is not that hard to do.
>> >>
>> >> However, as you can see in our Boston presentation, Masakari will
>> >>
>> >> replace its monitoring parts ( which is masakari-monitors) with,
>> >>
>> >> nova-host-alerter, **-process-alerter, and **-instance-alerter. (**
>> >>
>> >> part is not defined yet..:p)...
>> >>
>> >> Therefore, I would like to save these specifications, and make sure we
>> >>
>> >> will not miss anything in the transformation.
>> >>
>> >> Does it make sense to write a simple spec for this in masakari-spec [1]?
>> >>
>> >> So we can discuss about the requirements how to implement it.
>> >>
>> >>
>> >>
>> >> [1] https://github.com/openstack/masakari-specs
>> >>
>> >>
>> >>
>> >> --- Regards,
>> >>
>> >> Sampath
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Thu, May 18, 2017 at 2:29 AM, Adam Spiers  wrote:
>> >>
>> >> I don't see any reason why masakari couldn't handle that, but you'd
>> >>
>> >> have to ask Sampath and the masakari team whether they would consider
>> >>
>> >> that in scope for their roadmap.
>> >>
>> >>
>> >>
>> >> Waines, Greg  wrote:
>> >>
>> >>
>> >>
>> >> Sure.  I can propose a new user story.
>> >>
>> >>
>> >>
>> >> And then are you thinking of including this user story in the scope of
>> >>
>> >> what masakari would be looking at ?
>> >>
>> >>
>> >>
>> >> Greg.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> From: Adam Spiers 
>> >>
>> >> Reply-To: "openstack-dev@lists.openstack.org"
>> >>
>> >> 
>> >>
>> >> Date: Wednesday, May 17, 2017 at 10:08 AM
>> >>
>> >> To: "openstack-dev@lists.openstack.org"
>> >>
>> >> 
>> >>
>> >> Subject: Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat /
>> >>
>> >> Healthcheck Monitoring
>> >>
>> >>
>> >>
>> >> Thanks for the clarification Greg.  This sounds like it has the
>> >>
>> >> potential to be a very useful capability.  May I suggest that you
>> >>
>> >> propose a new user story for it, along similar lines to this existing
>> >>
>> >> one?
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html
>> >>
>> >>
>> >>
>> >> Waines, Greg
>> >> >
>> >>
>> >> wrote:
>> >>
>> >> Yes that’s correct.
>> >>
>> >> VM Heartbeating / Health-check Monitoring would introduce intrusive /
>> >>
>> >> white-box type monitoring of VMs / Instances.
>> >>
>> >>
>> >>
>> >> I realize this is somewhat in the gray-zone of what a cloud should be
>> >>
>> >> monitoring or not,
>> >>
>> >> but I believe it provides an alternative for Applications deployed in
>> >> VMs
>> >>
>> >> that do not have an external monitoring/management entity like a VNF
>> >> Manager
>> >>
>> >> in the MANO architecture.
>> >>
>> >> And even for VMs with VNF Managers, it provides a highly reliable
>> >>
>> >> alternate monitoring path that does not rely on Tenant Networking.
>> >>
>> >>
>> >>
>> >> You’re correct, that VM HB/HC Monitoring would leverage
>> >>
>> >> https://wiki.libvirt.org/page/Qemu_guest_agent
>> >>
>> >> that would require the agent to be installed in the images for talking

Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

2017-05-30 Thread Sam P
Hi Greg,

 Great.. thank you. I will ask people to review this..

--- Regards,
Sampath



On Tue, May 30, 2017 at 9:06 PM, Waines, Greg  wrote:
> Hey Sam,
>
>
>
> Was able to submit the blueprint and spec.
>
>
>
> Blueprint:
> https://blueprints.launchpad.net/masakari/+spec/intrusive-instance-monitoring
>
> Spec: https://review.openstack.org/#/c/469070/
>
>
>
> Greg.
>
>
>
> From: Sam P 
> Reply-To: "openstack-dev@lists.openstack.org"
> 
> Date: Monday, May 29, 2017 at 10:01 PM
> To: "openstack-dev@lists.openstack.org" 
> Subject: Re: [openstack-dev] [masakari] Intrusive Instance Monitoring
>
>
>
> Hi Greg,
>
>
>
> # Thank you Jeremy..!
>
>
>
> I couldn't find any problem on the repo side.
>
> As Jeremy pointed out, could you please check the output of `git remote show gerrit`?
>
>
>
> BTW, could you please create a BP in [1] and link it to your spec when
>
> you commit it.
>
> In this way, we could track all the changes related to this task.
>
> Please include the related bp name in the commit message of your spec as,
>
>
>
> Implements: bp name-of-your-bp
>
> # Please refer to open to review spec [2] for more details.
>
> # You may find more details on [3]
>
>
>
> [1] https://blueprints.launchpad.net/masakari
>
> [2] https://review.openstack.org/#/c/458023/4//COMMIT_MSG
>
> [3]
> https://docs.openstack.org/infra/manual/developers.html#working-on-specifications-and-blueprints
>
> --- Regards,
>
> Sampath
>
>
>
>
>
>
>
> On Tue, May 30, 2017 at 4:39 AM, Jeremy Stanley  wrote:
>
> On 2017-05-29 14:48:10 + (+), Waines, Greg wrote:
>
> Was just trying to submit my spec for Intrusive Instance
>
> Monitoring for review.
>
>
>
> And I get the following warning after committing when I do the
>
> ‘git review’
>
>
>
> gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$ git review
>
> You are about to submit multiple commits. This is expected if you are
>
> submitting a commit that is dependent on one or more in-review
>
> commits. Otherwise you should consider squashing your changes into one
>
> commit before submitting.
>
>
>
> The outstanding commits are:
>
>
>
> f09deee (HEAD -> myBranch) Initial draft specification of Intrusive Instance
> Monitoring.
>
> 21aeb96 (origin/master, origin/HEAD, master) Prepare specs repository for
> Pike
>
> 83d1a0a Implement reserved_host, auto_priority and rh_priority recovery
> methods
>
> 4e746cb Add periodic task to clean up workflow failure
>
> 2c10be4 Add spec repo structure
>
> a82016f Added .gitreview
>
>
>
> Do you really want to submit the above commits?
>
> Type 'yes' to confirm, other to cancel: no
>
> Aborting.
>
> gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$
>
>
>
> Seems like my clone picked up someone else’s open commit ?
>
>
>
> Any help would be appreciated,
>
> The full log of my git session is below,
>
> [...]
>
>
>
> The output doesn't show any open changes, but rather seems to
>
> indicate that the parent is the commit at the tip of origin/master.
>
> This condition shouldn't normally happen unless Gerrit doesn't
>
> actually know about any of those commits for some reason.
>
>
>
> One thing, I notice your `git review -s` output in your log was
>
> empty. Make sure the output of `git remote show gerrit` looks
>
> something like this (obviously with your username in place of mine):
>
>
>
>  * remote gerrit
>
>Fetch URL:
> ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
>
>Push  URL:
> ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
>
>HEAD branch: master
>
>Remote branch:
>
>  master tracked
>
>Local ref configured for 'git push':
>
>  master pushes to master (up to date)
>
>
>
> Using git-review 1.25.0 I attempted to replicate the issue like
>
> this, but everything worked normally:
>
>
>
>  fungi@dhole:~/work/openstack/openstack$ git clone
> https://github.com/openstack/masakari-specs.git
>
>  Cloning into 'masakari-specs'...
>
>  remote: Counting objects: 61, done.
>
>  remote: Total 61 (delta 0), reused 0 (delta 0), pack-reused 61
>
>  Unpacking objects: 100% (61/61), done.
>
>  fungi@dhole:~/work/openstack/openstack$ cd masakari-specs/
>
>  fungi@dhole:~/work/openstack/openstack/masakari-specs$ git log
>
>  commit 21aeb965acea0b3ebe8448715bb88df4409dd402
>
>  Author: Abhishek Kekane 
>
>  Date:   Wed Apr 19 16:00:53 2017 +0530
>
>
>
>  Prepare specs repository for Pike
>
>
>
>  Add directories, index file, and template symlinks for Pike specs.
>
>
>
>  Change-Id: I7dce74430e4569a5978f8f4b953db3b20125c53e
>
>
>
>  commit 83d1a0aae17e4e8110ac64c7975a8520592712f9
>
>  Author: Abhishek Kekane 
>
>  Date:   Fri Jan 20 12:00:12 2017 +0530
>
>
>
>  Implement reserved_host, 

Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

2017-05-30 Thread Waines, Greg
Hey Sam,

Was able to submit the blueprint and spec.

Blueprint:  
https://blueprints.launchpad.net/masakari/+spec/intrusive-instance-monitoring
Spec: https://review.openstack.org/#/c/469070/

Greg.

From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, May 29, 2017 at 10:01 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

Hi Greg,

# Thank you Jeremy..!

I couldn't find any problem on the repo side.
As Jeremy pointed out, could you please check the output of `git remote show gerrit`?

BTW, could you please create a BP in [1] and link it to your spec when
you commit it.
In this way, we could track all the changes related to this task.
Please include the related bp name in the commit message of your spec as,

Implements: bp name-of-your-bp
# Please refer to open to review spec [2] for more details.
# You may find more details on [3]

[1] https://blueprints.launchpad.net/masakari
[2] https://review.openstack.org/#/c/458023/4//COMMIT_MSG
[3] 
https://docs.openstack.org/infra/manual/developers.html#working-on-specifications-and-blueprints
--- Regards,
Sampath



On Tue, May 30, 2017 at 4:39 AM, Jeremy Stanley 
> wrote:
On 2017-05-29 14:48:10 + (+), Waines, Greg wrote:
Was just trying to submit my spec for Intrusive Instance
Monitoring for review.

And I get the following warning after committing when I do the
‘git review’

gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$ git review
You are about to submit multiple commits. This is expected if you are
submitting a commit that is dependent on one or more in-review
commits. Otherwise you should consider squashing your changes into one
commit before submitting.

The outstanding commits are:

f09deee (HEAD -> myBranch) Initial draft specification of Intrusive Instance 
Monitoring.
21aeb96 (origin/master, origin/HEAD, master) Prepare specs repository for Pike
83d1a0a Implement reserved_host, auto_priority and rh_priority recovery methods
4e746cb Add periodic task to clean up workflow failure
2c10be4 Add spec repo structure
a82016f Added .gitreview

Do you really want to submit the above commits?
Type 'yes' to confirm, other to cancel: no
Aborting.
gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$

Seems like my clone picked up someone else’s open commit ?

Any help would be appreciated,
The full log of my git session is below,
[...]

The output doesn't show any open changes, but rather seems to
indicate that the parent is the commit at the tip of origin/master.
This condition shouldn't normally happen unless Gerrit doesn't
actually know about any of those commits for some reason.

One thing, I notice your `git review -s` output in your log was
empty. Make sure the output of `git remote show gerrit` looks
something like this (obviously with your username in place of mine):

 * remote gerrit
   Fetch URL: 
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
   Push  URL: 
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
   HEAD branch: master
   Remote branch:
 master tracked
   Local ref configured for 'git push':
 master pushes to master (up to date)

Using git-review 1.25.0 I attempted to replicate the issue like
this, but everything worked normally:

 fungi@dhole:~/work/openstack/openstack$ git clone 
https://github.com/openstack/masakari-specs.git
 Cloning into 'masakari-specs'...
 remote: Counting objects: 61, done.
 remote: Total 61 (delta 0), reused 0 (delta 0), pack-reused 61
 Unpacking objects: 100% (61/61), done.
 fungi@dhole:~/work/openstack/openstack$ cd masakari-specs/
 fungi@dhole:~/work/openstack/openstack/masakari-specs$ git log
 commit 21aeb965acea0b3ebe8448715bb88df4409dd402
 Author: Abhishek Kekane 
>
 Date:   Wed Apr 19 16:00:53 2017 +0530

 Prepare specs repository for Pike

 Add directories, index file, and template symlinks for Pike specs.

 Change-Id: I7dce74430e4569a5978f8f4b953db3b20125c53e

 commit 83d1a0aae17e4e8110ac64c7975a8520592712f9
 Author: Abhishek Kekane 
>
 Date:   Fri Jan 20 12:00:12 2017 +0530

 Implement reserved_host, auto_priority and rh_priority recovery methods

 Implements: bp implement-recovery-methods
 Change-Id: I83ce204d8f25b240fa6ce723dc15192ae9b4e191

 commit 4e746cb5a39df5aa833ab32ce7ba961637753a15
 Author: Abhishek Kekane 
>
 Date:   Fri Jan 20 11:38:09 2017 +0530

 fungi@dhole:~/work/openstack/openstack/masakari-specs$ git review -s
 Creating a git remote called 'gerrit' that maps to:
 

Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

2017-05-30 Thread Waines, Greg
Thanks Jeremy ... the remote gerrit setting was my problem ... I had it set to 
Vitrage because I am also doing some work there.

I switched it to masakari for this work and was able to submit my spec.

thanks again,
Greg.

From: Jeremy Stanley 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, May 29, 2017 at 3:39 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [masakari] Intrusive Instance Monitoring

On 2017-05-29 14:48:10 + (+), Waines, Greg wrote:
Was just trying to submit my spec for Intrusive Instance
Monitoring for review.
And I get the following warning after committing when I do the
‘git review’
gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$ git review
You are about to submit multiple commits. This is expected if you are
submitting a commit that is dependent on one or more in-review
commits. Otherwise you should consider squashing your changes into one
commit before submitting.
The outstanding commits are:
f09deee (HEAD -> myBranch) Initial draft specification of Intrusive Instance 
Monitoring.
21aeb96 (origin/master, origin/HEAD, master) Prepare specs repository for Pike
83d1a0a Implement reserved_host, auto_priority and rh_priority recovery methods
4e746cb Add periodic task to clean up workflow failure
2c10be4 Add spec repo structure
a82016f Added .gitreview
Do you really want to submit the above commits?
Type 'yes' to confirm, other to cancel: no
Aborting.
gwaines@gwaines-VirtualBox:~/openstack/masakari-specs$
Seems like my clone picked up someone else’s open commit ?
Any help would be appreciated,
The full log of my git session is below,
[...]

The output doesn't show any open changes, but rather seems to
indicate that the parent is the commit at the tip of origin/master.
This condition shouldn't normally happen unless Gerrit doesn't
actually know about any of those commits for some reason.

One thing, I notice your `git review -s` output in your log was
empty. Make sure the output of `git remote show gerrit` looks
something like this (obviously with your username in place of mine):

* remote gerrit
  Fetch URL: 
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
  Push  URL: 
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
  HEAD branch: master
  Remote branch:
master tracked
  Local ref configured for 'git push':
master pushes to master (up to date)

Using git-review 1.25.0 I attempted to replicate the issue like
this, but everything worked normally:

fungi@dhole:~/work/openstack/openstack$ git clone 
https://github.com/openstack/masakari-specs.git
Cloning into 'masakari-specs'...
remote: Counting objects: 61, done.
remote: Total 61 (delta 0), reused 0 (delta 0), pack-reused 61
Unpacking objects: 100% (61/61), done.
fungi@dhole:~/work/openstack/openstack$ cd masakari-specs/
fungi@dhole:~/work/openstack/openstack/masakari-specs$ git log
commit 21aeb965acea0b3ebe8448715bb88df4409dd402
Author: Abhishek Kekane 
>
Date:   Wed Apr 19 16:00:53 2017 +0530

Prepare specs repository for Pike

Add directories, index file, and template symlinks for Pike specs.

Change-Id: I7dce74430e4569a5978f8f4b953db3b20125c53e

commit 83d1a0aae17e4e8110ac64c7975a8520592712f9
Author: Abhishek Kekane 
>
Date:   Fri Jan 20 12:00:12 2017 +0530

Implement reserved_host, auto_priority and rh_priority recovery methods

Implements: bp implement-recovery-methods
Change-Id: I83ce204d8f25b240fa6ce723dc15192ae9b4e191

commit 4e746cb5a39df5aa833ab32ce7ba961637753a15
Author: Abhishek Kekane 
>
Date:   Fri Jan 20 11:38:09 2017 +0530

fungi@dhole:~/work/openstack/openstack/masakari-specs$ git review -s
Creating a git remote called 'gerrit' that maps to:
ssh://fu...@review.openstack.org:29418/openstack/masakari-specs.git
fungi@dhole:~/work/openstack/openstack/masakari-specs$ git checkout -b 
myBranch
Switched to a new branch 'myBranch'
fungi@dhole:~/work/openstack/openstack/masakari-specs$ cp 
doc/source/specs/pike/implemented/pike-template.rst 
doc/source/specs/pike/implemented/vmHeartbeat.masa
kari.specfile.rst
fungi@dhole:~/work/openstack/openstack/masakari-specs$ git add 
specs/pike/implemented/vmHeartbeat.masakari.specfile.rst
fungi@dhole:~/work/openstack/openstack/masakari-specs$ git commit
[myBranch 9e5c70e] Test commit
 1 file changed, 389 insertions(+)
 create mode 100644 specs/pike/implemented/vmHeartbeat.masakari.specfile.rst
fungi@dhole:~/work/openstack/openstack/masakari-specs$ git review
remote: Processing changes: new: 1, refs: 1, done

Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Spyros Trigazis
FYI, there is already a Cinder volume driver for Docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2], which could support Manila
too. Rexray also supports the popular cloud providers.

Magnum's Docker Swarm cluster driver already leverages rexray for Cinder
integration [3].

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3]
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen  wrote:

> Hi John & Ben:
>  I have committed a patch [1] to add a new repository to OpenStack. Please
> take a look at it. Thanks very much!
>
>  [1]: https://review.openstack.org/#/c/468635
>
> Best Wishes!
> zengchen
>
>
>
>
>
> 在 2017-05-26 21:30:48,"John Griffith"  写道:
>
>
>
> On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:
>
>>
>> Hi john:
>> I have seen your updates on the bp. I agree with your plan on how to
>> develop the code.
>> However, there is one issue I should remind you of: at present,
>> Fuxi can convert not only
>>  Cinder volumes but also Manila file shares for Docker. So, do you plan to
>> include the Manila part of the code
>>  in the new Fuxi-golang?
>>
> Agreed, that's a really good and important point.  Yes, I believe Ben
> Swartzlander
>
> is interested, we can check with him and make sure but I certainly hope
> that Manila would be interested.
>
>> Besides, IMO, it is better to create a repository for Fuxi-golang, because
>>  Fuxi is an OpenStack project,
>>
> Yeah, that seems fine; I just didn't know if there needed to be any more
> conversation with other folks on any of this before charging ahead on new
> repos etc.  Doesn't matter much to me though.
>
>
>>
>>Thanks very much!
>>
>> Best Wishes!
>> zengchen
>>
>>
>>
>>
>> At 2017-05-25 22:47:29, "John Griffith"  wrote:
>>
>>
>>
>> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>>
>>> Very sorry to forget attaching the link for the bp of rewriting Fuxi with go
>>> language.
>>> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>>>
>>>
>>> At 2017-05-25 19:46:54, "zengchen"  wrote:
>>>
>>> Hi guys:
>>> hongbin had committed a bp of rewriting Fuxi with go language[1]. My
>>> question is where to commit the code for it.
>>> We have two choices: 1. create a new repository, or 2. create a new branch.
>>> IMO, the first one is much better, because
>>> there are many differences at the infrastructure layer, such as CI.
>>> What's your opinion? Thanks very much
>>>
>>> Best Wishes
>>> zengchen
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> Hi Zengchen,
>>
>> For now I was thinking just use Github and PR's outside of the OpenStack
>> projects to bootstrap things and see how far we can get.  I'll update the
>> BP this morning with what I believe to be the key tasks to work through.
>>
>> Thanks,
>> John
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [L2-Gateway] Query on redundant configuration of OpenStack's L2 gateway

2017-05-30 Thread Ricardo Noriega De Soto
Hi Ran Xiao,

Please, take a look at this doc I found online:

https://docs.google.com/document/d/1U7M78uNhBp8eZIu0YH04u6Vzj6t2NpvDY4peX8svsXY/edit#heading=h.xochfa5fqf06

You might want to contact those folks!

Cheers

On Tue, May 23, 2017 at 1:21 PM, Ran Xiao  wrote:

> Hi All,
>
>   I have a query on usage of L2GW NB API.
>   I have to integrate L2GW with ODL.
>   And there are two L2GW nodes named l2gw1 and l2gw2.
>   OVS HW VTEP Emulator is running on each node.
>   Does the following command work for configuring these two nodes as an L2GW
> HA cluster?
>
>   neutron l2-gateway-create gw_name --device name=l2gw1,interface_names=eth2 \
> --device name=l2gw2,interface_names=eth2
>
>   Version : stable/ocata
>
>   Thanks in advance.
>
> BR,
> Ran Xiao
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode


Re: [openstack-dev] [vitrage] error handling

2017-05-30 Thread Yujun Zhang (ZTE)
On Tue, May 30, 2017 at 3:59 PM Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

> Hi Yujun,
>
>
>
> You started an interesting discussion. I think that the distinction
> between an operational error and a programmer error is correct and we
> should always keep that in mind.
>
>
>
> I agree that having an overall design for error handling in Vitrage is a
> good idea; but I disagree that until then we better let it crash.
>
>
>
> I think that Vitrage is made out of many pieces that don’t necessarily
> depend on one another. For example, if one datasource fails, everything
> else can work as usual – so why crash? Similarly, if one template fails to
> load, all other templates can still be activated.
>

This usually (or always?) happens during the initialization phase, doesn't it?
That is a phase with humans inspecting the system, and such errors should be
detected during deployment or user acceptance testing. So if something fails,
it is better to isolate the failing pieces before continuing to run, e.g.
correct the invalid template or invalid data source configuration, or remove
the template and disable the data source. This is because such errors are
permanent and will not recover automatically.

Here we need to distinguish the case where a data source is temporarily
unavailable due to a network connection issue or because it is not up yet. In
that case, I agree we had better start the rest of the components and retry
periodically until it recovers.


> Another aspect is that the main purpose of Vitrage is to provide insights.
> In case of a failure in one datasource/template, some of the insights might
> be missing. But this will not lead to inaccurate behavior or to wrong
> actions being executed in the system. IMO, we should give the user as much
> information as possible given that we have only part of the input.
>

I agree, if enough insight could be provided by the running system. We can
improve the handling of permanent errors. What would be even better is support
for hot-loading components and templates.

What I don't like much is that sometimes errors are handled but reported
without enough detail. In such cases, a crash with a stack trace is more useful
than a user-"friendly" message like "failed to start xxx component" or "invalid
configuration file" (I'm not talking about Vitrage; this is quite common in
many projects).

My preference is "good error handling" > "no error handling" > "bad error
handling", though it is difficult to distinguish good error handling from
bad...

Regarding the use cases that you mentioned:
>
>
>
>1. invalid configuration file
>
> [Ifat] This should depend on the specific configuration. If keystone is
> misconfigured, nothing will work of course. But if for example Zabbix is
> misconfigured, Vitrage should work and show the topology and the non-Zabbix
> alarms.
>

Agreed. It should be handled differently depending on the kind of error and how
critical it is.


>
>    2. failed to communicate with data source
>
> [Ifat] I think that the error should be logged, and all other datasources
> should work as usual.
>

Yes, and it would be good to have a retry mechanism
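Not part of Vitrage itself, but a minimal sketch of the kind of periodic retry I have in mind (all names here are illustrative, not real Vitrage APIs) could be:

```python
import time

def retry(fn, attempts=5, delay=0.01, backoff=2):
    """Call fn until it succeeds, sleeping between attempts with
    exponential backoff; re-raise the last error when attempts run out."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
            delay *= backoff

# Illustrative "data source" that is unreachable for the first two calls.
calls = {"n": 0}

def fetch_topology():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("data source not up yet")
    return {"hosts": 2}

print(retry(fetch_topology))  # the third attempt succeeds: {'hosts': 2}
```

A real implementation would of course run this from a periodic task rather than a blocking loop, and log each failed attempt.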


>
>    3. malformed data from data source
>
> [Ifat] I think that the error should be logged, and all other datasources
> should work as usual. This problem means we must modify the code in the
> datasource itself, but until then Vitrage should work, right?
>
Yes, I think this can happen when the data source version changes; we should
discard the data and indicate the error. The other parts should not be
affected.


>    4. failed to execute an action
>
> [Ifat] Again, that’s a problem that requires code changes; but why fail
> other actions?
>

What I meant here is a temporary failure, e.g. when you try to mark a host down
but are not able to reach it due to a network connection issue or other reasons


>1. ...
>
> BTW, it might be a good idea to add API/UI for showing the configuration
> and the status of the datasources. We all know that errors in the log files
> are often ignored…
>

Sure, the errors I mentioned above are what system operators could encounter
even with a correct configuration, and they are not related to software bugs.
Displaying them in the UI would be very helpful. The log files are more for
engineers to analyse the root cause.


> Best Regards,
>
> Ifat.
>
>
>
>
>
> *From: *"Yujun Zhang (ZTE)" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Monday, 29 May 2017 at 16:13
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [vitrage] error handling
>
>
>
> Brought up by a recent code review, I think it worth a thorough discussion
> about the error handling rule.
>
>
>
> I once read an article[1] from Joyent that impressed me with its
> distinction between *operational* errors and *programmer* errors. The
> article is written for Node.js, but the principle also applies to other
> programming languages.

Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-30 Thread Carlos Camacho Gonzalez
Hi Shinobu,

It's really helpful to get feedback from customers. Can you please give me
details about the failures you are having? Sending me some logs directly
would be great.

Thanks,
Carlos.
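As an aside, the dry-run check requested in the quoted message below could be sketched with Python's `tarfile` module: open the archive and read every member to the end, which catches truncated or corrupt backups before they are needed. This only proves the archive is readable, not that the restore procedure itself works; `verify_backup` is a hypothetical helper, not part of TripleO:

```python
import io
import os
import tarfile
import tempfile


def verify_backup(path):
    """Dry-run check of a gzipped tar backup: every member must be readable.

    Reading each file member to EOF forces full decompression, so a
    truncated or corrupt archive raises an exception here instead of
    during a real restore.
    """
    with tarfile.open(path, "r:gz") as tar:
        for member in tar:
            if member.isfile():
                f = tar.extractfile(member)
                while f.read(1 << 16):  # read to EOF; raises on corruption
                    pass
    return True


# Demo: build a tiny archive (as `tar -czf` would) and verify it.
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "undercloud-backup.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    data = b"dummy undercloud state"
    info = tarfile.TarInfo("etc/dummy.conf")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

ok = verify_backup(archive)
```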

On Mon, May 29, 2017 at 9:07 AM, Shinobu Kinjo  wrote:
>
> Here is feedback from the customer.
>
> Following the guide [1], undercloud restoration did not succeed.
>
> Swift objects could not be downloaded after restoration, even though they
> followed all the backup/restore procedures for their system described
> in [1].
>
> Given that, I'm not 100% sure whether `tar -czf` is good enough to take a
> backup of the system.
>
> It would be a great help to be able to do a dry run against the backed-up
> data so that we can make sure it is completely fine.
>
> [1]
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/back_up_and_restore_the_undercloud
>
>
> On Wed, May 24, 2017 at 4:26 PM, Carlos Camacho Gonzalez
>  wrote:
> > Hey folks,
> >
> > Based on what we discussed yesterday in the TripleO weekly team meeting,
> > I'd like to propose a blueprint to create two features, basically to back
> > up and restore the Undercloud.
> >
> > In the first iteration, I'd like to follow the available docs for this
> > purpose [1][2].
> >
> > In addition, the config files under /etc/ would be backed up, specifically
> > to be able to recover from a failed Undercloud upgrade, i.e. to recover
> > the repos info removed in [3].
> >
> > I'd like to target this for P, as I think I have enough time for
> > coding/testing these features.
> >
> > I already have created a blueprint to track this effort
> > https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
> >
> > What do you think about it?
> >
> > Thanks,
> > Carlos.
> >
> > [1]:
> >
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/restore
> >
> > [2]:
> >
https://docs.openstack.org/developer/tripleo-docs/post_deployment/backup_restore_undercloud.html
> >
> > [3]:
> >
https://docs.openstack.org/developer/tripleo-docs/installation/updating.html
> >
> >
> >
__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] error handling

2017-05-30 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Yujun,

You started an interesting discussion. I think that the distinction between an 
operational error and a programmer error is correct and we should always keep 
that in mind.

I agree that having an overall design for error handling in Vitrage is a good 
idea; but I disagree that until then we better let it crash.

I think that Vitrage is made out of many pieces that don’t necessarily depend 
on one another. For example, if one datasource fails, everything else can work 
as usual – so why crash? Similarly, if one template fails to load, all other 
templates can still be activated.
Another aspect is that the main purpose of Vitrage is to provide insights. In 
case of a failure in one datasource/template, some of the insights might be 
missing. But this will not lead to inaccurate behavior or to wrong actions 
being executed in the system. IMO, we should give the user as much information 
as possible given that we have only part of the input.

Regarding the use cases that you mentioned:


  1.  invalid configuration file
[Ifat] This should depend on the specific configuration. If keystone is 
misconfigured, nothing will work of course. But if for example Zabbix is 
misconfigured, Vitrage should work and show the topology and the non-Zabbix 
alarms.


  1.  failed to communicate with data source
[Ifat] I think that the error should be logged, and all other datasources 
should work as usual.


  1.  malformed data from data source

[Ifat] I think that the error should be logged, and all other datasources 
should work as usual. This problem means we must modify the code in the 
datasource itself, but until then Vitrage should work, right?


  1.  failed to execute an action
[Ifat] Again, that’s a problem that requires code changes; but why fail other 
actions?


  1.  ...
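The recurring "log the error, and all other datasources should work as usual" answer above amounts to isolating each datasource poll from the others. A minimal sketch (the `poll_all` helper and its callables are illustrative, not Vitrage code):

```python
import logging

LOG = logging.getLogger(__name__)


def poll_all(datasources):
    """Poll every datasource; a failure in one must not stop the others.

    'datasources' is a mapping of name -> zero-argument poll callable.
    Failed sources are collected and reported instead of crashing the
    whole collector, so the failure status can later be exposed via
    an API/UI as suggested above.
    """
    results, failures = {}, {}
    for name, poll in datasources.items():
        try:
            results[name] = poll()
        except Exception as exc:  # operational failure: isolate and record it
            failures[name] = str(exc)
            LOG.error("datasource %s failed: %s", name, exc)
    return results, failures


# Demo: one healthy datasource, one misconfigured one.
def _nova_poll():
    return {"servers": 3}


def _zabbix_poll():
    raise RuntimeError("zabbix misconfigured")


results, failures = poll_all({"nova": _nova_poll, "zabbix": _zabbix_poll})
```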

BTW, it might be a good idea to add API/UI for showing the configuration and 
the status of the datasources. We all know that errors in the log files are 
often ignored…

Best Regards,
Ifat.


From: "Yujun Zhang (ZTE)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 29 May 2017 at 16:13
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [vitrage] error handling

Prompted by a recent code review, I think it is worth having a thorough 
discussion about the error-handling rules.

I once read an article[1] from Joyent that impressed me with its distinction 
between operational errors and programmer errors. The article is written for 
Node.js, but the principle also applies to other programming languages.

The basic rules recommended by Joyent are:

  1.  Handle operational errors
  2.  Do not handle programmer errors

There is also one rule in the OpenStack style guidelines[2] that is close to 
this idea.

[H201] Do not write except:, use except Exception: at the very least. When 
catching an exception you should be as specific as possible so you don't 
mistakenly catch unexpected exceptions.

I do think that until we have a well-designed error-handling strategy, it is 
better to let it crash. It is dangerous to hide errors and keep the system 
running in an undetermined state.
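In Python terms, the two rules combine into: catch the expected operational error as specifically as possible, and let programmer errors propagate ("let it crash"). An illustrative sketch; `load_template` is a made-up helper, not Vitrage code:

```python
def load_template(path):
    """Catch only the specific, expected (operational) failure.

    A missing or unreadable template file is an operational error we can
    report and skip; anything else is a programmer error and should
    propagate loudly rather than be swallowed by a bare 'except:'.
    """
    try:
        with open(path) as f:
            return f.read()
    except OSError:  # operational: file missing, bad permissions, ...
        return None  # caller treats this template as "not loaded"


# Operational error: handled, the system keeps running.
missing = load_template("/no/such/template.yaml")

# Programmer error (wrong argument type): not caught, crashes loudly.
try:
    load_template(None)
    crashed = False
except TypeError:
    crashed = True
```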

So the question is: what kinds of operational errors are we facing in 
Vitrage? I can think of something like:

  1.  invalid configuration file
  2.  failed to communicate with data source
  3.  malformed data from data source
  4.  failed to execute an action
  5.  ...
Maybe this could be the first step for the error handling design.

[1]: https://www.joyent.com/node-js/production/design/errors
[2]: https://docs.openstack.org/developer/hacking/

--
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][freezer] adopting oslo.context for logging debugging and tracing

2017-05-30 Thread Renat Akhmerov
Ok, thanks Tuan.

Renat Akhmerov
@Nokia

On 29 May 2017, 17:38 +0700, lương hữu tuấn , wrote:
> Hi Renat,
>
> Kong is going to move on with this patch, and then I will continue with the 
> main problem of the trust token in Mistral.
>
> Br,
>
> Tuan/Nokia
>
> > On Mon, May 29, 2017 at 9:20 AM, Renat Akhmerov  
> > wrote:
> > > Tuan,
> > >
> > > It’s OK; people don't always have cycles to finish something upstream. 
> > > No need to explain that. All I’m worried about is getting this thing 
> > > done. So if you don’t have the capacity, please help transfer this work 
> > > to someone else.
> > >
> > > Thanks
> > >
> > > Renat
> > >
> > > On 29 May 2017, 13:36 +0700, lương hữu tuấn , 
> > > wrote:
> > > > Hi Doug and Renat,
> > > >
> > > > I totally agree with what Doug mentioned in the previous mail. In 
> > > > fact, my patch is only intended to implement the trust token, not to 
> > > > fully refactor the Mistral context. Since I do not have the capacity 
> > > > to contribute to Mistral, my commit is currently for Nokia's need to 
> > > > use a token after it has expired.
> > > >
> > > > From the very beginning, I wanted to refactor the Mistral context to 
> > > > fully use oslo.context. But refactoring the whole Mistral context 
> > > > would take all of my capacity for upstream work, which is not 
> > > > available to me. By the way, thanks Doug for reviewing it; I know all 
> > > > the issues in your comments, but as I said, upstream work was hard 
> > > > for me. I will re-arrange my schedule to address Doug's comments in 
> > > > the patch, as well as the switch to oslo.context.
> > > >
> > > > @Renat: I will try to refactor the whole Mistral context, so there 
> > > > will not be any roadblocks.
> > > >
> > > > Br,
> > > >
> > > > Tuan
> > > >
> > > > > On Sat, May 27, 2017 at 2:08 AM, Vitaliy Nogin  
> > > > > wrote:
> > > > > > Hi Doug,
> > > > > >
> > > > > > Anyway, thank for the notification. We are really appreciated.
> > > > > >
> > > > > > Regards,
> > > > > > Vitaliy
> > > > > >
> > > > > > > 26 мая 2017 г., в 20:54, Doug Hellmann  
> > > > > > > написал(а):
> > > > > > >
> > > > > > > Excerpts from Saad Zaher's message of 2017-05-26 12:03:24 +0100:
> > > > > > >> Hi Doug,
> > > > > > >>
> > > > > > >> Thanks for your review. Actually freezer has a separate repo for 
> > > > > > >> the api,
> > > > > > >> it can be found here [1]. Freezer is using oslo.context since 
> > > > > > >> newton. If
> > > > > > >> you have the time you can take a look at it and let us know if 
> > > > > > >> you have any
> > > > > > >> comments.
> > > > > > >
> > > > > > > Ah, that explains why I couldn't find it in the freezer repo. :-)
> > > > > > >
> > > > > > > Doug
> > > > > > >
> > > > > > >>
> > > > > > >> Thanks for your help
> > > > > > >>
> > > > > > >> [1] https://github.com/openstack/freezer-api
> > > > > > >>
> > > > > > >> Best Regards,
> > > > > > >> Saad!
> > > > > > >>
> > > > > > >> On Fri, May 26, 2017 at 5:45 AM, Renat Akhmerov 
> > > > > > >> 
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >>> Thanks Doug. We’ll look into this.
> > > > > > >>>
> > > > > > >>> @Tuan, is there any roadblocks with the patch you’re working 
> > > > > > >>> on? [1]
> > > > > > >>>
> > > > > > >>> [1] https://review.openstack.org/#/c/455407/
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> Renat
> > > > > > >>>
> > > > > > >>> On 26 May 2017, 01:54 +0700, Doug Hellmann 
> > > > > > >>> , wrote:
> > > > > > >>>
> > > > > > >>> The new work to add the exception information and request ID 
> > > > > > >>> tracing
> > > > > > >>> depends on using both oslo.context and oslo.log to have all of 
> > > > > > >>> the
> > > > > > >>> relevant pieces of information available as log messages are 
> > > > > > >>> emitted.
> > > > > > >>>
> > > > > > >>> In the course of reviewing the "done" status for those 
> > > > > > >>> initiatives,
> > > > > > >>> I noticed that although mistral and freezer are using oslo.log,
> > > > > > >>> neither uses oslo.context. That means neither project will get 
> > > > > > >>> the
> > > > > > >>> extra debugging information, and neither project will see the 
> > > > > > >>> global
> > > > > > >>> request ID in logs.
> > > > > > >>>
> > > > > > >>> I started looking at updating mistral's context to use 
> > > > > > >>> oslo.context
> > > > > > >>> as a base class, but ran into some issues because of some 
> > > > > > >>> extensions
> > > > > > >>> made to the existing class. I wasn't able to find where freezer 
> > > > > > >>> is
> > > > > > >>> doing anything at all with an API request context.
> > > > > > >>>
> > > > > > >>> I'm available to help, if someone else wants to pick up the 
> > > > > > >>> work.
> > > > > > >>>
> > > > > > >>> Doug
> > > > > > >>>
> > > > > > >>>