Re: [openstack-dev] [masakari] A current high-level description of masakari architecture

2017-11-01 Thread Sam P
Hi Greg,

 [1] is pretty out-of-date, and [2] is much closer to what we have now.
 Also, I am updating [2] with current details for the Masakari project
onboarding session.
 I will update the wiki and share the slides soon.
 Until then, [2] is the best option we have.
 [1] https://www.slideshare.net/masahito12/masakari-virtual-machine-high-availability-for-openstack
 [2] https://wiki.openstack.org/wiki/Masakari#Architecture
--- Regards,
Sampath



On Thu, Nov 2, 2017 at 7:50 AM, Waines, Greg  wrote:
> Is there a ‘current’ high-level description of the openstack masakari
> architecture ?
>
>
>
> I can only find these:
>
> https://wiki.openstack.org/wiki/Masakari#Architecture
>
> https://www.slideshare.net/masahito12/masakari-virtual-machine-high-availability-for-openstack
>
> but I am pretty sure these are out-of-date.
>
>
>
> let me know,
>
> Greg.
>
>



[openstack-dev] [neutron][networking-odl]

2017-11-01 Thread Bhatia, Manjeet S
Hello Neutrinos,

I've been trying service profile flavors for L3 in Neutron to register the
driver; the method I've been using is below.

I have this added to neutron.conf
[service_providers]
service_provider = L3_ROUTER_NAT:ODL:networking_odl.l3.l3_flavor.ODLL3ServiceProvider:default

and then tried creating the flavor profile:
[a]. openstack network flavor profile create --driver networking_odl.l3.l3_flavor.ODLL3ServiceProvider

It returns NotFoundException: Not Found (HTTP 404) (Request-ID: 
req-6991a8ab-b785-4160-96d6-e496d7667f15), Service Profile driver 
networking_odl.l3.l3_flavor.ODLL3ServiceProvider could not be found.

Here is the trace from the log: http://paste.openstack.org/show/625178/. It
seems the resource was not found.

I also noticed that in [1] self.config is {}.


[1]. 
https://github.com/openstack/neutron/blob/master/neutron/db/servicetype_db.py#L55


Any pointers on what's being done wrong? Is it how the service provider
flavor is enabled, or something else?
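
For context, the full workflow I would expect to work is roughly (a sketch;
the flavor name "odl-l3" and the profile ID are placeholders):

openstack network flavor profile create \
    --driver networking_odl.l3.l3_flavor.ODLL3ServiceProvider
openstack network flavor create --service-type L3_ROUTER_NAT odl-l3
openstack network flavor add profile odl-l3 <PROFILE_ID>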



Thanks and regards!
Manjeet






[openstack-dev] [masakari] A current high-level description of masakari architecture

2017-11-01 Thread Waines, Greg
Is there a ‘current’ high-level description of the openstack masakari 
architecture ?

I can only find these:
https://wiki.openstack.org/wiki/Masakari#Architecture
https://www.slideshare.net/masahito12/masakari-virtual-machine-high-availability-for-openstack
but I am pretty sure these are out-of-date.

let me know,
Greg.


Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-01 Thread Vahric MUHTARYAN
Hello Ricardo,

Thanks for your explanation and answers.
One more question: what is the possibility of keeping Newton (which I
currently have) and using the latest Magnum features, like swarm mode,
without upgrading OpenStack? Is that possible?

Regards
VM

On 30.10.2017 01:19, "Ricardo Rocha"  wrote:

Hi Vahric.

On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN  
wrote:
> Hello All ,
>
>
>
> I found some blueprint about supporting Docker Swarm Mode
> https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
>
>
>
> I understood that the related development is not finished yet, that there
> is no OpenStack or Magnum version to test it with, and that there are more
> things left to do.
>
> Could you pls inform when we should expect support of Docker Swarm Mode ?

Swarm mode is already available in Pike:
https://docs.openstack.org/releasenotes/magnum/pike.html

> Another question: Fedora Atomic is good, but it looks like it's not
> up-to-date for docker. Instead of using Fedora Atomic, why do you not use
> Ubuntu or some other OS and directly install docker at the requested
> version?

Atomic also has advantages (immutable, etc.); it's working well for us
at CERN. There are also Suse and CoreOS drivers, but I'm not familiar
with those.

Most pieces have moved to Atomic system containers, including all
kubernetes components, so the versions are decoupled from the Atomic
version.

We've also deployed locally a patch running docker itself in a system
container; this will land upstream with:
https://bugs.launchpad.net/magnum/+bug/1727700

With this we allow our users to deploy clusters with any docker
version (selectable with a label), currently up to 17.09.

> And last, how could we help with the items under “Next working items:”?

I'll let Spyros reply to this and give you more info on the above items too.

Regards,
  Ricardo

>
>
>
> Regards
>
> Vahric Muhtaryan
>
>







Re: [openstack-dev] [all] ujson "drop in" replacement

2017-11-01 Thread gordon chung


On 01/11/17 03:00 PM, Graham Hayes wrote:
> There seem to be a lot of "replace oslo.serialization / native python json
> with UltraJSON (otherwise known as ujson)" patches over the last few
> weeks.
>
> We should be careful - it is not a drop in replacement. e.g. -

rofl. i guess someone took our patch and ran with it.

for the record, we don't have an api in ceilometer, and we also benchmarked
the performance when we originally adopted it in gnocchi[1] (spoiler: it's
much faster). but yeah, it dumps differently in some cases.
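
for illustration, the divergence is in the serialized form, not the data; a
quick interpreter check (a sketch, reusing the example from the original
mail):

>>> import json, ujson
>>> d = {"url": "https://google.com"}
>>> json.loads(ujson.dumps(d)) == d   # round-trips to the same object
True
>>> ujson.dumps(d) == json.dumps(d)   # but the serialized strings differ
False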

[1] https://review.openstack.org/#/c/386744

cheers,

-- 
gord


[openstack-dev] [keystone] [policy] Meetings next week

2017-11-01 Thread Lance Bragstad
Hey all,

Just a reminder that we won't be holding a team meeting or a policy
meeting next week due to the summit. We will resume our normal
meeting schedule the week of November 12th. Safe travels for those going
to Sydney!


Thanks,

Lance






[openstack-dev] [keystone] Forum etherpads

2017-11-01 Thread Lance Bragstad
Hey all,

I've bootstrapped the etherpads for our forum sessions with a little
content and they've been linked to the etherpad wiki [0]. Please add
additional context as you see fit.

Thanks,

Lance


[0] https://wiki.openstack.org/wiki/Forum/Sydney2017






[openstack-dev] [all] ujson "drop in" replacement

2017-11-01 Thread Graham Hayes
Hey all,

There seem to be a lot of "replace oslo.serialization / native python json
with UltraJSON (otherwise known as ujson)" patches over the last few
weeks.

We should be careful - it is not a drop in replacement. e.g. -

Normal python JSON:

>>> import json
>>> json.dumps({"url":"https://google.com"})
'{"url": "https://google.com"}'

ujson:

>>> import ujson as json
>>> json.dumps({"url":"https://google.com"})
'{"url":"https:\\/\\/google.com"}'

It is not currently in use in many projects:

curl -X POST http://codesearch.openstack.org/api/v1/search -F
q=ujson -F repos='*' -F files="requirements\.txt" -f  -s | jq '.Results
| keys'
[
  "ceilometer",
  "kiloeyes",
  "monasca-common",
  "requirements",
  "rpm-packaging"
]

I personally do not see the point in adding another dependency that has
weird behaviour for an as yet unmeasured performance increase.

- Graham





[openstack-dev] [tripleo][quickstart] Trying to create a release config for a Master UC and Newton OC

2017-11-01 Thread Lee Yarwood
Hello all,

I'm attempting to save future contributors to the fast forward upgrades
feature some time by introducing a quickstart release config that deploys a
Master UC and Newton OC:

config: Provide a Master UC and Newton OC release config
https://review.openstack.org/#/c/511464/

This initial attempt did appear to work when I created the review some weeks
ago, but it now results in a Pike OC. I'm trying to avoid this by adding
repos, repo_cmd_before, and repo_cmd_after, as in other release configs,
without any luck:

$ cat $WD/config/release/master-undercloud-newton-overcloud.yml
release: master
overcloud_release: newton
undercloud_image_url: https://images.rdoproject.org/master/delorean/current-tripleo/stable/undercloud.qcow2
ipa_image_url: https://images.rdoproject.org/master/delorean/current-tripleo/stable/ironic-python-agent.tar
overcloud_image_url: https://images.rdoproject.org/newton/delorean/consistent/stable/overcloud-full.tar
images:
  - name: undercloud
    url: "{{ undercloud_image_url }}"
    type: qcow2
  - name: overcloud-full
    url: "{{ overcloud_image_url }}"
    type: tar
  - name: ipa_images
    url: "{{ ipa_image_url }}"
    type: tar

repos:
  - type: file
    filename: delorean.repo
    down_url: https://trunk.rdoproject.org/centos7-newton/current/delorean.repo

  - type: file
    filename: delorean-deps.repo
    down_url: http://trunk.rdoproject.org/centos7-newton/delorean-deps.repo

repo_cmd_before: |
  sudo yum clean all;
  sudo yum-config-manager --disable "*"
  sudo rm -rf /etc/yum.repos.d/delorean*;
  sudo rm -rf /etc/yum.repos.d/*.rpmsave;

repo_cmd_after: |
  sudo yum repolist;
  sudo yum update -y

This still results in a Pike OC, even though the original overcloud-full
image on the virthost uses the Newton repos:

$ bash quickstart.sh -w $WD \
 -t all \
 -c config/general_config/minimal-keystone-only.yml \
 -R master-undercloud-newton-overcloud \
 -N config/nodes/1ctlr_keystone.yml $VIRTHOST
[..]
$ ssh -F $WD/ssh.config.ansible virthost 
[..]
$ virt-cat -a overcloud-full.qcow2 /etc/yum.repos.d/delorean.repo
[delorean]
name=delorean-instack-undercloud-61e201bd3cf65e931cc865a1018cf9441e50dab8
baseurl=https://trunk.rdoproject.org/centos7-newton/61/e2/61e201bd3cf65e931cc865a1018cf9441e50dab8_be559bb4
enabled=1
gpgcheck=0

$ ssh -F $WD/ssh.config.ansible undercloud
[..]
$ virt-cat -a overcloud-full.qcow2 /etc/yum.repos.d/delorean.repo
[delorean]
name=delorean
baseurl=https://trunk.rdoproject.org/centos7-master/f4/42/f442a3aa35981c3d6d7e312599dde2a1b1d202c9_0468cca4/
gpgcheck=0
enabled=1
priority=20

$ ssh -F $WD/ssh.config.ansible overcloud-controller-0
[..]
$ cat /etc/yum.repos.d/delorean.repo 
[delorean]
name=delorean
baseurl=https://trunk.rdoproject.org/centos7-master/f4/42/f442a3aa35981c3d6d7e312599dde2a1b1d202c9_0468cca4/
gpgcheck=0
enabled=1
priority=20
$ grep keystone /var/log/yum.log
$

The weird thing is that the repo-setup role doesn't appear to run at all with
the above config. Something is obviously changing the repos and running `yum
update -y` before the overcloud instances are provisioned, but I can't seem
to track it down. Any suggestions would be really appreciated!
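
As a stop-gap I may try forcing the Newton repo back into the image before
deployment; a sketch using libguestfs (the local repo file name and image
path are placeholders), though that only works around the symptom:

$ virt-customize -a overcloud-full.qcow2 \
    --upload delorean-newton.repo:/etc/yum.repos.d/delorean.repo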

Thanks in advance,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76




Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores

2017-11-01 Thread Mooney, Sean K
+1

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, October 25, 2017 3:45 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores
> 
> +1
> 
> On 10/24/2017 10:32 AM, Stephen Finucane wrote:
> > Hey,
> >
> > I'm not actually sure what the protocol is for adding/removing cores
> > to a library project without a PTL, so I'm just going to put this out
> > there: I'd like to propose the following changes to the os-vif core
> team.
> >
> > - Add 'nova-core'
> >
> >    os-vif makes extensive use of objects and we've had a few hiccups
> >    around versionings and the like recently [1][2]. I'd like the
> >    expertise of some of the other nova cores here as we roll this out
> >    to projects other than nova, and I trust those not
> >    interested/knowledgeable in this area to stay away :)
[Mooney, Sean K] In the future, as we start integrating with neutron, we may
want to also extend this to neutron-cores, with the same understanding that
those not interested/knowledgeable in this area continue to focus on neutron.

I also think it continues to be the current os-vif team's role to ensure we
do not break our customers and to understand the interaction with both nova
and neutron of the changes we are making and/or reviewing. That is to say, I
don't want the fact that nova-cores or neutron-cores is added to imply that
only they should make sure os-vif works with nova/neutron. More succinctly,
this change should not be a burden on the nova and neutron teams.


> >
> > - Remove Russell Bryant, Maxime Leroy
> >
> >    These folks haven't been active on os-vif [3][4] for a long time,
> >    and I think they can be safely removed.
> >
> > To the existing core team members, please respond with a yay/nay and
> > we'll wait a week before doing anything.
> >
> > Cheers,
> > Stephen
> >
> > [1] https://review.openstack.org/#/c/508498/
> > [2] https://review.openstack.org/#/c/509107/
> > [3] https://review.openstack.org/#/q/reviewedby:%22Russell+Bryant+%253Crbryant%2540redhat.com%253E%22+project:openstack/os-vif
> > [4] https://review.openstack.org/#/q/reviewedby:%22Maxime+Leroy+%253Cmaxime.leroy%25406wind.com%253E%22+project:openstack/os-vif
> >
> >


[openstack-dev] [TripleO][containers]ironic-inspector

2017-11-01 Thread milanisko k
Folks,

I've got a dilemma right now about how to proceed with containerising
ironic-inspector:

Fat Container
-------------

Put ironic-inspector and dnsmasq into a single container, i.e. consider a
container as a complete inspection-service shipping unit, and use supervisord
inside to fork and monitor both services (a config sketch follows the cons
list below).

Pros:

* decoupling: inspector's dnsmasq isn't used by any other service, which
makes our life simpler, as we won't need to reset the dnsmasq configuration
in case inspector dies (to avoid exposing an unfiltered DHCP service)

* we can use dnsmasq filter (an on-line configuration files updating
facility) driver to limit access to dnsmasq instead of iptables, in a
self-contained "package" that is configured to work together as a single
unit

* we don't have to worry about always scheduling dnsmasq and inspector
containers on a single node (both services are bundled)

* we have a *Spine-Leaf deployment capable & containerised undercloud*

* an *HA capable inspector* service to be reused in overcloud

* an integrated solution, tested to work by upstream CI in inspector
(compatibility, versioning, configuration,...)

Cons:

* inflexibility: container has to be rebuilt to be used with different DHCP
service (filter driver)

* supervisord dependency and the need to refactor current container of
inspector

* 
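
For illustration, a minimal supervisord configuration for such a fat
container could look roughly like this (the paths and options below are
assumptions, not a tested setup):

[supervisord]
nodaemon=true                ; keep supervisord in the foreground as PID 1

[program:ironic-inspector]
command=/usr/bin/ironic-inspector --config-file /etc/ironic-inspector/inspector.conf
autorestart=true

[program:dnsmasq]
command=/usr/sbin/dnsmasq --no-daemon --conf-file=/etc/ironic-inspector/dnsmasq.conf
autorestart=true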

Flat Container
--------------

Put inspector and dnsmasq into separate containers. Use the (current)
iptables driver to protect dnsmasq. IIRC this is the current approach.

Pros:

* containerised undercloud

Cons:

* no decoupling of dnsmasq

* no spine-leaf (iptables)

* containers have to be scheduled together on a single node,

* no HA (iptables driver)

* container won't be cool for overcloud as it won't be HA

Flat container with dnsmasq filter driver
-----------------------------------------

Same as above, but iptables isn't used anymore. Since it's not the current
approach, we'd have to reshape the containers of dnsmasq and inspector to
expose each other's configuration, so that inspector can write the dnsmasq
configuration on the fly (does inotify work in the mounted-directories
case???)

Pros:

* containerised undercloud

* Spine-Leaf

Cons:

* No (easy) HA (dnsmasq would be exposed in case inspector died)

* No decoupling of dnsmasq (shared between multiple services)

* containers to be reshaped to expose the configuration

* overcloud-uncool container (lack of HA)

No Container
------------

We ship inspector as a service and configure dnsmasq to be shut down in case
inspector dies (to prevent exposing an unfiltered DHCP service). We use the
dnsmasq (configuration) filter driver to have a Spine-Leaf deployment capable
undercloud.

Pros:

* Spine-Leaf

Cons:

* no HA inspector (shared dnsmasq?)

* no containers

* no reusable container for overcloud

* we'd have to update the undercloud systemd to shut down dnsmasq in case
inspector dies if we want to use the dnsmasq filter driver

* no decoupling

The Question
------------

What is your take on it?

Cheers,
milan


Re: [openstack-dev] [docs] Documentation new meeting time - Wednesday at 16:00 UTC

2017-11-01 Thread Petr Kovar
Below are the meeting minutes from today's docs team meeting.

===
#openstack-doc: docteam
===


Meeting started by pkovar at 16:00:43 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/docteam/2017/docteam.2017-11-01-16.00.log.html
.



Meeting summary
---

* Docs team vision document  (pkovar, 16:06:19)
  * At final review  (pkovar, 16:06:26)
  * LINK: https://review.openstack.org/#/c/514779/  (pkovar, 16:06:31)
  * Sent a reminder asking for more input  (pkovar, 16:06:37)
  * Let's close by Thu, before the Summit  (pkovar, 16:06:47)

* Docs retention policy changes  (pkovar, 16:10:16)
  * We've just approved this spec. Thanks Doug for putting this
    together!  (pkovar, 16:10:27)
  * LINK: https://review.openstack.org/#/c/507629  (pkovar, 16:10:33)
  * Flagging deprecated releases  (pkovar, 16:10:52)
  * LINK: http://lists.openstack.org/pipermail/openstack-dev/2017-October/123381.html  (pkovar, 16:10:57)

* Branches vs "everything in master"  (pkovar, 16:13:28)
  * Common Install Guide content should be branched/split up
    per-release?  (pkovar, 16:13:50)
  * LINK: https://review.openstack.org/#/c/516523/  (pkovar, 16:16:20)

* OpenStack Summit Sydney  (pkovar, 16:30:42)
  * Quite a few docs-related sessions scheduled  (pkovar, 16:30:55)
  * Installation Guides and Tutorials Updates and Testing  (pkovar, 16:31:03)
  * LINK: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20448/installation-guides-and-tutorials-updates-and-testing  (pkovar, 16:31:09)
  * Docs - Project Update  (pkovar, 16:31:16)
  * LINK: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20373/docs-project-update  (pkovar, 16:31:21)
  * Docs/i18n - Project Onboarding  (pkovar, 16:31:26)
  * LINK: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20550/docsi18n-project-onboarding  (pkovar, 16:31:31)
  * Ops Guide Transition and Maintenance  (pkovar, 16:31:35)
  * LINK: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20446/ops-guide-transition-and-maintenance  (pkovar, 16:31:40)
  * Documentation and relnotes, what do you miss?  (pkovar, 16:31:45)
  * LINK: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20468/documentation-and-relnotes-what-do-you-miss  (pkovar, 16:31:50)
  * How Zanata powers upstream collaboration with OpenStack
    internationalization  (pkovar, 16:31:55)
  * LINK: https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20007/how-zanata-powers-upstream-collaboration-with-openstack-internationalization  (pkovar, 16:31:59)

* sitemap automation suggestions  (pkovar, 16:39:07)
  * LINK: http://lists.openstack.org/pipermail/openstack-dev/2017-October/123228.html  (pkovar, 16:39:11)

* Bug Triage Team  (pkovar, 16:40:29)
  * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams  (pkovar, 16:40:34)

* PDF builds  (pkovar, 16:42:29)
  * LINK: https://review.openstack.org/#/c/509297/  (pkovar, 16:42:34)

* Open discussion  (pkovar, 16:45:05)
  * LINK: https://review.openstack.org/#/c/512136/  (chason, 16:46:13)
  * cores, if you are available, please review  (pkovar, 16:47:51)
  * LINK: https://review.openstack.org/#/c/512136/  (pkovar, 16:47:59)
  * LINK: https://docs.openstack.org/pike/deploy/ the OSA link is still
    not working for me  (eumel8, 16:50:23)



Meeting ended at 16:55:59 UTC.



People present (lines said)
---

* pkovar (116)
* AJaeger (17)
* chason (13)
* eumel8 (6)
* ianychoi (4)
* openstack (3)



Generated by `MeetBot`_ 0.1.4



[openstack-dev] [kolla] Cancelling meeting 8 Nov

2017-11-01 Thread Michał Jastrzębski
Have a good summit everyone!



Re: [openstack-dev] [ironic] testing ansible-deploy in gates

2017-11-01 Thread Dmitry Tantsur

On 10/31/2017 07:09 PM, Pavlo Shchelokovskyy wrote:

> Hi all,
>
> as we agreed at the PTG, we are about to pull the ansible-deploy interface
> from ironic-staging-drivers to ironic. We obviously need to test it on the
> gates too, in a non-voting mode like the snmp and redfish ones.
>
> This raises a couple of questions/concerns:
>
> 1. Testing hardware types on gates.
> This is the first? interface that does not have any "classic" driver
> associated with it. All our devstack setup logic is currently based on
> classic drivers instead of hardware types, in particular all the
> "is_deployed_by" functions and logic depending on them.
> As we are about to deprecate the classic drivers altogether and eventually
> remove them, we ought to start moving our setup and testing procedures to
> hardware types as well.
> (another interesting point would be how much effort we need to adapt all
> our unit tests to use hw types instead of 'fake' and other classic
> drivers...)


++ to moving to hardware types. As to 'fake', there is 'fake-hardware', which 
does roughly the same.




> 2. Deploy ramdisk image to use.
> The current job in staging drivers does a small rebuild of the tinyipa
> image during deploy. I'd like to avoid that as much as possible, so I
> propose adding all the logic that is there to the default build options and
> scripts of the tinyipa build. This includes installing an SSH server,
> enabling SSH access to the ramdisk, and some small mangling of files and
> file links.


++

> A separate question would be SSH keys. We could avoid baking them into the
> image and generate them anew each time, but that would still require an
> image rebuild on (each?) devstack run. Or we could generate them once, bake
> the public key into the image, and publish the private key to tarballs.o.o,
> so it could be re-used by IPA scripts and jobs to build fresh images on
> merge and during tests. There are surely certain security considerations to
> such an approach, but as tinyipa is targeted at testing (virtual)
> environments and not production, I suppose we could probably neglect them.


Mmmm, I'm pretty sure somebody will try to use our published images in
production, even if we recommend against it. How long does the image
repacking take? It should not be hard to inject one file into an image..
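
For example (a sketch, untested; paths and names are placeholders), a single
file can be appended to an existing tinyipa ramdisk without a full rebuild,
since the kernel accepts concatenated gzipped cpio archives:

  mkdir -p newroot/home/tc/.ssh
  cp id_rsa.pub newroot/home/tc/.ssh/authorized_keys
  (cd newroot && find . | cpio -o -H newc | gzip) >> tinyipa.gz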




> WDYT?
>
> Another aspect of this is (as we agreed) we would need to move all the
> 'imagebuild' folder content from the IPA repo to a separate repo, and
> devise a way to use this repo in our devstack plugin.


I've started that, but I don't have time to continue. Any help is appreciated :)



> I'm eager to hear your thoughts and comments.
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com








Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-11-01 Thread Sam P
Hi Boris,

 Thanks for the comment.
 Sorry for the misleading context in the etherpad. It sounds like Eris
will do everything, but that is not the case.
 I think I need to add more details to the etherpad (I will do that).
 The short answer is that we are not into "reinventing wheels"; we will
use all possible existing tools.
 # Gautam and I are working on a document about what the gaps are, and
which tool should be used for which purpose and why.
 # I think that would be the long answer, and we definitely need your
feedback on this.
 # I will share that here..

 As for Rally,
  we are actually using Rally in Eris. Here is the PoC we are working on (WIP).
 # At this point, the PoC is in its very early stage. Not all the
concepts are implemented yet..
  code: https://github.com/gautamdivgi/eris
  doc: https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/22872097/Extreme+Testing+Demo

 You may find a lot more docs about Eris here:
 https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/13393034/Eris+-+Extreme+Testing+Framework+for+OpenStack

 For example, we are considering:
 Rally: for control-plane load injection (the current PoC doesn't use Rally)
 Shaker: for data-plane load injection
 os-faults: fault injection (a lot of work needs to be done here)

 Here is one example:
 Eris focuses on realistic outage scenarios such as SDN controller
failure, storage back-end failure, infra L2/L3 switch failure, etc.
 If those are vendor-specific appliances, then how should we do
failure injection and metrics gathering?
 Eris will provide a plugin mechanism to plug in vendor drivers (provided
by the vendor).
 In testing, Eris will use Rally and Shaker for load generation, inject
deterministic/random HW/SW failures in those appliances, and gather the
metrics. Then Eris (or Rally, or any other tool, or a mix of them) can
analyze the results and create the final report.

 Sorry for ranting.

--- Regards,
Sampath



On Wed, Nov 1, 2017 at 4:02 PM, Boris Pavlovic  wrote:
> Sam,
>
>>  Etherpad: https://etherpad.openstack.org/p/SYD-extreme-testing
>
>
> I really don't want to sound like a person that says "use Rally, my best
> ever project, blah blah" and other BS.
> I think that the "reinventing wheels" approach is how humanity evolves,
> and that's why I like this effort in any case.
>
> But really, I read carefully etherpad and I really see in description of
> Eris just plain Rally as is:
>
> - Rally allows you to create tests as YAML
> - Rally allows you to inject in various actions during the load (Rally
> Hooks) which makes it easy to do chaos and other kind of testing
> - Rally is pluggable and you can write even your own Runners (scenario
> executors) that will generate load pattern that you need
> - Rally has SLA plugins (that can deeply analyze the results of test
> cases) and say whether they pass or not
> - We are working on feature that allows you to mix different workloads in
> parallel (and generate more realistic load)
> -.
>
> So it would be really nice if you could share the gaps you faced that are
> blocking you from using Rally directly..
>
> Thanks!
>
> Best regards,
> Boris Pavlovic
>
>
> On Tue, Oct 31, 2017 at 10:50 PM, Sam P  wrote:
>>
>> Hi All,
>>
>>  Sending out a gentle reminder of Sydney Summit Forum Session
>> regarding this topic.
>>
>>  Extreme/Destructive Testing
>>  Tuesday, November 7, 1:50pm-2:30pm
>>  Sydney Convention and Exhibition Centre - Level 4 - C4.11
>>
>> [https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20470/extremedestructive-testing]
>>  Etherpad: https://etherpad.openstack.org/p/SYD-extreme-testing
>>
>>  Your participation in this session would be greatly appreciated.
>> --- Regards,
>> Sampath
>>
>>
>>
>> On Mon, Aug 14, 2017 at 11:43 PM, Tim Bell  wrote:
>> > +1 for Boris’ suggestion. Many of us use Rally to probe our clouds and
>> > have
>> > significant tooling behind it to integrate with local availability
>> > reporting
>> > and trouble ticketing systems. It would be much easier to deploy new
>> > functionality such as you propose if it was integrated into an existing
>> > project framework (such as Rally).
>> >
>> >
>> >
>> > Tim
>> >
>> >
>> >
>> > From: Boris Pavlovic 
>> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > Date: Monday, 14 August 2017 at 12:57
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > Cc: openstack-operators 
>> > Subject: Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack
>> > Extreme
>> > Testing
>> >
>> >
>> >
>> > Sam,
>> >
>> >
>> >
>> > Seems like a good plan and huge topic ;)
>> >
>> >
>> >
>> > I would as well suggest to take a look at the similar efforts in
>> > OpenStack:
>> >
>> > - Failure injection: https://github.com/openstack/os-faults
>> >
>> > - Rally Hooks Mechanism (to inject 

[openstack-dev] Logging format: let's discuss a bit about default format, format configuration and so on

2017-11-01 Thread Cédric Jeanneret
Dear Stackers,

While working on my locally deployed OpenStack (Pike using TripleO), I
was struggling a bit with the logging part. Currently, all logs are
pushed to per-service files, in the standard format "one line per
entry", plain flat text.

It's nice, but if one wants to push and index those logs in an ELK stack,
the current default format isn't really good.

After some discussions about oslo.log, it appears it provides a nice
JSONFormatter handler¹ one might want to use for each (python) service
using the oslo central library.

A JSON format is really cool, as it's easy for machines to parse, and an
entry can span multiple lines without any issue - this is especially
important for stack traces, as their output is multi-line without any real
way to have a common delimiter, so that we can re-format it and feed it
to any log parser (logstash, fluentd, …).

After some more talks, oslo.log will not provide a unified interface to
output all received logs as JSON - this makes sense, as that would mean
"rewrite almost all the python logging management interface"², and that's
pretty useless, since (all?) services have their own "logging.conf" file.
That said… to the main purpose of that mail:

- Default format for logs
A first question would be: "are we all OK with the default output format?"
- I'm pretty sure "humans" are happy with it, as it's really
convenient to read and grep. But on a "standard" OpenStack deploy, I'm
pretty sure one does not have only one controller, one ceph node and one
compute. Hence comes log centralization, and with that, log
indexing and processing.

For that, one might argue "I'm using plain files on my logger, and
grep-ing -r in them". That's a way to do things, and for that, plain,
flat logs are great.

But… I'm pretty sure I'm not the only one wanting to use some kind of
ELK cluster for that kind of purpose. So the right question is: what
about switching the default log format to JSON? For my part, I don't see
"cons", only "pros", but my judgment is of course biased, as I'm "alone
in my corner". But what about you, Community?

- Provide a way to configure the output format/handler
While poking around in the puppet modules code, I didn't find any way to
set the output handler for the logs. For example, in puppet-nova³ we can
set a lot of things, but not the useful handler for the output.

It would be really cool to get, for each puppet module, the capability
to set the handler so that one can just push some stuff in hiera, and
Voilà, we have JSON logs.

Doing so would allow people to choose between the default (current)
output and something more "computable".

Of course, either proposal will require a nice code change in all puppet
modules (add a new parameter for the foo::logging class, and use that
new param in the configuration file, and so on), but at least people
will be able to actually chose.

So, before opening an issue on each launchpad project (that would be…
long), I'd rather open the discussion here and, eventually, come to
some nice, acceptable and accepted solution that would make the
OpenStack Community happy :).

Any thoughts?

Thank you for your attention, feedback and wonderful support for that
monster project :).

Cheers,

C.


¹
https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L166-L235
²
http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2017-11-01.log.html#t2017-11-01T13:23:14
³ https://github.com/openstack/puppet-nova/blob/master/manifests/logging.pp


-- 
Cédric Jeanneret
Senior Linux System Administrator
Infrastructure Solutions

Camptocamp SA
PSE-A / EPFL, 1015 Lausanne

Phone: +41 21 619 10 32
Office: +41 21 619 10 02
Email: cedric.jeanne...@camptocamp.com






Re: [openstack-dev] [EXTERNAL] Re: [TripleO] roles_data.yaml equivalent in containers

2017-11-01 Thread Abhishek Kane
Hi Steve,

I see that the docker/services templates are importing tags from the
puppet/services ones, as shown in:
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/sahara-api.yaml#L53
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/sahara-api.yaml#L64
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/sahara-api.yaml#L74
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/sahara-api.yaml#L76

It looks like an indirect call to the puppet services.

Currently I want to execute step_config on the controller physical host:
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/veritas-hyperscale-controller.yaml#L98

And set service_name to call the service_config_settings in different
containers:
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/veritas-hyperscale-controller.yaml#L100

So, instead of that redirection from the docker service to the puppet
service, I copied the veritas-hyperscale-controller.yaml file to
docker/services and added a new environment file in services-docker to call
it.
http://paste.openstack.org/show/625211/
http://paste.openstack.org/show/625209/

This did not work, though. Do I need to call the puppet service indirectly
via the docker service?

Thanks,
Abhishek



On 10/30/17, 9:35 PM, "Abhishek Kane"  wrote:

Hi Steven,

I was out of town and hence couldn’t reply to the email.
I will take a look at the examples you have shared and get back with the
results tomorrow.

Thanks,
Abhishek

On 10/25/17, 2:21 PM, "Steven Hardy"  wrote:

On Wed, Oct 25, 2017 at 6:41 AM, Abhishek Kane
 wrote:
>
> Hi,
>
>
>
> In THT I have an environment file and corresponding puppet service 
for Veritas HyperScale.
>
> 
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>
> 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/veritas-hyperscale-controller.yaml
>
>
>
> This service needs a rabbitmq user; the hook for it is
“veritas_hyperscale::hs_rabbitmq”-
>
> 
https://github.com/openstack/puppet-tripleo/blob/master/manifests/profile/base/rabbitmq.pp#L172
>
>
>
> In order to configure Veritas HyperScale, I add
“OS::TripleO::Services::VRTSHyperScale” to the roles_data.yaml file and use
the following command-
>
>
>
> # openstack overcloud deploy --templates -r 
/home/stack/roles_data.yaml -e 
/usr/share/openstack-tripleo-heat-templates/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
 -e 
/usr/share/openstack-tripleo-heat-templates/environments/veritas-hyperscale/cinder-veritas-hyperscale-config.yaml
>
>
>
> This command sets “veritas_hyperscale_controller_enabled” to true in
hieradata and all the hooks get called.
>
>
>
> I am trying to containerize Veritas HyperScale services. I used the
following config file in quickstart-
>
> http://paste.openstack.org/show/624438/
>
>
>
> It has the environment files-
>
>   -e 
{{overcloud_templates_path}}/environments/veritas-hyperscale/cinder-veritas-hyperscale-config.yaml
>
>   -e 
{{overcloud_templates_path}}/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>
>
>
> But this itself doesn’t set “veritas_hyperscale_controller_enabled” 
to true in hieradata and veritas_hyperscale::hs_rabbitmq doesn’t get called.
>
> 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles_data.yaml#L56
>
>
>
>
>
> How do I add OS::TripleO::Services::VRTSHyperScale in case of 
containers?

the roles_data.yaml approach you used previously should still work in
the case of containers, but the service template referenced will be
different (the files linked above still refer to the puppet service
template)

e.g


https://github.com/openstack/tripleo-heat-templates/blob/master/environments/veritas-hyperscale/veritas-hyperscale-config.yaml#L18

defines:

OS::TripleO::Services::VRTSHyperScale:
../../puppet/services/veritas-hyperscale-controller.yaml

Which overrides this default mapping to OS::Heat::None:


https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-resource-registry-puppet.j2.yaml#L297

For containerized services, there are different resource_registry
mappings that refer to 

[openstack-dev] [Openstack][Ceilometer] interface statistics for SR-IOV based VM

2017-11-01 Thread Jai Singh Rana
Hi,

Does Ceilometer support metrics for the resource type
'instance_network_interface' for a VM based on an SR-IOV-capable NIC?

I have a Pike-based setup, but I am unable to get these statistics with an
SR-IOV NIC with vnic_type=direct for the VM.

If it does, how do I configure Ceilometer to get these statistics? If not,
is there any proposal or roadmap to support it in the future?

Thanks,
Jai



[openstack-dev] [tricircle]Distinguish the direction of requests

2017-11-01 Thread XuZhuang
Hello,



I have some questions about how to distinguish the direction of requests
between the local neutron and the central neutron.




Here is the preliminary plan:




1. How to distinguish the requests in the central neutron

We can add a filter in neutron/…./etc/api-paste.ini. Using this filter we can
get some values about the source.

But the question is that the filter-loading process is in Neutron. How could
we add a filter without changing Neutron? Could we change Neutron?




2. How to add a signal to the requests

The common.client module in Tricircle is responsible for sending requests,
so we can add a signal in the request header, and the central plugin will
get this signal using the filter.
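
To illustrate, a minimal version could look like the sketch below. All names
here (the module path, class and header) are hypothetical, not existing
Tricircle code:

# hypothetical middleware: records where a request came from
import webob.dec


class RequestSourceFilter(object):
    def __init__(self, app):
        self.app = app

    @classmethod
    def factory(cls, global_conf, **local_conf):
        # paste.filter_factory entry point
        return lambda app: cls(app)

    @webob.dec.wsgify
    def __call__(self, req):
        # the sender (e.g. tricircle.common.client) would set this header
        req.environ['tricircle.source'] = req.headers.get(
            'X-Tricircle-Source', 'unknown')
        return self.app

with a matching api-paste.ini fragment:

[filter:request_source]
paste.filter_factory = tricircle.common.request_source:RequestSourceFilter.factory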




Best Regards

Zhuangzhuang Xu (Lyman Xu)


[openstack-dev] [Cinder] No meeting this week

2017-11-01 Thread Jay Bryant
Just a reminder that we do not have a meeting this week as a number of
people will be traveling.

See you in Sydney!

Jay


Re: [openstack-dev] [tripleo] Tripleo CI Community meeting tomorrow

2017-11-01 Thread Arx Cruz
The idea is to have the community meeting at the end of our sprint. So far,
we have been running two-week sprints, but we are thinking about flexible
sprints, depending on the scope of our work. For now, yes, it's regular, but
once we adopt flexible sprints, it will change.

Kind regards,
Arx Cruz

On Tue, 24 Oct 2017 at 17:56 Emilien Macchi  wrote:

> On Tue, Oct 24, 2017 at 8:50 AM, Arx Cruz  wrote:
> > Hello
> >
> > We are going to have a TripleO CI Community meeting tomorrow 10/25/2017
> at 1
> > pm UTC time.
> > The meeting is going to happen on BlueJeans [1] and also on IRC on
> #tripleo
> > channel.
> >
> > After that, we will hold Office Hours starting at 4PM UTC in case someone
> > from community have any questions related to CI.
> >
> > Hope to see you there.
> >
> > 1 - https://bluejeans.com/7071866728
> >
> >
> > Kind regards,
> > Arx Cruz
>
> quick question, is it a regular time or will it change in the future?
>
> (so I can update my calendar)
>
> Thanks,
> --
> Emilien Macchi
>


Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-11-01 Thread Derek Higgins
On 30 October 2017 at 15:16, Julia Kreger  wrote:
...
>>> When I tried it I got this
>>> [  370.704896] dracut-initqueue[387]: Warning: iscsistart: Could not
>>> get list of targets from firmware.
>>>
>>> perhaps we could alter iscsistart to not complain if there are no
>>> targets attached and just continue, then simply always have
>>> rd.iscsi.firmware=1 in the kernel param regardless of storage type
>>
>
> For those that haven't been following IRC discussion, Derek was kind
> enough to submit a pull request to address this in dracut.

The relevant fix is here
https://github.com/dracutdevs/dracut/pull/298



[openstack-dev] [heat] No meeting for this and next week

2017-11-01 Thread Rico Lin
Hi team

As I will be on a flight to Sydney this Wednesday and on a flight back next
Wednesday, I won't be able to run meetings for these two weeks.
Any core who feels like hosting one (for whatever issues come up) is welcome
to; otherwise, see you in the third week.

-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin


Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-11-01 Thread Boris Pavlovic
Sam,

 Etherpad: https://etherpad.openstack.org/p/SYD-extreme-testing


I really don't want to sound like a person that says "use Rally, my best
ever project, blah blah" and other BS.
I think that the "reinventing wheels" approach is how humanity evolves, and
that's why I like this effort in any case.

But really, I read the etherpad carefully, and in the description of
Eris I really just see plain Rally as is:

- Rally allows you to create tests as YAML
- Rally allows you to inject in various actions during the load (Rally
Hooks) which makes it easy to do chaos and other kind of testing
- Rally is pluggable and you can write even your own Runners (scenario
executors) that will generate load pattern that you need
- Rally has SLA plugins (that can deeply analyze the results of test cases)
and say whether they pass or not
- We are working on feature that allows you to mix different workloads in
parallel (and generate more realistic load)
-.
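
For a concrete picture, a Rally task with a fault-injection hook looks
roughly like the sketch below (based on the Rally hooks docs; the scenario,
action string and counts are placeholders):

---
  NovaServers.boot_and_delete_server:
    -
      args:
        flavor:
          name: "m1.tiny"
        image:
          name: "cirros"
      runner:
        type: "constant"
        times: 20
        concurrency: 2
      hooks:
        - name: fault_injection
          args:
            action: "restart keystone service"
          trigger:
            name: event
            args:
              unit: iteration
              at: [10]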

So it would be really nice if you could share the gaps you faced that are
blocking you from using Rally directly..

Thanks!

Best regards,
Boris Pavlovic


On Tue, Oct 31, 2017 at 10:50 PM, Sam P  wrote:

> Hi All,
>
>  Sending out a gentle reminder of Sydney Summit Forum Session
> regarding this topic.
>
>  Extreme/Destructive Testing
>  Tuesday, November 7, 1:50pm-2:30pm
>  Sydney Convention and Exhibition Centre - Level 4 - C4.11
>  [https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20470/extremedestructive-testing]
>  Etherpad: https://etherpad.openstack.org/p/SYD-extreme-testing
>
>  Your participation in this session would be greatly appreciated.
> --- Regards,
> Sampath
>
>
>
> On Mon, Aug 14, 2017 at 11:43 PM, Tim Bell  wrote:
> > +1 for Boris’ suggestion. Many of us use Rally to probe our clouds and
> have
> > significant tooling behind it to integrate with local availability
> reporting
> > and trouble ticketing systems. It would be much easier to deploy new
> > functionality such as you propose if it was integrated into an existing
> > project framework (such as Rally).
> >
> >
> >
> > Tim
> >
> >
> >
> > From: Boris Pavlovic 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Monday, 14 August 2017 at 12:57
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Cc: openstack-operators 
> > Subject: Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme
> > Testing
> >
> >
> >
> > Sam,
> >
> >
> >
> > Seems like a good plan and huge topic ;)
> >
> >
> >
> > I would as well suggest to take a look at the similar efforts in
> OpenStack:
> >
> > - Failure injection: https://github.com/openstack/os-faults
> >
> > - Rally Hooks Mechanism (to inject in rally scenarios failures):
> > https://rally.readthedocs.io/en/latest/plugins/implementation/hook_and_trigger_plugins.html
> >
> >
> >
> >
> >
> > Best regards,
> > Boris Pavlovic
> >
> >
> >
> >
> >
> > On Mon, Aug 14, 2017 at 2:35 AM, Sam P  wrote:
> >
> > Hi All,
> >
> > This is a follow up for OpenStack Extreme Testing session[1]
> > we did in MEX-ops-meetup.
> >
> > Quick intro for those who were not there:
> > In this work, we proposed to add new testing framework for openstack.
> > This framework will provides tool for create tests with destructive
> > scenarios which will check for High Availability, failover and
> > recovery of OpenStack cloud.
> > Please refer the link on top of the [1] for further details.
> >
> > Follow up:
> > We are planning periodic irc meeting and have an irc
> > channel for discussion. I will get back to you with those details soon.
> >
> > At that session, we did not have time to discuss last 3 items,
> > Reference architectures
> >  We are discussing about the reference architecture in [2].
> >
> > What sort of failures do you see today in your environment?
> >  Currently we are considering, service failures, backend services (mq,
> > DB, etc.) failures,
> >  Network sw failures..etc. To begin with the implementation, we are
> > considering to start with
> >  service failures. Please let us know what failures are more frequent
> > in your environment.
> >
> > Emulation/Simulation mechanisms, etc.
> >  Rather than doing actual scale, load, or performance tests, we are
> > thinking to build a emulation/simulation mechanism
> > to get the predictions or result of how will openstack behave on such
> > situations.
> > This interesting idea was proposed by the Gautam and need more
> > discussion on this.
> >
> > Please let us know you questions or comments.
> >
> > Request to Mike Perez:
> >  We discussed about synergies with openstack assertion tags and other
> > efforts to do similar testing in openstack.
> >  Could you please give some info or pointer of previous discussions.
> >
> > [1] 

Re: [openstack-dev] [neutron][networking-midonet] Core reviewers change proposal

2017-11-01 Thread Takashi Yamamoto
Hi,

On Fri, Oct 20, 2017 at 10:10 PM, Takashi Yamamoto
 wrote:
> Hi,
>
> Unless anyone objects, I'll remove the following people from the list
> of networking-midonet core reviewers. [1]
>
> - Joe Mills
> - Michael Micucci
>
> They made great contributions in the past (thank you!) but have not
> reviewed any patches in the last 6 months. [2]
>
> [1] https://review.openstack.org/#/admin/groups/607,members
> [2] http://stackalytics.com/report/contribution/networking-midonet/180

As I haven't received any concerns, I updated the members accordingly.



Re: [openstack-dev] [neutron][vpnaas] core reviewer proposal

2017-11-01 Thread Takashi Yamamoto
Hi,

On Fri, Oct 20, 2017 at 10:10 PM, Takashi Yamamoto
 wrote:
> Hi,
>
> Unless anyone objects, I'll add the following people to neutron-vpnaas
> core reviewers. [1]
>
> - Cao Xuan Hoang 
> - Akihiro Motoki 
> - Miguel Lavalle 
>
> Hoang is the most active contributor for the project these days.
>
> I don't bother to introduce Akihiro and Miguel as i guess everyone here
> knows them. :-)
>
> [1] https://review.openstack.org/#/admin/groups/502,members

As I haven't received any concerns, I updated the list accordingly.
