Re: [openstack-dev] [rally]404 on docker rallyforge/rally

2017-12-04 Thread Matthieu Simonin


- Mail original -
> De: "Swapnil Kulkarni" <cools...@gmail.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Lundi 4 Décembre 2017 12:57:36
> Objet: Re: [openstack-dev] [rally]404 on docker rallyforge/rally
> 
> On Mon, Dec 4, 2017 at 4:13 PM, Matthieu Simonin
> <matthieu.simo...@inria.fr> wrote:
> > Hi,
> >
> > Monday morning, and it seems that the Docker images for Rally aren't
> > reachable anymore.
> >
> > https://hub.docker.com/r/rallyforge/rally/
> >
> > Did I miss something? Is the doc up-to-date[1]?
> >
> > [1]:
> > http://rally.readthedocs.io/en/0.10.0/install_and_upgrade/install.html#rally-docker
> >
> > Matt
> >
> 
> 
> I think you need to refer to [1] for the updated docs, and [2] should
> help you with the rally-docker information, which might be useful for you.

Thanks Swapnil, but the namespace rallyforge[1] is now empty 
(or all images are private) on docker hub.

I have no problem building the images on my own, but I just wanted to make
sure that this is now the only way to use the Rally images.

If so, the docs will need to be updated to remove any reference to
rallyforge/rally.


[1] https://hub.docker.com/r/rallyforge/
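As an aside, "gone vs. private" can be checked programmatically. The sketch below is purely illustrative: it assumes the hub.docker.com v2 `repositories` REST endpoint, and the `rallyforge`/`rally` names are simply taken from this thread.

```python
# Hypothetical helper to check whether a Docker Hub repository is still
# publicly visible (assumption: the hub.docker.com v2 REST API, which
# returns 404 for repositories that are removed or private).
import urllib.error
import urllib.request


def hub_repo_url(namespace, repo):
    # Build the v2 API URL for a given namespace/repository pair.
    return "https://hub.docker.com/v2/repositories/%s/%s/" % (namespace, repo)


def repo_exists(namespace, repo):
    # True if the repository is publicly visible, False on a 404
    # (removed or private), re-raise anything else.
    try:
        with urllib.request.urlopen(hub_repo_url(namespace, repo)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

At the time of this thread, `repo_exists("rallyforge", "rally")` would presumably have returned False.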

Best,

Matthieu

> 
> [1] https://docs.openstack.org/rally/latest/
> [2]
> https://docs.openstack.org/rally/latest/install_and_upgrade/install.html#rally-docker
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally]404 on docker rallyforge/rally

2017-12-04 Thread Matthieu Simonin
Hi,

Monday morning, and it seems that the Docker images for Rally aren't reachable
anymore.

https://hub.docker.com/r/rallyforge/rally/

Did I miss something? Is the doc up-to-date[1]?

[1]: 
http://rally.readthedocs.io/en/0.10.0/install_and_upgrade/install.html#rally-docker

Matt



[openstack-dev] [FEMDC] Wed. 11. IRC Meeting 15:00 UTC

2017-10-10 Thread Matthieu Simonin
Hi all,

The next meeting is planned for tomorrow/today:

Wed. 11 Oct. 15:00 UTC

A draft agenda is available in the etherpad:

https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017
(line 1317 - at the end)

You are very welcome to amend it.

Best,

Matt

- Mail original -
> De: "Paul-Andre Raymond" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> , "OpenStack
> Operators" , "Openstack Users" 
> 
> Envoyé: Mercredi 27 Septembre 2017 16:56:52
> Objet: Re: [openstack-dev] [FEMDC] IRC Meeting today 15:00 UTC
> 
> Below is the link to the etherpad for our meeting.
> 
> 
> 
> On 9/27/17, 10:01 AM, "Paul-Andre Raymond" 
> wrote:
> 
> Dear all,
> 
> A gentle reminder for our meeting today (an hour from now).
> I believe today will be a short meeting.
> Draft agenda was prepared by our friends from INRIA at
> 
> https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017
> (line 1237)
> 
> Please feel free to add items.
> 
> Best,
> 
>  
> Paul-André
> --
>  
> 
> 
> 
> 



Re: [openstack-dev] [neutron][oslo.messaging][femdc]Topic names for every resource type RPC endpoint

2017-09-24 Thread Matthieu Simonin
Thanks Miguel for your feedback.

I'll definitely dig more into this.
Having a lot of messages broadcast to all the Neutron agents is not
something you want, especially in the context of FEMDC[1].

Best,

Matt

[1]: https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds

- Mail original -
> De: "Miguel Angel Ajo Pelayo" <majop...@redhat.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mercredi 20 Septembre 2017 11:15:12
> Objet: Re: [openstack-dev] [neutron][oslo.messaging][femdc]Topic names for 
> every resource type RPC endpoint
> 
> I wrote those lines.
> 
> At that time, I tried a publisher and a receiver at that scale. It was
> the receiver side that crashed trying to subscribe; the sender was
> completely fine.
> 
> Sadly I didn't keep the test examples, I should have stored them in github
> or something. It shouldn't be hard to replicate though if you follow the
> oslo_messaging docs.
> 
> 
> 
> On Wed, Sep 20, 2017 at 9:58 AM, Matthieu Simonin <matthieu.simo...@inria.fr
> > wrote:
> 
> > Hello,
> >
> > In the Neutron docs about RPCs and Callbacks system, it is said[1] :
> >
> > "With the underlying oslo_messaging support for dynamic topics on the
> > receiver
> > we cannot implement a per “resource type + resource id” topic, rabbitmq
> > seems
> > to handle 10000’s of topics without suffering, but creating 100’s of
> > oslo_messaging receivers on different topics seems to crash."
> >
> > I wonder if this statement still holds for the new transports supported in
> > oslo.messaging (e.g. Kafka, AMQP 1.0) or if it's more a design limitation.
> > I'm interested in any relevant docs/links/reviews on the "topic" :).
> >
> > Moreover, I'm curious to get an idea of how many different resources a
> > Neutron agent would have to manage, and thus how many oslo_messaging
> > receivers would be required (e.g. how many security groups a Neutron agent
> > has to manage?), at least as an order of magnitude.
> >
> > Best,
> >
> > Matt
> >
> >
> >
> > [1]: https://docs.openstack.org/neutron/latest/contributor/
> > internals/rpc_callbacks.html#topic-names-for-every-
> > resource-type-rpc-endpoint
> >
> 



[openstack-dev] [neutron][oslo.messaging][femdc]Topic names for every resource type RPC endpoint

2017-09-20 Thread Matthieu Simonin
Hello,

In the Neutron docs about RPCs and Callbacks system, it is said[1] :

"With the underlying oslo_messaging support for dynamic topics on the receiver
we cannot implement a per “resource type + resource id” topic, rabbitmq seems
to handle 10000’s of topics without suffering, but creating 100’s of
oslo_messaging receivers on different topics seems to crash."

I wonder if this statement still holds for the new transports supported in
oslo.messaging (e.g. Kafka, AMQP 1.0) or if it's more a design limitation.
I'm interested in any relevant docs/links/reviews on the "topic" :).

Moreover, I'm curious to get an idea of how many different resources a Neutron
agent would have to manage, and thus how many oslo_messaging receivers would be
required (e.g. how many security groups a Neutron agent has to manage?), at
least as an order of magnitude.

Best,

Matt



[1]: 
https://docs.openstack.org/neutron/latest/contributor/internals/rpc_callbacks.html#topic-names-for-every-resource-type-rpc-endpoint
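To make the order-of-magnitude concern concrete, here is a small, purely illustrative Python sketch. The `neutron-vo-<resource_class_name>-<version>` base format comes from the Neutron doc linked above [1]; the per-resource-id variant, the resource type names, and the counts are hypothetical.

```python
# Illustrative sketch of the receiver-count concern: one receiver per
# resource *type* (roughly what the Neutron callbacks doc describes)
# versus one receiver per resource *instance* (the rejected design).
def topic(resource_type, version, resource_id=None):
    # Base format from the Neutron doc; the resource_id suffix is a
    # hypothetical extension for per-instance topics.
    base = "neutron-vo-%s-%s" % (resource_type, version)
    return base if resource_id is None else "%s-%s" % (base, resource_id)


# One receiver per resource type: the count stays small and fixed.
types = {"QosPolicy": "1.0", "SecurityGroup": "1.0", "Port": "1.1"}
per_type = [topic(t, v) for t, v in sorted(types.items())]

# One receiver per resource instance: with, say, 500 security groups
# handled by an agent, the receiver count grows with the deployment.
per_instance = [topic("SecurityGroup", "1.0", i) for i in range(500)]

print(len(per_type))       # 3
print(len(per_instance))   # 500
```

The point of the sketch is only that the per-instance design multiplies receivers by the number of resources, which is exactly the "100's of receivers" regime the quoted doc warns about.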



Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-08-08 Thread Matthieu Simonin
Hello,

As discussed in the last meeting, I started to formalize the content of the
etherpad in the performance WG documentation:

https://review.openstack.org/#/c/491818/

I've set some co-authorship according to what I saw in the etherpad. I guess 
this list can be shrunk/expanded on demand :)

Best,

Matt


- Mail original -
> De: "Matthieu Simonin" <matthieu.simo...@inria.fr>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Jeudi 6 Juillet 2017 16:31:46
> Objet: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal 
> for message bus analysis
> 
> - Mail original -
> > De: "Paul-Andre Raymond" <paul-andre.raym...@nexius.com>
> > À: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Envoyé: Mercredi 5 Juillet 2017 21:48:29
> > Objet: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal
> > for message bus analysis
> > 
> > Thank you Matt,
> > 
> > This is very insightful. It helps.
> > 
> > The second link did not work for me.
> 
> Oh yeah, that's probably because of the ongoing docs migration[1].
> 
> [1]:
> http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html
> 
> > 
> > In the presentation, it mentioned that the load consisted of “Boot and
> > List” operations through Rally.
> > Did I understand well?
> 
> Yes.
> 
> > Were those hitting the Openstack UI?
> 
> Rally benchmarks put load on the various APIs and gather some metrics about
> the execution (time, failures, ...).
> 
> > Was keystone involved? Was it using Fernet or another sort of token?
> 
> Keystone is indeed involved, and the token format at that time was UUID.
> 
> > 
> > Intuitively, I expected
> > - the big driver for performance on mariadb would be authentication tokens.
> > And fernet would allow to control that.
> > - The big driver for performance on rabbitmq would be ceilometer, and it is
> > not clear from your presentation that any telemetry data hit the message
> > queue.
> 
> Telemetry wasn't set up, I guess this would have killed Rabbit earlier in the
> tests.
> The split between notification and RPC messaging is interesting in this area.
> 
> Bye,
> 
> Matt
> 
> > 
> > Regards,
> > 
> > Paul-Andre
> > 
> > 
> > 
> > -Original Message-
> > From: Matthieu Simonin <matthieu.simo...@inria.fr>
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Date: Saturday, July 1, 2017 at 4:42 AM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > proposal
> > for message bus analysis
> > 
> > Hi Paul-André,
> > 
> > This was without ceilometer. Nova + Neutron were consuming a lot of
> > connections.
> > Some charts are available in the Barcelona presentation[1] and the
> > performance docs[2].
> > In the latter you'll find some telemetry related tests.
> > 
> > [1]:
> > 
> > https://www.openstack.org/assets/presentation-media/Chasing-1000-nodes-scale.pdf
> > [2]: https://docs.openstack.org/developer/performance-docs/
> > 
> > Best,
> > 
> > Matt
> > 
> > - Mail original -
> > > De: "Paul-Andre Raymond" <paul-andre.raym...@nexius.com>
> > > À: "OpenStack Development Mailing List (not for usage questions)"
> > > <openstack-dev@lists.openstack.org>
> > > Envoyé: Vendredi 30 Juin 2017 18:42:04
> > > Objet: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > > proposal for message bus analysis
> > > 
> > > Hi Matthieu,
> > > 
> > > You mentioned 15000 connections with 1000 compute nodes.
> > > Was that mostly Nova? Was ceilometer involved?
> > > I would be curious to know how much AMQP traffic is Control
> > > related
> > > (e.g. spinning up VMs) vs how much is telemetry related in a
> > > typical
> > > openstack deployment.
> > > Do we know that?
> >   

Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-14 Thread Matthieu Simonin
Hello,

If it helps, we are building a subset of the Kolla images on a regular basis.
They are pushed to Docker Hub under the beyondtheclouds namespace [1].

stable/ocata images should be up-to-date; master is tagged latest.
Nevertheless, some caveats of relying on those tags are mentioned in this
thread [2].

[1]: https://hub.docker.com/u/beyondtheclouds/
[2]: http://lists.openstack.org/pipermail/openstack-dev/2017-April/115391.html

Best,

Matt

- Mail original -
> De: "Michał Jastrzębski" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Dimanche 9 Juillet 2017 00:48:56
> Objet: Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, 
> Glance etc for oVirt
> 
> Hello,
> 
> Unfortunately we still don't have proper dockerhub uploading
> mechanism, that's in progress. For now you need to build your own
> images, here's doc for that:
> https://docs.openstack.org/kolla/latest/image-building.html
> Also feel free to join us on #openstack-kolla irc if you have further
> questions.
> 
> Cheers,
> Michal
> 
> On 8 July 2017 at 11:03, Leni Kadali Mutungi  wrote:
> > Hello all.
> >
> > I am trying to use the Cinder and Glance Docker images you provide in
> > relation to the setup here:
> > http://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/
> >
> > I tried to run `sudo docker pull
> > kollaglue/centos-rdo-glance-registry:latest` and got an error of not
> > found. I thought that it could be possible to use a Dockerfile to spin up
> > an equivalent of it, so I would like some guidance on how to go about
> > doing that. Best practices and so on. Alternatively, if it is
> > possible, may you point me in the direction of the equivalent images
> > mentioned in the guides if they have been superseded by something else?
> > Thanks.
> >
> > CCing the oVirt users and devel lists to see if anyone has experienced
> > something similar.
> >
> > --
> > - Warm regards
> > Leni Kadali Mutungi
> >



Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-07-06 Thread Matthieu Simonin
- Mail original -
> De: "Paul-Andre Raymond" <paul-andre.raym...@nexius.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mercredi 5 Juillet 2017 21:48:29
> Objet: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal 
> for message bus analysis
> 
> Thank you Matt,
> 
> This is very insightful. It helps.
> 
> The second link did not work for me.

Oh yeah, that's probably because of the ongoing docs migration[1].

[1]: 
http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html

> 
> In the presentation, it mentioned that the load consisted of “Boot and List”
> operations through Rally.
> Did I understand well?

Yes.

> Were those hitting the Openstack UI?

Rally benchmarks put load on the various APIs and gather some metrics about
the execution (time, failures, ...).

> Was keystone involved? Was it using Fernet or another sort of token?

Keystone is indeed involved, and the token format at that time was UUID.

> 
> Intuitively, I expected
> - the big driver for performance on mariadb would be authentication tokens.
> And fernet would allow to control that.
> - The big driver for performance on rabbitmq would be ceilometer, and it is
> not clear from your presentation that any telemetry data hit the message
> queue.

Telemetry wasn't set up, I guess this would have killed Rabbit earlier in the 
tests.
The split between notification and RPC messaging is interesting in this area.

Bye,

Matt

> 
> Regards,
> 
> Paul-Andre
> 
> 
> 
> -Original Message-
> From: Matthieu Simonin <matthieu.simo...@inria.fr>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Saturday, July 1, 2017 at 4:42 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal
> for message bus analysis
> 
> Hi Paul-André,
> 
> This was without ceilometer. Nova + Neutron were consuming a lot of
> connections.
> Some charts are available in the Barcelona presentation[1] and the
> performance docs[2].
> In the latter you'll find some telemetry related tests.
> 
> [1]:
> 
> https://www.openstack.org/assets/presentation-media/Chasing-1000-nodes-scale.pdf
> [2]: https://docs.openstack.org/developer/performance-docs/
> 
> Best,
> 
> Matt
> 
> - Mail original -
> > De: "Paul-Andre Raymond" <paul-andre.raym...@nexius.com>
> > À: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Envoyé: Vendredi 30 Juin 2017 18:42:04
> > Objet: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > proposal for message bus analysis
> > 
> > Hi Matthieu,
> > 
> > You mentioned 15000 connections with 1000 compute nodes.
> > Was that mostly Nova? Was ceilometer involved?
> > I would be curious to know how much AMQP traffic is Control
> > related
> >     (e.g. spinning up VMs) vs how much is telemetry related in a
> > typical
> > openstack deployment.
> > Do we know that?
> > 
> > I have also left some comments in the doc.
> > 
> > Paul-Andre
> > 
> > 
> > -Original Message-
> > From: Matthieu Simonin <matthieu.simo...@inria.fr>
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)"
> > <openstack-dev@lists.openstack.org>
> > Date: Wednesday, June 21, 2017 at 6:54 PM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > proposal for message bus analysis
> > 
> > Hi Ken,
> > 
> > Thanks for starting this !
> > I've made a first pass on the epad and left some notes and
> > questions
> > there.
> > 
> > Best,
> > 
> > Matthieu
> > - Mail original 

Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-07-01 Thread Matthieu Simonin
Hi Paul-André,

This was without ceilometer. Nova + Neutron were consuming a lot of connections.
Some charts are available in the Barcelona presentation[1] and the performance 
docs[2].
In the latter you'll find some telemetry related tests.

[1]: 
https://www.openstack.org/assets/presentation-media/Chasing-1000-nodes-scale.pdf
[2]: https://docs.openstack.org/developer/performance-docs/

Best,

Matt

- Mail original -
> De: "Paul-Andre Raymond" <paul-andre.raym...@nexius.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Vendredi 30 Juin 2017 18:42:04
> Objet: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal 
> for message bus analysis
> 
> Hi Matthieu,
> 
> You mentioned 15000 connections with 1000 compute nodes.
> Was that mostly Nova? Was ceilometer involved?
> I would be curious to know how much AMQP traffic is Control related
> (e.g. spinning up VMs) vs how much is telemetry related in a typical
> openstack deployment.
> Do we know that?
> 
> I have also left some comments in the doc.
> 
>     Paul-Andre
> 
> 
> -Original Message-
> From: Matthieu Simonin <matthieu.simo...@inria.fr>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Wednesday, June 21, 2017 at 6:54 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> proposal for message bus analysis
> 
> Hi Ken,
> 
> Thanks for starting this !
> I've made a first pass on the epad and left some notes and questions
> there.
> 
> Best,
> 
> Matthieu
> - Mail original -
> > De: "Ken Giusti" <kgiu...@gmail.com>
> > À: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Envoyé: Mercredi 21 Juin 2017 15:23:26
> > Objet: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > proposal for message bus analysis
> > 
> > Hi All,
> > 
> > Andy and I have taken a stab at defining some test scenarios for analyzing
> > the different message bus technologies:
> > 
> > https://etherpad.openstack.org/p/1BGhFHDIoi
> > 
> > We've started with tests for just the oslo.messaging layer to
> > analyze
> > throughput and latency as the number of message bus clients - and
> > the bus
> > itself - scale out.
> > 
> > The next step will be to define messaging oriented test scenarios
> > for an
> > openstack deployment.  We've started by enumerating a few of the
> > tools,
> > topologies, and fault conditions that need to be covered.
> > 
> > Let's use this epad as a starting point for analyzing messaging -
> > please
> > feel free to contribute, question, and criticize :)
> > 
> > thanks,
> > 
> > 
> > 
> > --
> > Ken Giusti  (kgiu...@gmail.com)
> > 
> > 
> 
> 



Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Matthieu Simonin
Hi Ken,

Thanks for starting this !
I've made a first pass on the epad and left some notes and questions there.

Best,

Matthieu
- Mail original -
> De: "Ken Giusti" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Mercredi 21 Juin 2017 15:23:26
> Objet: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for
> message bus analysis
> 
> Hi All,
> 
> Andy and I have taken a stab at defining some test scenarios for analyzing
> the different message bus technologies:
> 
> https://etherpad.openstack.org/p/1BGhFHDIoi
> 
> We've started with tests for just the oslo.messaging layer to analyze
> throughput and latency as the number of message bus clients - and the bus
> itself - scale out.
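As a rough illustration of the throughput/latency measurement described above, the following sketch uses a plain in-process Python queue as a stand-in for the message bus; the real scenarios would of course target oslo.messaging drivers, and the client and message counts here are arbitrary.

```python
# Illustrative harness: N producer "clients" push timestamped messages
# onto a shared queue (the stand-in bus); a consumer drains them and
# records per-message latency. Returns (throughput msg/s, worst latency s).
import queue
import threading
import time


def run_clients(n_clients, msgs_per_client):
    bus = queue.Queue()
    latencies = []
    total = n_clients * msgs_per_client

    def consumer():
        # Single receiver: pop every message, measure queueing delay.
        for _ in range(total):
            sent_at = bus.get()
            latencies.append(time.perf_counter() - sent_at)

    def producer():
        for _ in range(msgs_per_client):
            bus.put(time.perf_counter())

    c = threading.Thread(target=consumer)
    c.start()
    producers = [threading.Thread(target=producer) for _ in range(n_clients)]
    start = time.perf_counter()
    for p in producers:
        p.start()
    for p in producers:
        p.join()
    c.join()
    elapsed = time.perf_counter() - start
    return total / elapsed, max(latencies)


throughput, worst_latency = run_clients(n_clients=4, msgs_per_client=1000)
```

Scaling `n_clients` up (and swapping the queue for a real transport) is the knob the test scenarios above vary.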
> 
> The next step will be to define messaging oriented test scenarios for an
> openstack deployment.  We've started by enumerating a few of the tools,
> topologies, and fault conditions that need to be covered.
> 
> Let's use this epad as a starting point for analyzing messaging - please
> feel free to contribute, question, and criticize :)
> 
> thanks,
> 
> 
> 
> --
> Ken Giusti  (kgiu...@gmail.com)
> 



Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-06-18 Thread Matthieu Simonin


- Mail original -
> De: "Thierry Carrez" 
> À: openstack-dev@lists.openstack.org
> Envoyé: Lundi 22 Mai 2017 11:02:21
> Objet: Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master
> 
> Mike Bayer wrote:
> > On 05/18/2017 06:13 PM, Adrian Turjak wrote:
> >>
> >> So, specifically in the realm of Keystone, since we are using sqlalchemy
> >> we already have Postgresql support, and since Cockroachdb does talk
> >> Postgres it shouldn't be too hard to back Keystone with it. At that
> >> stage you have a Keystone DB that could be multi-region, multi-master,
> >> consistent, and mostly impervious to disaster. Is that not the holy
> >> grail for a service like Keystone? Combine that with fernet tokens and
> >> suddenly Keystone becomes a service you can't really kill, and can
> >> mostly forget about.
> > 
> > So this is exhibit A for why I think keeping some level of "this might
> > need to work on other databases" within a codebase is always a great
> > idea even if you are not actively supporting other DBs at the moment.
> > Even if Openstack dumped Postgresql completely, I'd not take the
> > rudimental PG-related utilities out of oslo.db nor would I rename all
> > the "mysql_XYZ" facilities to be "XYZ".
> > [...]
> Yes, that sounds like another reason why we'd not want to aggressively
> contract to the MySQL family of databases. At the very least, before we
> do that, we should experiment with CockroachDB and see how reasonable it
> would be to use in an OpenStack context. It might (might) hit a sweet
> spot between performance, durability, database decentralization and
> keeping SQL advanced features -- I'd hate it if we discovered that too late.

The FEMDC working group[1] seems like a good place to have deeper discussion
and collaboration about CockroachDB in OpenStack.
Some discussions about CockroachDB have already occurred there, and some people
are ready to dive into it.

It would be great if some Keystone/SQLAlchemy/interested people
could join to help bootstrap the project.

[1]: https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
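As background for the SQLAlchemy point quoted above: since CockroachDB speaks the PostgreSQL wire protocol, pointing Keystone at it should in principle only require changing the [database] connection string. This fragment is a hypothetical sketch — the host name and credentials are made up; 26257 is CockroachDB's default SQL port.

```ini
[database]
# Hypothetical example: reuse the existing PostgreSQL/psycopg2 dialect,
# pointed at a CockroachDB node (default SQL port 26257).
connection = postgresql+psycopg2://keystone:secret@cockroach-host:26257/keystone
```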

--
Matt (msimonin)

ps: next meeting is next wed. 06/21 15:00UTC (#openstack-meeting).

> 
> --
> Thierry Carrez (ttx)
> 



Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] osprofiler in paste deploy files

2017-05-30 Thread Matthieu Simonin


- Mail original -
> De: "Lance Bragstad" <lbrags...@gmail.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mardi 30 Mai 2017 16:33:17
> Objet: Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] 
> osprofiler in paste deploy files
> 
> On Mon, May 29, 2017 at 4:08 AM, Matthieu Simonin <matthieu.simo...@inria.fr
> > wrote:
> 
> > Hello,
> >
> > I'd like to get more insight on OSProfiler support in paste-deploy files,
> > as it seems inconsistent across projects.
> > As a result, the way you can enable it on Kolla side differs. Here are
> > some examples:
> >
> > a) Nova paste.ini already contains OSProfiler middleware[1].
> >
> > b) Keystone paste.ini doesn't contain OSProfiler but the file is exposed
> > in Kolla-ansible.
> > Thus it can be overwritten[2] by providing an alternate paste file using a
> > node_custom_config directory.
> >
> 
> I'm looking through keystone's sample paste file we keep in the project and
> we do have osprofiler in our v2 and v3 pipelines [0] [1]. It looks like it
> has been in keystone's sample paste file since Mitaka [2]

My bad: Kolla maintains a copy of the file (without osprofiler) which
replaces the one shipped with Keystone (with osprofiler).

> 
> 
> [0]
> https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L43-L44
> [1]
> https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L68
> [2]
> https://github.com/openstack/keystone/commit/639e36adbfa0f58ce2c3f31856b4343e9197aa0e
> 
> 
> >
> > c) Neutron paste.ini doesn't contain OSProfiler middleware[3]. For
> > devstack, a hook can reconfigure the file at deploy time[4].
> > For Kolla, it seems that the only solution right now is to rebuild the
> > whole docker image.
> >
> > As a user of Kolla and OSProfiler, a) is the most convenient option.
> >
> > Regarding b) and c), is it a deliberate choice to ship the paste deploy
> > files without OSProfiler middleware?
> >
> > Do you think we could converge? Ideally having a) for every API service?
> >
> > Best,
> >
> > Matt
> >
> > [1]: https://github.com/openstack/nova/blob/0d31fb303e07b7ed9f55b9c823b43e
> > 6db5153ee6/etc/nova/api-paste.ini#L29-L37
> > [2]: https://github.com/openstack/kolla-ansible/blob/
> > fe61612ec6db469cccf2d2b4f0bd404ad4ced112/ansible/roles/
> > keystone/tasks/config.yml#L119
> > [3]: https://github.com/openstack/neutron/blob/
> > e4557a7793fbf3461bfae36ead41ee4d349920ab/neutron/tests/
> > contrib/hooks/osprofiler
> > [4]: https://github.com/openstack/neutron/blob/
> > e4557a7793fbf3461bfae36ead41ee4d349920ab/etc/api-paste.ini#L6-L9
> >
> 



Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] osprofiler in paste deploy files

2017-05-29 Thread Matthieu Simonin


- Mail original -
> De: "Eduardo Gonzalez" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Lundi 29 Mai 2017 11:53:53
> Objet: Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] 
> osprofiler in paste deploy files
> 
> Hi Matt,
> 
> A) As far as I can see, most services with OSProfiler implemented have it
> enabled by default in their paste-ini files, except Neutron and maybe a
> couple more.
> 
> B) Kolla's custom keystone-paste-ini file will probably be "removed" for
> the Pike release, using keystone defaults with osprofiler enabled. We will
> add another method to customize paste-ini, of course.

Is there any reference for a (future) method to customize paste-ini files?
This would be a really helpful feature (and not only for OSProfiler).

> 
> C), For now in kolla, the neutron paste-ini cannot be customized by regular
> methods; a quick but not-so-good solution would be to exec into the container,
> modify the file and restart the container, or to modify the kolla-ansible
> playbooks to allow copying your custom api-paste.ini.
> 
> Agree that option A is the best method.
> 
> FYI, at this moment there is a change under review to implement OSProfiler
> in kolla-ansible. [0]

Thanks for pointing this out. It looks like a complementary feature for fully
enabling OSProfiler in Kolla. In my opinion, making sure the various api-paste
files enable OSProfiler comes first.

Best,

Matt

> 
> [0] https://review.openstack.org/#/c/455628/
> 
> Regards, Eduardo
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][osprofiler][keystone][neutron][nova] osprofiler in paste deploy files

2017-05-29 Thread Matthieu Simonin
Hello, 

I'd like to get more insight into OSProfiler support in paste-deploy files, as
it seems inconsistent across projects.
As a result, the way you can enable it on the Kolla side differs. Here are some
examples:

a) Nova paste.ini already contains OSProfiler middleware[1].

b) Keystone paste.ini doesn't contain OSProfiler but the file is exposed in 
Kolla-ansible. 
Thus it can be overwritten[2] by providing an alternate paste file using a 
node_custom_config directory.

c) Neutron paste.ini doesn't contain OSProfiler middleware[3]. For devstack, a 
hook can reconfigure the file at deploy time[4].
For Kolla, it seems that the only solution right now is to rebuild the whole 
docker image.
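For reference, the middleware wiring that a) refers to looks roughly like the
snippet below, adapted from nova's api-paste.ini [1]. The pipeline name and the
exact filter list are nova's and will differ in other projects; treat this as a
sketch rather than a drop-in configuration.

```ini
; Sketch of OSProfiler enabled in a paste-deploy file (adapted from
; nova's api-paste.ini [1]; pipeline names vary per project).
[composite:openstack_compute_api_v21]
use = call:nova.api.auth:pipeline_factory_v21
keystone = cors compute_req_id faultwrap sizelimit osprofiler authtoken keystonecontext osapi_compute_app_v21

; The osprofiler filter itself is project-agnostic:
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
```

The point of option a) is that this filter is already present in the shipped
file, so enabling profiling only requires flipping the oslo.config options.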

As a user of Kolla and OSProfiler, a) is the most convenient option. 

Regarding b) and c), is it a deliberate choice to ship the paste-deploy files 
without the OSProfiler middleware? 

Do you think we could converge? Ideally, having a) for every API service?

Best,

Matt

[1]: 
https://github.com/openstack/nova/blob/0d31fb303e07b7ed9f55b9c823b43e6db5153ee6/etc/nova/api-paste.ini#L29-L37
[2]: 
https://github.com/openstack/kolla-ansible/blob/fe61612ec6db469cccf2d2b4f0bd404ad4ced112/ansible/roles/keystone/tasks/config.yml#L119
[3]: 
https://github.com/openstack/neutron/blob/e4557a7793fbf3461bfae36ead41ee4d349920ab/neutron/tests/contrib/hooks/osprofiler
[4]: 
https://github.com/openstack/neutron/blob/e4557a7793fbf3461bfae36ead41ee4d349920ab/etc/api-paste.ini#L6-L9

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Multi-Regions Support

2017-01-06 Thread Matthieu Simonin


- Mail original -
> De: "Jay Pipes" 
> À: openstack-dev@lists.openstack.org
> Envoyé: Vendredi 6 Janvier 2017 21:42:46
> Objet: Re: [openstack-dev] [kolla] Multi-Regions Support
> 
> On 01/06/2017 03:23 PM, Sam Yaple wrote:
> > This should be read as MariaDB+Galera for replication. It is a
> > highly-available database.
> 
> Don't get me wrong. I love me some Galera. :) However, what the poster
> is really working towards is an implementation of the VCPE and eVCPE use
> cases for ETSI NFV. These use cases require a highly distributed compute
> fabric that can withstand long disruptions in network connectivity
> (between POPs/COs and the last mile of network service) while still
> being able to service compute and network functions at the customer premise.
> 
> Galera doesn't tolerate network disruption of any significant length of
> time. At all. If there is a Keystone services running on the customer
> premise that is connecting to a Galera database, and that Galera
> database's connectivity to its peers is disrupted, down goes the whole
> on-premise cloud fabric. And that's exactly what I believe the original
> poster is attempting to avoid. Thus my not understanding the choice here.
> 

Jay, you are thinking too far ahead ;) 

The goal of this thread is to see how Kolla can deploy a multi-region scenario.
Remarks and contributions to progress in that direction are the goal of the
initial post.

Best,

Matt

> Best,
> -jay
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Performance][shaker] Triangular topology

2016-12-07 Thread Matthieu Simonin


- Mail original -
> De: "Ilya Shakhat" <ishak...@mirantis.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Mardi 6 Décembre 2016 14:39:28
> Objet: Re: [openstack-dev] [Performance][shaker] Triangular topology
> 
> Hi Matt,
> 
> I would suggest to let users specify custom topology in Shaker scenario via
> graphs (e.g. directed triangle would look like: A -> B, B -> C, C -> A),
> where every pair of nodes is pair of VMs and every edge corresponds to the
> traffic flow. The above example will be deployed as 6 VMs, 2 per compute
> node (since we need to separate ingress and egress flows).

I totally agree as it could cover a lot of use cases.

> 
> I already have a patch that allows to deploy graph-based topology:
> https://review.openstack.org/#/c/407495/ but it does not configure
> concurrency properly yet (concurrency still increments by pairs, solution
> tbd)

I'm guessing that changing the semantics of concurrency with regard to the
other scenarios is maybe not a good thing.

As far as I understand a concurrency of 3 with the following graph

- [A, B]
- [B, C]
- [C, A]

will lead to 3 flows (potentially bi-directional) being active. 

So without changing the current semantics of concurrency, 
we could have all flows active with a concurrency of 6 for the following:

graph:
- [A, B]
- [B, C]
- [C, A]
- [A, B]
- [B, C]
- [C, A]

In that case, what would a concurrency of 3 mean with the above graph? 
In other words, can we make sure that [A,B], [B,C] and [C,A] are all active? 
More generally, for a custom graph, maybe we can find a way to specify in 
the yaml which pairs should be active for a given concurrency level. 
In the above case this could be (pseudo-yaml): 
graph:
- [A, B],1
- [B, C],2
- [C, A],3
- [A, B],4
- [B, C],5
- [C, A],6

All pairs with a number less than or equal to the concurrency level would be 
considered active.
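To make the proposed rule concrete, here is a tiny sketch of the selection
logic behind that pseudo-yaml. All names are illustrative; this is not Shaker's
actual API.

```python
# Sketch: each graph entry is a (pair, rank) tuple, and a concurrency
# level N activates every pair whose rank is <= N. Illustrative only,
# not Shaker's actual API.
def active_pairs(graph, concurrency):
    """Return the pairs whose rank is within the concurrency level."""
    return [pair for pair, rank in graph if rank <= concurrency]

graph = [
    (("A", "B"), 1),
    (("B", "C"), 2),
    (("C", "A"), 3),
    (("A", "B"), 4),
    (("B", "C"), 5),
    (("C", "A"), 6),
]

# With concurrency 3, each edge of the triangle is active exactly once.
print(active_pairs(graph, 3))  # [('A', 'B'), ('B', 'C'), ('C', 'A')]
```

With this convention, concurrency 6 activates every edge twice, recovering the
"all flows active" case discussed above.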

> 
> Please check whether my approach suits your use case, feedback appreciated
> :)

I like it !

> 
> Thanks,
> Ilya
> 
> 2016-11-24 19:57 GMT+04:00 Matthieu Simonin <matthieu.simo...@inria.fr>:
> 
> > Hi Ilya,
> >
> > Thanks for your answer, let me know your findings.
> > In any case I'll be glad to help if needed.
> >
> > Matt
> >
> > ps: I just realized that I forgot to give the thread a proper subject :(.
> > If this thread continues, it's maybe better to change that.
> >
> > - Mail original -
> > > De: "Ilya Shakhat" <ishak...@mirantis.com>
> > > À: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org>
> > > Envoyé: Jeudi 24 Novembre 2016 13:03:33
> > > Objet: Re: [openstack-dev] [Performance][shaker]
> > >
> > > Hi Matt,
> > >
> > > Out of the box Shaker doesn't support such topology.
> > > It shouldn't be hard to implement though. Let me check what needs to be
> > > done.
> > >
> > > Thanks,
> > > Ilya
> > >
> > > 2016-11-24 13:49 GMT+03:00 Matthieu Simonin <matthieu.simo...@inria.fr>:
> > >
> > > > Hello,
> > > >
> > > > I'm looking to shaker capabilities and I'm wondering if this kind
> > > > of accomodation (see attachment also) can be achieved
> > > >
> > > > Ascii (flat) version :
> > > >
> > > > CN1 (2n VMs) <- n flows -> CN2 (2n VMs)
> > > > CN1 (2n VMs) <- n flows -> CN3 (2n VMs)
> > > > CN2 (2n VMs) <- n flows -> CN3 (2n VMs)
> > > >
> > > > In this situation concurrency could be mapped to the number of
> > > > simultaneous flows in use per link.
> > > >
> > > > Best,
> > > >
> > > > Matt
> > > >
> 

Re: [openstack-dev] [Performance][shaker]

2016-11-24 Thread Matthieu Simonin
Hi Ilya, 

Thanks for your answer, let me know your findings.
In any case I'll be glad to help if needed.

Matt

ps: I just realized that I forgot to give the thread a proper subject :(.
If this thread continues, it's maybe better to change that.

- Mail original -
> De: "Ilya Shakhat" <ishak...@mirantis.com>
> À: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Envoyé: Jeudi 24 Novembre 2016 13:03:33
> Objet: Re: [openstack-dev] [Performance][shaker]
> 
> Hi Matt,
> 
> Out of the box Shaker doesn't support such topology.
> It shouldn't be hard to implement though. Let me check what needs to be
> done.
> 
> Thanks,
> Ilya
> 
> 2016-11-24 13:49 GMT+03:00 Matthieu Simonin <matthieu.simo...@inria.fr>:
> 
> > Hello,
> >
> > I'm looking into Shaker's capabilities and I'm wondering if this kind
> > of accommodation (see attachment also) can be achieved.
> >
> > Ascii (flat) version :
> >
> > CN1 (2n VMs) <- n flows -> CN2 (2n VMs)
> > CN1 (2n VMs) <- n flows -> CN3 (2n VMs)
> > CN2 (2n VMs) <- n flows -> CN3 (2n VMs)
> >
> > In this situation concurrency could be mapped to the number of
> > simultaneous flows in use per link.
> >
> > Best,
> >
> > Matt
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Performance][shaker]

2016-11-24 Thread Matthieu Simonin
Hello, 

I'm looking into Shaker's capabilities and I'm wondering if this kind 
of accommodation (see attachment also) can be achieved.

Ascii (flat) version : 

CN1 (2n VMs) <- n flows -> CN2 (2n VMs)
CN1 (2n VMs) <- n flows -> CN3 (2n VMs)
CN2 (2n VMs) <- n flows -> CN3 (2n VMs)

In this situation concurrency could be mapped to the number of simultaneous 
flows in use per link.

Best,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-02 Thread Matthieu Simonin
abase.
The dependency on SQLAlchemy exists today because it simplified the 
implementation of our PoC. If having such a NoSQL db driver makes sense for the 
community (DragonFlow's developers told us that they are also interested in 
such a driver), this dependency can be removed in order to switch directly 
from the object model to the NoSQL one.


Concretely, we think that there are three possible approaches:
1) We can use the SQLAlchemy API as the common denominator between a 
relational and a non-relational implementation of the db.api component. These 
two implementations could continue to converge by sharing a large amount of code.
2) We create a new non-relational implementation (from scratch) of the 
db.api component. It would probably require more work.
3) We are also studying a last alternative: writing a SQLAlchemy engine 
that targets NewSQL databases (scalability + ACID):
 - https://github.com/cockroachdb/cockroach
 - https://github.com/pingcap/tidb
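As a toy illustration of approach 2) — mapping the object model directly onto a
key-value store — here is a minimal sketch, with a plain dict standing in for
Redis. All names and the key convention are illustrative; this is not ROME's
actual API.

```python
import json

# A dict stands in for a Redis instance; keys follow a "<table>:<id>"
# convention and values are JSON-serialized records. Illustrative only.
store = {}

def save(table, obj):
    """Persist one record under its table/id key."""
    store["%s:%s" % (table, obj["id"])] = json.dumps(obj)

def load(table, obj_id):
    """Fetch one record back as a plain dict."""
    return json.loads(store["%s:%s" % (table, obj_id)])

save("instances", {"id": 1, "host": "cn-1", "vm_state": "active"})
print(load("instances", 1)["host"])  # cn-1
```

The hard part that ROME addresses on top of such a mapping is translating
SQLAlchemy-style queries (joins, filters) into operations on these flat keys.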

Last but not least, we expect to have a meeting with Joshua Harlow to see 
whether ROME can become an optional oslo.db driver. We plan to have this 
discussion by mid-May. 
Depending on the outcome, we can rewrite ROME in a more pythonic way. To 
achieve this goal, a full-time engineer will join our team on July 1st. He can 
re-implement ROME from scratch in an appropriate way (as we now know what is 
required to make Nova work with Redis, and with the support of OpenStack core 
developers, we would be able to improve our proposal and continue to increase 
its performance).

Regarding all the AMQP / Cells V2 remarks, it probably makes sense to create 
another thread, as it seems that several points need to be discussed/clarified 
(on our side, we would be interested in contributing to performance 
evaluations of AMQP solutions such as 0MQ, for instance). 


Matthieu Simonin 
for the discovery project
https://beyondtheclouds.github.io/

> 
> [1] https://github.com/BeyondTheClouds/rome
> 
> [2]
> https://github.com/BeyondTheClouds/rome/blob/master/lib/rome/core/expression/expression.py#L172
> 
> [3]
> https://github.com/BeyondTheClouds/rome/blob/master/lib/rome/core/expression/expression.py#L102
> 
> >
> >
> > -- Ed Leafe
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-23 Thread Matthieu Simonin


- Mail original -
> De: "Edward Leafe" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Samedi 23 Avril 2016 19:12:03
> Objet: Re: [openstack-dev] [nova] Distributed Database
> 
> On Apr 23, 2016, at 10:10 AM, Thierry Carrez  wrote:
> 
> >> I think replacing nova's persistent storage layer with a distributed
> >> database would have a great effect - but I do not think it would have
> >> anything to do with the database itself. It would come from the act that
> >> would be completely necessary to accomplish that- completely rewriting
> >> the persistence layer.
> > 
> > So... The folks from the Discovery initiative working on a
> > massively-distributed cloud use case have been working on incremental
> > changes to make that use case better supported by stock OpenStack. That
> > includes an oslo.db driver backed by a distributed database. Being
> > scientists, they ran interesting experiments to see when and where it made
> > sense.
> > 
> > They will present their findings in the upstream dev track, and I think it
> > makes a good data point for this discussion:
> > 
> > https://www.openstack.org/summit/austin-2016/summit-schedule/events/7342
> 
> That’s exactly the scenario I had in mind. When I first was tasked with
> creating cells in the early days of OpenStack, Rackspace wanted a global
> deployment of individual cells that could be addressed individually or as a
> single deployment. It wasn’t the load on the database that was ever the
> issue; it was the inability to keep the data synchronized across the globe.
> I pushed for something better than MySQL at this, but was not successful in
> convincing many others. I’m really looking forward to hearing the results of
> these tests.


We just landed in Austin (a bit tired), but we are motivated by the prospect 
of having some of you in the working group session to discuss!

See you,

Matthieu




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev