Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-21 Thread Mike Bayer



On 05/21/2017 03:38 PM, Monty Taylor wrote:

documentation on the sequence of steps the operator should take.

In the "active" approach, we still document expectations, but we also 
validate them. If they are not what we expect but can be changed at 
runtime, we change them overriding conflicting environmental config, and 
if we can't, we hard-stop indicating an unsuitable environment. Rather 
than providing helper tools, we perform the steps needed ourselves, in 
the order they need to be performed, ensuring that they are done in the 
manner in which they need to be done.


we do this in places like tripleo.   The MySQL configs and such are 
checked into the source tree; they include details like 
innodb_file_per_table, the timeouts used by haproxy, etc.   I know 
tripleo is not the service itself the way Nova is, but it's also not 
exactly something we hand off to the operators to figure out from 
scratch either.


We do some of it in oslo.db as well.  We set things like MySQL SQL_MODE. 
 We try to make sure the unicode-ish flags are set up and that we're 
using utf-8 encoding.
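
(For illustration, a minimal sketch of how that kind of 
connection-level enforcement can be wired up with a SQLAlchemy 
"connect" event - the handler and the exact mode value here are 
illustrative, not oslo.db's actual internals:)

    # Sketch: set MySQL SQL_MODE on every new connection, in the
    # spirit of what oslo.db does.
    from sqlalchemy import create_engine, event

    engine = create_engine(
        "mysql+pymysql://user:pass@localhost/nova?charset=utf8")

    @event.listens_for(engine, "connect")
    def _set_sql_mode(dbapi_conn, connection_record):
        cursor = dbapi_conn.cursor()
        # TRADITIONAL turns silent truncation and similar warnings
        # into hard errors.
        cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
        cursor.close()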




Some examples:

* Character Sets / Collations

We currently enforce at testing time that all database migrations are 
explicit about InnoDB. We also validate in oslo.db that table character 
sets have the string 'utf8' in them (only on MySQL). We do not have any 
check for case-sensitive or case-insensitive collations (these affect 
sorting and comparison operations). Because we don't, different server 
config settings or different database backends for different clouds can 
actually behave differently through the REST API.


To deal with that:

First we'd have to decide whether case sensitive or case insensitive was 
what we wanted. If we decided we wanted case sensitive, we could add an 
enforcement of that in oslo.db, and write migrations to get from case 
insensitive indexes to case sensitive indexes on tables where we 
detected that a case insensitive collation had been used. If we decided 
we wanted to stick with case insensitive, we could similarly add code to 
enforce it on MySQL. To enforce it actively on PostgreSQL, we'd need to 
either switch our code that's using comparisons to use the sqlalchemy 
case-insensitive versions explicitly, or maybe write some sort of 
overloaded driver for PG that turns all comparisons into 
case-insensitive ones by wrapping both sides of comparisons in lower() 
calls (which has some indexing concerns, but let's ignore that for the 
moment). We could also take the 'external' approach and just document 
it, then define API tests and try to tie the insensitive behavior in the 
API to Interop Compliance. I'm not 100% sure how a db operator would 
remediate this - but PG has some fancy computed index features - so 
maybe it would be possible.


let's make the case sensitivity explicitly enforced!
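
(For illustration, one way to make it explicit at the model level is to 
pin a collation on the column rather than inherit the server default - 
a minimal sketch with made-up table/column names:)

    # Sketch: declare a case-sensitive collation explicitly instead
    # of inheriting whatever the MySQL server default happens to be.
    from sqlalchemy import Column, Integer, MetaData, String, Table

    metadata = MetaData()

    instances = Table(
        "instances", metadata,
        Column("id", Integer, primary_key=True),
        # utf8_bin compares bytewise, i.e. case-sensitively, on MySQL.
        Column("hostname", String(255, collation="utf8_bin")),
    )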



A similar issue lurks with the fact that MySQL unicode storage is 3-byte 
by default and 4-byte is opt-in. We could take the 'external' approach, 
document it, and assume the operator has configured their my.cnf with 
the appropriate default, or take an 'active' approach where we override 
it in all the models and write migrations to get us from 3 to 4 byte.


let's force MySQL to use utf8mb4!   Although I am curious what the 
actual use case is that we want to hit here (which gets into the fact 
that zzzeek is ignorant as to which unicode glyphs actually live in 
4-byte utf-8).
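
(For the record: the 4-byte range is everything outside the Basic 
Multilingual Plane - emoji, many historic scripts, and the rarer CJK 
extension ideographs. A minimal sketch of opting a table into utf8mb4, 
with an illustrative table name:)

    # Sketch: a table declared with MySQL's real 4-byte UTF-8.
    from sqlalchemy import Column, Integer, MetaData, Table, Unicode

    metadata = MetaData()

    messages = Table(
        "messages", metadata,
        Column("id", Integer, primary_key=True),
        Column("body", Unicode(255)),
        mysql_charset="utf8mb4",
    )

    # A glyph outside the BMP really does need four bytes:
    assert len(u"\U0001F600".encode("utf-8")) == 4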




* Schema Upgrades

The way you roll out online schema changes is highly dependent on your 
database architecture.


Just limiting to the MySQL world:

If you do Galera, you can roll them out in Total Order or Rolling 
fashion. Total Order locks basically everything while it's happening, so 
it isn't a candidate for "online". In Rolling fashion you apply the 
schema change to one node at a time. If you do that, the application has 
to be able to deal with both forms of the table, and you have to deal 
with ensuring that data can replicate appropriately while the schema 
change is happening.


Galera replicates DDL operations.   If I add a column on a node, it pops 
up on the other nodes too, in a similar way to how transactions are 
replicated, i.e. nearly synchronously.   I would *assume* it has to do 
this in the context of its usual transaction ordering, even though 
MySQL doesn't do transactional DDL, so that if the cluster sees 
transaction A, schema change B, then transaction C that depends on B, 
that ordering is serialized appropriately.   However, even if it doesn't 
do that, the rolling upgrades we do don't start the services talking to 
the new schema structures until the DDL changes are complete, and Galera 
is near-synchronous replication.
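
(The knob behind the "Total Order" vs "Rolling" distinction is Galera's 
wsrep_OSU_method: TOI applies DDL cluster-wide in total order, RSU 
applies it to the local node only so you can roll through nodes one at 
a time. A minimal sketch, with illustrative connection details and 
column names:)

    # Sketch: switch one Galera node to Rolling Schema Upgrade mode,
    # apply the DDL locally, then restore the Total Order default.
    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql+pymysql://user:pass@galera-node-1/nova")

    with engine.connect() as conn:
        conn.execute("SET GLOBAL wsrep_OSU_method = 'RSU'")
        conn.execute(
            "ALTER TABLE instances ADD COLUMN display_name VARCHAR(255)")
        conn.execute("SET GLOBAL wsrep_OSU_method = 'TOI'")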


Also speaking to the "active" question, we certainly have all kinds of 
logic in OpenStack (the optimistic update strategy in particular) that 
takes "Galera" into account.  And of course we have Galera config inside 
of tripleo.  So that's kind of the "active" approach, I think.





If you do DRBD 

Re: [openstack-dev] [tc] revised Postgresql support status patch for governance

2017-05-21 Thread Mike Bayer



On 05/21/2017 03:51 PM, Monty Taylor wrote:


So I don't see the problem of "consistent utf8 support" having much to
do with whether or not we support Postgresql - you of course need your
"CREATE DATABASE" to include the utf8 charset like we do on MySQL, but
that's it.


That's where we stand, which means that we're doing 3-byte UTF8 on 
MySQL and 4-byte on PG. That's actually an API-facing difference today. 
It's work to dig out of on the MySQL side; maybe the PG side is just all 
super cool and done. But it's still a consideration point.


The biggest concern for me is that we're letting API behavior be 
dictated by database backend and/or database config choices. The API 
should behave like the API behaves.


The API should behave like "we store utf-8".  We should accept that 
"utf-8" means "up to four bytes" and make sure we are using utf8mb4 for 
all MySQL backends.  That MySQL has made this bizarre decision about 
what "utf8" is to be should be treated as a bug in MySQL that needs to 
be worked around by the calling application.   Other databases that want 
to work with openstack need to also do utf-8 with four bytes.  We can 
easily add some tests to oslo.db that round trip an assortment of 
unicode glyphs to confirm this (if there's one kind of test I've written 
more than anyone should, it's pushing out non-ascii bytes to a database 
and testing that they come back the same).
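
(A minimal sketch of such a round-trip test - SQLite in-memory here for 
brevity, where the oslo.db version would run against each real backend:)

    # Sketch: push an assortment of unicode glyphs through the
    # database and assert they come back the same.
    import sqlalchemy as sa

    SAMPLES = [u"ascii", u"caf\u00e9", u"\u65e5\u672c\u8a9e",
               u"\U0001F600"]

    engine = sa.create_engine("sqlite://")
    meta = sa.MetaData()
    t = sa.Table("t", meta, sa.Column("txt", sa.Unicode(64)))
    meta.create_all(engine)

    with engine.connect() as conn:
        conn.execute(t.insert(), [{"txt": s} for s in SAMPLES])
        rows = [row[0] for row in conn.execute(sa.select([t.c.txt]))]

    assert sorted(rows) == sorted(SAMPLES)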





Sure, it's work. But that's fine. The point of that list was that there
is stuff that is work because SQLA is a leaky abstraction. Which is fine
if there are people taking that work off the table.


I would not characterize this as SQLA being a leaky abstraction.


yee !   win!:)



I'd say that at some point we didn't make a decision as to what we 
wanted to do with text input, how it would be stored or not stored, 
and how it would be searched and sorted. Case sensitive collations have 
been available to us the entire time, but we never decided whether our
API was case sensitive or case insensitive. OR - we *DID* decide that 
our API is case insensitive, and the fact that it isn't on some 
deployments is a bug. I'm putting money on the 'nobody made a decision' 
answer.


I wasn't there, but perhaps early OpenStack versions didn't have 
"textual search" kinds of features?   Maybe they were added by folks who 
didn't consider the case sensitivity issue at the time. I'd be strongly 
in favor of making use of oslo.db / SQLAlchemy constructs that are 
explicitly case sensitive or not.  It's true, SQLAlchemy also does not 
force you to "make a decision" on this; if it did, this would be in the 
"hooray, the abstraction did not leak!" category.   But SQLA chooses to 
be kind of hands-off about things like this, as developers often don't 
want a decision made for them here (lest it add even more to the 
"SQLAlchemy forces me to make so many decisions!" complaint I have to 
read on twitter every day).








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-21 Thread Mike Bayer



On 05/20/2017 12:04 PM, Julien Danjou wrote:

On Fri, May 19 2017, Mike Bayer wrote:


IMO that's a bug for them.


Of course it's a bug. IIRC Mehdi tried to fix it without much success.


I'm inspired to see that Keystone, Nova etc. are
able to move between an eventlet backend and a mod_wsgi backend.   IMO
eventlet is really not needed for those services that present a REST interface.
Although for a message queue with lots of long-running connections that receive
events, that's a place where I *would* want to use a polling / non-blocking
model.  But I'd use it explicitly, not with monkeypatching.


+1


I'd ask why not oslo.cotyledon, but it seems there's a faction here that is
overall moving out of the OpenStack umbrella in any case.


Not oslo, because it can be used by other projects than just OpenStack.
And it's a condition of success. As Mehdi said, Oslo has been deserted
in recent cycles, so putting a lib there has very little chance of
seeing its community and maintenance help grow, whereas trying to reach
the whole Python ecosystem is more likely to get traction.

As a maintainer of SQLAlchemy I'm surprised you even suggest that. Or do
you plan on doing oslo.sqlalchemy? ;)


I do oslo.db (which also is not "abandoned" in any way).  The point of 
oslo is that it is an openstack-centric mediation layer between some 
common service/library and openstack.


It looks like there already is essentially such a layer for cotyledon. 
I'd just name it "oslo.cotyledon" :)  or oslo.something.  We have a 
moose.  It's cool.






Basically I think openstack should be getting off eventlet in a big way, so I
guess my sentiment here is that the Gnocchi / Cotyledon / etc. faction is just
splitting off rather than setting any kind of direction for the rest of
openstack to start looking at.  But that's only an impression; maybe projects
will use Cotyledon anyway.   If every project goes off and uses something
completely different though, then I think we're losing.   The point of oslo
was to prevent that.


I understand your concern and opinion. I think you, me and Mehdi don't
have the same experience as contributors in OpenStack. I invite you to
try moving any major OpenStack project to something like oslo.service2
or Cotyledon, or to achieve any technical debt resolution in OpenStack,
to get a view on how hard it is to tackle. Then you'll see where we
stand. :)


Sure, that's an area where I think the whole direction of openstack 
would benefit from more centralized planning, but I have been here just 
enough to observe that this kind of thing has been discussed before and 
it is of course very tricky to implement.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-21 Thread Adrian Turjak

On 20/05/17 09:31, Mike Bayer wrote:
>
> On 05/18/2017 06:13 PM, Adrian Turjak wrote:
>>
>> So, specifically in the realm of Keystone, since we are using sqlalchemy
>> we already have Postgresql support, and since Cockroachdb does talk
>> Postgres it shouldn't be too hard to back Keystone with it. At that
>> stage you have a Keystone DB that could be multi-region, multi-master,
>> consistent, and mostly impervious to disaster. Is that not the holy
>> grail for a service like Keystone? Combine that with fernet tokens and
>> suddenly Keystone becomes a service you can't really kill, and can
>> mostly forget about.
>
> So this is exhibit A for why I think keeping some level of "this might
> need to work on other databases" within a codebase is always a great
> idea even if you are not actively supporting other DBs at the moment.
> Even if OpenStack dumped Postgresql completely, I'd not take the
> rudimentary PG-related utilities out of oslo.db, nor would I rename all
> the "mysql_XYZ" facilities to be "XYZ".
I have posted on the reviews, but my hope is that if we do drop
PostgreSQL support we at least also state that we will not support any
database specific features that preclude someone from using/supporting a
different database should they wish to, and that we will not deny
patches that add/fix support for other databases provided they do not
interfere with support for the ones we do 'support' officially.
>
> Cockroachdb advertises SQLAlchemy compatibility very prominently. 
> While their tutorial at
> https://www.cockroachlabs.com/docs/build-a-python-app-with-cockroachdb-sqlalchemy.html
> says it uses psycopg2 as the database driver, they have implemented
> their own "cockroachdb://" dialect on top of it, which likely smooths
> out the SQL dialect and connectivity quirks between real Postgresql
> and CockroachDB.
>
> This is not the first "distributed database" to build on the
> Postgresql protocol, I did a bunch of work for a database that started
> out called "Akiban", then got merged to "FoundationDB", and then sadly
> was sucked into a black hole shaped like a huge Apple and the entire
> product and staff were gone forever.  CockroachDB seems to be filling
> in that same hole that I was hoping FoundationDB was going to do
> (until they fell into said hole).
>
>>
>> I'm welcome to being called mad, but I am curious if anyone has looked
>> at this. I'm likely to do some tests at some stage regarding this,
>> because I'm hoping this is the solution I've been hoping to find for
>> quite a long time.
>
> I'd have a blast if Keystone wanted to get into this.   Distributed /
> NewSQL is something I have a lot of optimism about.   Please keep me
> looped in.
>

I can't speak for the Keystone team itself; this was mainly my own
speculation and general ideas about how to better handle
multi-master/DR. I know that standard MySQL async multi-master would
work for a lot of cases, but async multi-master has a bunch of problems
that are often very weird, rare, or painful and hard to debug, so people
tend to avoid it unless there is no other option.  That's why I have a
lot of interest in CockroachDB: it was built pretty much entirely to
solve just that case, in the Google Spanner mold.

The scenario I'm interested in here is Keystone set up for Fernet
tokens, multi-site, multi-master, and geo-loadbalanced so you always
talk to your nearest datacenter, with non-admin users able to
self-manage their own projects, users, and roles within some scope. Data
thus needs to be consistent across regions because of DR and
geo-loadbalancing, but you may also have multiple people editing the
same users at the same time in different places. I'm curious how much
better CockroachDB may handle those cases, but from the looks of things
a lot of those problems aren't there or only crop up in very large or
complex transactions, which Keystone doesn't seem to have too many of (I
could be wrong though). My use case likely isn't going to hit many
problems with multi-master since we are currently only running 3 sites
that share auth, but I prefer to plan for the worst case scenario! Plus
the more I can trust the DB layer, the more I can focus elsewhere.
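
(For anyone wanting to poke at this: a minimal sketch of pointing 
SQLAlchemy at CockroachDB through the cockroachdb:// dialect Mike 
mentions above, assuming the CockroachDB SQLAlchemy dialect package is 
installed on top of psycopg2; host and database names are illustrative:)

    # Sketch: connect to CockroachDB through its SQLAlchemy dialect.
    from sqlalchemy import create_engine

    engine = create_engine(
        "cockroachdb://keystone@db-node-1:26257/keystone")

    with engine.connect() as conn:
        print(conn.execute("SELECT version()").scalar())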


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-21 Thread Monty Taylor

On 05/19/2017 05:10 PM, Matt Riedemann wrote:

On 5/19/2017 3:35 PM, Monty Taylor wrote:

Heck - while I'm on floating ips ... if you have some pre-existing
floating ips and you want to boot servers on them and you want to do
that in parallel, you can't. You can boot a server with a floating ip
that did not pre-exist if you get the port id of the fixed ip of the
server and then pass that id to the floating ip create call. Of course,
the server doesn't return the port id in the server record, so at the
very least you need to make a GET /ports.json?device_id={server_id}
call. Of course, what you REALLY need to find is the port_id of the ip
of the server that came from a subnet that has 'gateway_ip' defined,
which is even more fun since ips are associated with _networks_ on the
server record and not with subnets.


A few weeks ago I think we went down this rabbit hole in the nova
channel, which led to this etherpad:

https://etherpad.openstack.org/p/nova-os-interfaces

It was really a discussion about the weird APIs that nova has and a lot
of the time our first question is, "why does it return this, or that, or
how is this consumed even?", at which point we put out the Monty signal.


That was a fun conversation!


During a seemingly unrelated forum session on integrating searchlight
with nova-api, operators in the room were saying they wanted to see
ports returned in the server response body, which I think Monty was also
saying when we were going through that etherpad above.


I'd honestly like the contents you get from os-interfaces to just always 
be returned as part of the server record. Having it as a second REST 
call isn't terribly helpful - if I need to make an additional call per 
server, I might as well just go call neutron. That way the only 
per-server query I really need to make is GET 
/ports.json?device_id={server_id} - since networks and subnets can be 
cached.
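
(That per-server query, as a minimal sketch against the raw REST API - 
the endpoint URL is illustrative:)

    # Sketch: the one per-server neutron call described above;
    # networks and subnets can be cached separately.
    import requests

    NEUTRON = "https://neutron.example.com:9696/v2.0"

    def ports_for_server(server_id, token):
        resp = requests.get(
            NEUTRON + "/ports.json",
            params={"device_id": server_id},
            headers={"X-Auth-Token": token})
        resp.raise_for_status()
        return resp.json()["ports"]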


However, if I could do GET /servers/really-detailed or something and get 
/servers/detail + /os-interfaces in one go for all of the servers in my 
project, that would be an efficiency win.



This goes back to a common issue we/I have in nova, which is that we
don't know who is using which APIs and how. The user survey isn't going
to give us this data. Operators probably don't have this data, unless
they are voicing it as API users themselves. But it would be really
useful to know which gaps various tools in the ecosystem need to
overcome by making multiple API calls to possibly multiple services to
get a clear picture to answer some question, and how we can fix that in
a single place (maybe the compute API). A backlog spec in nova could be
a simple place to start, or just explaining the gaps in the mailing list
(separate targeted thread of course).




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql support status patch for governance

2017-05-21 Thread Monty Taylor

On 05/18/2017 02:49 PM, Sean Dague wrote:

On 05/18/2017 01:02 PM, Mike Bayer wrote:



On 05/17/2017 02:38 PM, Sean Dague wrote:


Some of the concerns/feedback has been "please describe things that are
made harder by this being an abstraction", so examples are provided.


so let's go through this list:

- OpenStack services taking a more active role in managing the DBMS

, "managing" is vague to me, are we referring to the database
service itself, e.g. starting / stopping / configuring?   installers
like tripleo do this now, pacemaker is standard in HA for control of
services, I think I need some background here as to what the more active
role would look like.


I will leave that one for mordred; it was his concern.


I have written a novel on this topic just now in a thread titled

  "[tc] Active or passive role with our database layer"




- The ability to have zero down time upgrade for services such as
  Keystone.

So "zero down time upgrades" seems to have broken into:

* "expand / contract with the code carefully dancing around the
existence of two schema concepts simultaneously", e.g. nova, neutron.
AFAIK there is no particular issue supporting multiple backends on this
because we use alembic or sqlalchemy-migrate to abstract away basic
ALTER TABLE types of feature.


Agree. But there are still issues with designing the schema upgrades 
themselves to be compatible with replication streams or other online 
schema update constraints.
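
(For concreteness, the "expand" half of that pattern in alembic is 
purely additive, so old and new code can run side by side; the 
"contract" migration that drops the old column ships only after every 
service has moved over. A minimal sketch with made-up names:)

    # Sketch: an additive, nullable "expand" migration.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.add_column(
            "instances",
            sa.Column("display_name", sa.String(255), nullable=True))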



* "expand / contract using server side triggers to reconcile the two
schema concepts", e.g. keystone.   This is more difficult because there
is currently no "trigger" abstraction layer.   Triggers represent more
of an imperative programming model vs. typical SQL,  which is why I've
not taken on trying to build a one-size-fits-all abstraction for this in
upstream Alembic or SQLAlchemy.   However, it is feasible to build a
"one-size-that-fits-openstack-online-upgrades" abstraction.  I was
trying to gauge interest in helping to create this back in the
"triggers" thread, in my note at
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102345.html,
which also referred to some very raw initial code examples.  However, it
received strong pushback from a wide range of openstack veterans, which
led me to believe this was not a thing that was happening.   Apparently
Keystone has gone ahead and used triggers anyway, however I was not
pulled into that process.   But if triggers are to be "blessed" by at
least some projects, I can likely work on this problem for MySQL /
Postgresql agnosticism.  If keystone is using triggers right now for
online upgrades, I would ask, are they currently working on Postgresql
as well with PG-specific triggers, or does Postgresql degrade into a
"non-online" migration scenario if you're running Keystone?


This is the triggers conversation which, while I have issues with it, is
the only path forward now if you are doing keystone in a load balancer
and need to retain HA through the process.


I also have issues with this - and I continue to reject categorically 
the assertion that it's the only path forward.


It's not a normal or suggested way to deal with this. There ARE 
best-practice suggested ways to deal with this ... but to the point of 
the other email, they require being more intimate with the HA architecture.



No one is looking at pg here. And yes, everything not mysql would just
have to take the minimal expand / contract downtime. Data services like
Keystone / Glance, whose data is their REST API, definitely have
different concerns than Nova dropping its control plane for 30s to
recycle code and apply db schema tweaks.


Depending on the app, nova's control plane is just as much of a concern. 
I agree - there are certainly plenty of workloads out there where it's 
not - but there is an issue at hand that needs to be solved, and it 
needs to be solved one time and then always work.



- Consistent UTF8 4 & 5 byte support in our APIs

"5 byte support" appears to refer to utf-8's ability to be...well a
total of 6 bytes.But in practice, unicode itself only needs 4 bytes
and that is as far as any database supports right now since they target
unicode (see https://en.wikipedia.org/wiki/UTF-8#Description).  That's
all any database we're talking about supports at most.  So...lets assume
this means four bytes.


The 5 byte statement came in via a bug to Nova; it might have been
confused, and I might have been confused in interpreting it. Let's
assume it's invalid now and move to 4 byte.


Yes.



From the perspective of database-agnosticism with regards to database
and driver support for non-ascii characters, this problem was solved by
SQLAlchemy well before Python 3 existed, when many DBAPIs would
literally crash if they received a u'' string and the rest of them
would churn out garbage; SQLAlchemy implemented a full encode/decode
layer on top of the Python DBAPI to fix this.  The situation is vastly
improved now that all DBAPIs 

[openstack-dev] [tc] Active or passive role with our database layer

2017-05-21 Thread Monty Taylor

Hi all!

As the discussion around PostgreSQL has progressed, it has become clear 
to me that there is a decently deep philosophical question on which we 
do not currently share either a definition or agreement. I believe that 
the lack of clarity on this point is one of the things that makes the 
PostgreSQL conversation difficult.


I believe the question is between these two things:

* Should OpenStack assume the existence of an external database service 
that it treats as a black box on the other side of a connection string?


* Should OpenStack take an active and/or opinionated role in managing 
the database service?


A potentially obvious question about that (asked by Mike Bayer in a 
different thread) is: "what do you mean by managing?"


What I mean by managing is doing all of the things you can do related to 
database operational controls short of installing the software, writing 
the basic db config files to disk and stopping and starting the 
services. It means being much more prescriptive about what types of 
config we support, validating config settings that cannot be overridden 
at runtime and refusing to operate if they are unworkable.
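
(As a minimal sketch of that posture - the variable checked and the 
required value are illustrative:)

    # Sketch: validate a server-side setting at startup and refuse to
    # operate in an unsuitable environment.
    import sqlalchemy as sa

    def validate_environment(engine):
        with engine.connect() as conn:
            charset = conn.execute(
                "SELECT @@character_set_server").scalar()
        if charset != "utf8mb4":
            raise RuntimeError(
                "unsuitable environment: server charset %r, "
                "need utf8mb4" % charset)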


Why would we want to be 'more active'? When managing and tuning 
databases, there are some things that are driven by the environment and 
some things that are driven by the application.


Things that are driven by the environment include things like the amount 
of RAM actually available, whether or not the machines running the 
database are dedicated or shared, firewall settings, selinux settings 
and what versions of software are available.


Things that are driven by the application are things like character set 
and collation, schema design, data types, schema upgrade and HA strategies.


One might argue that HA strategies are an operator concern, but in 
reality the set of workable HA strategies is tightly constrained by how 
the application works, and pairing an application expecting one HA 
strategy with a deployment implementing a different one can have 
negative results ranging from unexpected downtime to data corruption.


For example: an HA strategy using slave promotion and a VIP that points 
at the current write master, paired with an application incorrectly 
configured for such a setup, can lead to writes to the wrong host after 
a failover event and an application that seems to be running fine until 
the data turns up weird after a while.


For the areas in which the characteristics of the database are tied 
closely to the application behavior, there is a constrained set of valid 
choices at the database level. Sometimes that constrained set only has 
one member.


The approach to those is what I'm talking about when I ask the question 
about "external" or "active".


In the "external" approach, we document the expectations and then write 
the code assuming that the database is set up appropriately. We may 
provide some helper tools, such as 'nova-manage db sync' and 
documentation on the sequence of steps the operator should take.


In the "active" approach, we still document expectations, but we also 
validate them. If they are not what we expect but can be changed at 
runtime, we change them overriding conflicting environmental config, and 
if we can't, we hard-stop indicating an unsuitable environment. Rather 
than providing helper tools, we perform the steps needed ourselves, in 
the order they need to be performed, ensuring that they are done in the 
manner in which they need to be done.


Some examples:

* Character Sets / Collations

We currently enforce at testing time that all database migrations are 
explicit about InnoDB. We also validate in oslo.db that table character 
sets have the string 'utf8' in them (only on MySQL). We do not have any 
check for case-sensitive or case-insensitive collations (these affect 
sorting and comparison operations). Because we don't, different server 
config settings or different database backends for different clouds can 
actually behave differently through the REST API.


To deal with that:

First we'd have to decide whether case sensitive or case insensitive was 
what we wanted. If we decided we wanted case sensitive, we could add an 
enforcement of that in oslo.db, and write migrations to get from case 
insensitive indexes to case sensitive indexes on tables where we 
detected that a case insensitive collation had been used. If we decided 
we wanted to stick with case insensitive, we could similarly add code to 
enforce it on MySQL. To enforce it actively on PostgreSQL, we'd need to 
either switch our code that's using comparisons to use the sqlalchemy 
case-insensitive versions explicitly, or maybe write some sort of 
overloaded driver for PG that turns all comparisons into 
case-insensitive ones by wrapping both sides of comparisons in lower() 
calls (which has some indexing concerns, but let's ignore that for the 
moment). We could also take the 'external' approach and just document 
it, then 

[openstack-dev] [trove][all] Weekly meeting time change - 1500UTC #openstack-meeting-alt

2017-05-21 Thread Amrith Kumar
The Trove weekly meeting time has been changed from 1800UTC on Wednesdays to
1500UTC on Wednesdays[1]. Thanks to Trevor for following up on this action
item from the discussions we had at the summit in Boston.

This change has been made to accommodate some new participants in the
community from Europe and China; advancing the meeting time by three
hours makes the time more convenient for them and not terrible for the rest
of us.

The first meeting at this new time[2] will be on this coming Wednesday
the 24th. As always, the meeting agenda can be found at [3].

Thanks,

-amrith

[1] https://review.openstack.org/#/c/466381/
[2] http://eavesdrop.openstack.org/#Trove_(DBaaS)_Team_Meeting
[3] https://wiki.openstack.org/wiki/Meetings/TroveMeeting
--
Amrith Kumar
GPG: 0x5e48849a9d21a29b



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-21 Thread Mehdi Abaakouk

On Fri, May 19, 2017 at 02:04:05PM -0400, Sean Dague wrote:

You end up replicating the Ceilometer issue where there was a breakdown
in getting needs expressed / implemented, and the result was a service
doing heavy polling of other APIs (because that's the only way it could
get the data it needed).


Not related to the topic, but Ceilometer doesn't have this issue
anymore. Since Nova writes the uuid of the instance inside the libvirt
instance metadata, we just associate libvirt metrics with the instance
uuid and then correlate them with the full metadata we receive via
notification. We don't poll nova at all anymore.
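
(The mechanism, as a minimal sketch using the libvirt python bindings - 
error handling omitted, and assuming Nova's metadata namespace URI:)

    # Sketch: map libvirt domains to Nova instances without calling
    # the Nova API, by reading the metadata Nova stashes in the
    # domain XML.
    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.0"

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS)
        print(dom.UUIDString(), xml)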

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev