Re: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination

2014-12-29 Thread Charles Wang
Congrats Andrey, well deserved.

On 12/26/14, 9:16 AM, isviridov isviri...@mirantis.com wrote:

Hello stackers and magnetians,

I suggest nominating Andrey Ostapenko [1] to the MagnetoDB core team.

Over the last months he has made a huge contribution to MagnetoDB [2].
Andrey successfully drives the Tempest and python-magnetodbclient work.

Please raise your hands.

Thank you,
Ilya Sviridov

[1] http://stackalytics.com/report/users/aostapenko
[2] http://stackalytics.com/report/contribution/magnetodb/90






Re: [openstack-dev] [MagnetoDB] MagnetoDB events notifications

2014-05-28 Thread Charles Wang
Hi Flavio,

Thank you very much for taking the time to review the MagnetoDB Notification
spec. For Oslo Notifier vs. Oslo Messaging, could you please provide links
to example projects showing how Oslo Messaging's Notifier component is
used in OpenStack? I noticed Oslo Notifier is being graduated into Oslo
Messaging, but it seems both are actively being developed.

https://blueprints.launchpad.net/oslo/+spec/graduate-notifier-middleware
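For context, my current understanding of the basic usage is roughly the following (a minimal sketch assuming a configured transport; the publisher id, topic, event type and payload are placeholders, not MagnetoDB's actual notification format):

from oslo.config import cfg
from oslo import messaging

# Load the messaging transport from the service configuration (rabbit/qpid/...).
transport = messaging.get_transport(cfg.CONF)

# The 'messaging' driver emits notifications over that transport;
# publisher_id and topic here are just placeholders.
notifier = messaging.Notifier(transport,
                              driver='messaging',
                              publisher_id='magnetodb.api',
                              topic='notifications')

# The context can be a plain dict; event type and payload are illustrative.
notifier.info({}, 'magnetodb.table.create', {'table_name': 'my_table'})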

Charles Wang
charles_w...@symantec.com


On 5/28/14, 8:17 AM, Flavio Percoco fla...@redhat.com wrote:

On 23/05/14 08:55 -0700, Charles Wang wrote:
Folks,

Please take a look at the initial draft of MagnetoDB Events and
Notifications
wiki page:  https://wiki.openstack.org/wiki/MagnetoDB/notification. Your
feedback will be appreciated.

Just one nit.

The wiki page mentions that Oslo Notifier will be used. Oslo Notifier
is on its way to deprecation; oslo.messaging [0] should be used instead.

[0] http://docs.openstack.org/developer/oslo.messaging/


-- 
@flaper87
Flavio Percoco




Re: [openstack-dev] [MagnetoDB] MagnetoDB events notifications

2014-05-27 Thread Charles Wang
Hi Dmitriy,

Thank you very much for your feedback.

Although the MagnetoDB Events & Notifications component has some similarities to 
Ceilometer, its scope is much narrower. We only plan to provide immediate and 
periodic notifications of MagnetoDB table/data item CRUD activities based on the 
Oslo notification mechanism. There is no backend database storing them, and no 
query API for those notifications. They are different from Ceilometer metrics and 
events. In the future, when we integrate with Ceilometer, the MagnetoDB 
notifications will be fed into Ceilometer to collect Ceilometer metrics and/or 
generate Ceilometer events. Basically, Ceilometer will be a consumer of MagnetoDB 
notifications.
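For illustration only, a single table-level notification as a Ceilometer consumer might see it could look roughly like this (field names and values are hypothetical, not a final format):

# Hypothetical shape of one MagnetoDB CRUD notification on the message bus;
# all field names and values here are illustrative only.
sample_notification = {
    'event_type': 'magnetodb.table.create',
    'publisher_id': 'magnetodb.api',
    'priority': 'INFO',
    'timestamp': '2014-05-27T12:00:00Z',
    'payload': {
        'tenant_id': 'abc123',
        'table_name': 'my_table',
    },
}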

I’ll update the wiki further to define our scope more clearly, and possibly drop the 
word “events” to indicate that we focus on notifications.

Regards,

Charles



From: Dmitriy Ukhlov dukh...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, May 26, 2014 at 7:28 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [MagnetoDB] MagnetoDB events & notifications

Hi Charles!

It looks to me like we are duplicating functionality of the Ceilometer project.
Am I wrong? Have you considered Ceilometer integration for monitoring 
MagnetoDB?


On Fri, May 23, 2014 at 6:55 PM, Charles Wang 
charles_w...@symantec.com wrote:
Folks,

Please take a look at the initial draft of MagnetoDB Events and Notifications 
wiki page:  https://wiki.openstack.org/wiki/MagnetoDB/notification. Your 
feedback will be appreciated.

Thanks,

Charles Wang
charles_w...@symantec.com







--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.


Re: [openstack-dev] [MagnetoDB] PTL elections

2014-05-27 Thread Charles Wang
Hi Sergey,

A couple of questions with regard to the process:

1. Is it self-nomination only, or can we nominate someone else?
2. Is the PTL term for Juno only, or for a length of 6 months?

Thanks,

Charles Wang
charles_w...@symantec.com


On 5/26/14, 8:16 AM, Sergey Lukjanov slukja...@mirantis.com wrote:

Hi folks,

due to the requirement to have a PTL for the program, we're running
elections for the MagnetoDB PTL for the Juno cycle. The schedule and
policies are fully aligned with the official OpenStack PTL elections.

You can find more info on the official Juno elections wiki page [0] and
the corresponding page for the MagnetoDB elections [1]; there is some
additional information in the official nominations opening email [2].

Timeline:

Till 05:59 UTC May 30, 2014: open candidacy for the MagnetoDB PTL position
May 30, 2014 - 13:00 UTC June 6, 2014: PTL elections

To announce your candidacy please start a new openstack-dev at
lists.openstack.org mailing list thread with the following subject:
[MagnetoDB] PTL Candidacy.

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
[1] https://wiki.openstack.org/wiki/MagnetoDB/PTL_Elections_Juno
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html

Thank you.


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.



[openstack-dev] [MagnetoDB] MagnetoDB events notifications

2014-05-23 Thread Charles Wang
Folks,

Please take a look at the initial draft of MagnetoDB Events and Notifications 
wiki page:  https://wiki.openstack.org/wiki/MagnetoDB/notification. Your 
feedback will be appreciated.

Thanks,

Charles Wang
charles_w...@symantec.com




Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-30 Thread Charles Wang
Sorry for being late to the party. Since we mostly follow DynamoDB, it makes 
sense not to deviate too much from DynamoDB’s consistency model.

From what I read about DynamoDB, READ consistency is defined to be either 
strong consistency or eventual consistency.

    "ConsistentRead": boolean
    (request syntax: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#API_Query_RequestSyntax)

ConsistentRead (http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#DDB-Query-request-ConsistentRead)

If set to true, then the operation uses strongly consistent reads; otherwise, 
eventually consistent reads are used.

Strongly consistent reads are not supported on global secondary indexes. If you 
query a global secondary index with ConsistentRead set to true, you will 
receive an error message.

Type: Boolean

Required: No

http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html
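For reference, a Query request with ConsistentRead enabled looks roughly like this, shown as a Python dict of the 2014-era low-level request body (the table name and key values are made up):

# Rough shape of a DynamoDB Query request body with ConsistentRead enabled
# (2014-era API with KeyConditions); table name and key values are made up.
query_request = {
    "TableName": "Thread",
    "ConsistentRead": True,
    "KeyConditions": {
        "ForumName": {
            "AttributeValueList": [{"S": "Amazon DynamoDB"}],
            "ComparisonOperator": "EQ",
        }
    },
}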

WRITE consistency is not clearly defined anywhere. From Werner Vogels’ 
description, it seems that writes are replicated synchronously across availability 
zones/data centers. I guess that inside a data center, writes are replicated 
asynchronously. And the API doesn’t allow the user to specify a WRITE 
consistency level.

http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html

Considering the above factors and Cassandra’s capabilities, I propose we use the 
following model (a rough code sketch follows the lists below).

READ:

 *   Strong consistency (read from all replicas, maps to Cassandra READ ALL consistency level)
 *   Eventual consistency (quorum read, maps to Cassandra READ QUORUM)
 *   Weak consistency (not in DynamoDB, maps to Cassandra READ ONE)

WRITE:

 *   Strong consistency (synchronously replicate to all, maps to Cassandra WRITE ALL consistency level)
 *   Eventual consistency (quorum write, maps to Cassandra WRITE QUORUM)
 *   Weak consistency (not in DynamoDB, maps to Cassandra WRITE ANY)

For conditional writes (conditional putItem/deleteItem), only strong and 
eventual consistency should be supported.
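As a rough sketch of the proposed mapping (the MagnetoDB-side names are illustrative; the constants are the Python cassandra-driver's ConsistencyLevel values):

# Rough sketch of the proposed mapping onto Cassandra consistency levels,
# using the python cassandra-driver constants; the MagnetoDB-side names
# ('strong', 'eventual', 'weak') are illustrative.
from cassandra import ConsistencyLevel

READ_CONSISTENCY = {
    'strong':   ConsistencyLevel.ALL,     # read from all replicas
    'eventual': ConsistencyLevel.QUORUM,  # quorum read
    'weak':     ConsistencyLevel.ONE,     # not in DynamoDB
}

WRITE_CONSISTENCY = {
    'strong':   ConsistencyLevel.ALL,     # synchronously replicate to all
    'eventual': ConsistencyLevel.QUORUM,  # quorum write
    'weak':     ConsistencyLevel.ANY,     # not in DynamoDB
}

# Conditional writes (putItem/deleteItem with a condition) would only accept
# 'strong' or 'eventual'.
CONDITIONAL_WRITE_LEVELS = ('strong', 'eventual')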

Thoughts?

Thanks,

Charles

From: Dmitriy Ukhlov dukh...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 29, 2014 at 10:43 AM
To: Illia Khudoshyn ikhudos...@mirantis.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of 
concept

Hi Illia,
WEAK/QUORUM instead of true/false is OK for me.

But we also have STRONG.

What does STRONG mean? In the current concept we are using QUORUM and say that it 
is strong. I guess it is confusing (at least for me) and can have different 
behavior for different backends.

I believe that from the user's point of view only 4 use cases exist: write and 
read, each with consistency or not.
For example, if we use QUORUM for a write, what is the use case for reading with 
STRONG? A QUORUM read is enough to get consistent data. Or if we use WEAK (ONE) 
for a consistent write, what is the use case for reading from QUORUM? We would 
need to read from ALL.

But we can use different kinds of backend abilities to implement consistent and 
inconsistent operations. To provide the best flexibility for backend-specific 
features, I propose to use a backend-specific configuration section in the table 
schema. In this case you can get much more than in the initial concept. For 
example, specify consistency level ANY instead of ONE for WEAK consistency if you 
want to concentrate on performance, or TWO if you want more fault-tolerant 
behavior.
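For illustration, such a backend-specific section could look roughly like this (a hypothetical sketch, not an agreed format):

# Hypothetical table schema with a backend-specific consistency section,
# as sketched in this proposal; all field names here are illustrative.
table_schema = {
    "table_name": "my_table",
    "attribute_definitions": [{"name": "id", "type": "S"}],
    "key_schema": ["id"],
    "backend_specific": {
        "cassandra": {
            # Per-table write level; reads would still pass a per-request
            # consistent=True/False flag.
            "write_consistency": "QUORUM",
            "weak_read_consistency": "ONE",
        }
    },
}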

With my proposal we will have only one limitation in comparison with the first 
proposal: we have maximally flexible consistency, but per table, not per request. 
We have only 2 choices to specify consistency per request (true or false). But I 
believe that is enough to cover user use cases.



On Tue, Apr 29, 2014 at 6:16 AM, Illia Khudoshyn 
ikhudos...@mirantis.com wrote:
Hi all,

Dima, I think I understand your reasoning, but I have some issues with it. I 
agree that binary logic is much more straightforward and easier to understand and 
use. But following that logic, having only one hardcoded consistency level 
is even easier and more understandable.
As I see it, the idea of the proposal is to give the user more fine-grained 
control over consistency to leverage backend features AND at the same time not to 
bind ourselves to only this concrete backend's features. In the scope of 
Maksym's proposal, the choice between WEAK/QUORUM is for me pretty much the same 
as your FALSE/TRUE. But I'd prefer to have more.

PS Eager to see your new index design


On Tue, Apr 29, 2014 at 7:44 AM, Dmitriy Ukhlov 
dukh...@mirantis.com wrote:

Hello Maksym,

Thank you for your work!

I 

Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-30 Thread Charles Wang
Discussed further with Dima. Our consensus is to have the WRITE consistency level 
defined in the table schema, and READ consistency controlled at the data item 
level. This should satisfy our use cases for now.

For example, a user-defined table has Eventual Consistency (Quorum). After the 
user writes data using the consistency level defined in the table schema, when 
the user reads the data back asking for Strong consistency, MagnetoDB can do an 
Eventual Consistency (Quorum) READ to satisfy the user's Strong consistency 
requirement.
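A rough sketch of how the read level could be resolved, assuming a Cassandra backend and the quorum rule R + W > N (the helper function and names are illustrative, not MagnetoDB code):

from cassandra import ConsistencyLevel

def resolve_read_level(table_write_level, requested_consistent):
    """Pick a Cassandra read level for a per-request consistency flag,
    given the per-table write level (illustrative helper only)."""
    if not requested_consistent:
        # Eventual/weak reads can go to a single replica.
        return ConsistencyLevel.ONE
    if table_write_level == ConsistencyLevel.QUORUM:
        # QUORUM writes + QUORUM reads give R + W > N, so a quorum read
        # already satisfies a "strong" read request.
        return ConsistencyLevel.QUORUM
    # With weaker write levels (ONE/ANY), a strong read needs all replicas.
    return ConsistencyLevel.ALL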

Thanks,

Charles
