Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-05-21 Thread Ilya Sviridov
Team, I believe this is quite a complex task and we have to spend more time on
the concept.
So, I've postponed it to the next 3.0 series; that is a month from now, and we
can keep the focus on stabilizing the current version.

Let us return to this discussion later.

Thanks,
Ilya Sviridov
isviridov @ FreeNode



Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-05-05 Thread Illia Khudoshyn
I can't say for others, but I'm personally not really happy with Charles &
Dima's approach. As Charles pointed out (or hinted), QUORUM during a write may
amount to either EVENTUAL or STRONG, depending on the consistency level chosen
for the later read. The same goes for QUORUM on read. I'm afraid that this way
MDB will become far too complex, and it would take more effort to predict its
behaviour from the user's point of view.
I'd prefer it to be as straightforward as possible -- either take full control
and responsibility, or follow reasonable defaults.

And please note, we're aiming for multi-DC support sooner or later. For that
we'll need more flexible consistency control, so a binary option would not be
enough.

Thanks



Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-30 Thread Charles Wang
Discussed further with Dima. Our consensus is to have the WRITE consistency
level defined in the table schema, and READ consistency controlled at the data
item level. This should satisfy our use cases for now.

For example, suppose a user-defined table uses Eventual Consistency (Quorum).
After the user writes data with the consistency level defined in the table
schema and then tries to read the data back asking for Strong consistency,
MagnetoDB can do an Eventual Consistency (Quorum) READ to satisfy the user's
Strong consistency requirement.
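A minimal sketch of what this split could look like on the wire, purely for
illustration -- the field names below ("write_consistency",
"consistency_level") are assumptions, not the agreed MagnetoDB API:

# Table schema carries the WRITE consistency level once, at creation time.
create_table_request = {
    "table_name": "Thread",
    "attribute_definitions": [
        {"attribute_name": "ForumName", "attribute_type": "S"}
    ],
    "key_schema": [{"attribute_name": "ForumName", "key_type": "HASH"}],
    "write_consistency": "QUORUM"    # eventual consistency for every write
}

# READ consistency is chosen per data item request; a STRONG read can still be
# served by a QUORUM read, because a QUORUM write and a QUORUM read overlap.
get_item_request = {
    "key": {"ForumName": {"S": "MagnetoDB"}},
    "consistency_level": "STRONG"
}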

Thanks,

Charles


Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-30 Thread Charles Wang
Sorry for being late to the party. Since we mostly follow DynamoDB, it makes
sense not to deviate too much from DynamoDB’s consistency model.

From what I read about DynamoDB, READ consistency is defined to be either
strong consistency or eventual consistency.

  
"ConsistentRead<http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#DDB-Query-request-ConsistentRead>":
 "boolean”,

ConsistentRead<http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#API_Query_RequestSyntax>

If set to true, then the operation uses strongly consistent reads; otherwise, 
eventually consistent reads are used.

Strongly consistent reads are not supported on global secondary indexes. If you 
query a global secondary index with ConsistentRead set to true, you will 
receive an error message.

Type: Boolean

Required: No

http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html

WRITE consistency is not clearly defined anywhere. From Werner Vogels’
description, it seems that writes are replicated across availability
zones/data centers synchronously. I guess that inside a data center, writes are
replicated asynchronously. And the API doesn’t allow the user to specify a
WRITE consistency level.

http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html

Considering the above factors and Cassandra’s capabilities, I propose we use
the following model.

READ:

 *   Strong consistency (synchronously replicate to all, maps to Cassandra READ
ALL consistency level)
 *   Eventual consistency (quorum read, maps to Cassandra READ QUORUM)
 *   Weak consistency (not in DynamoDB, maps to Cassandra READ ONE)

WRITE:

 *   Strong consistency (synchronously replicate to all, maps to Cassandra 
WRITE All consistency level)
 *   Eventual consistency (quorum write, maps to Cassandra WRITE Quorum)
 *   Weak consistency (not in DynamoDB, maps to Cassandra WRITE ANY)

For conditional writes (conditional putItem/deleteItem), only strong and
eventual consistency should be supported.
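A rough sketch of how this three-level model could map onto the backend,
assuming the DataStax Python driver's ConsistencyLevel constants; the function
and dictionary names are illustrative, not the final MagnetoDB API:

from cassandra import ConsistencyLevel

# Proposed MagnetoDB level -> Cassandra level, per operation type.
READ_CL = {
    "STRONG":   ConsistencyLevel.ALL,     # read waits for all replicas
    "EVENTUAL": ConsistencyLevel.QUORUM,  # quorum read
    "WEAK":     ConsistencyLevel.ONE,     # not in DynamoDB
}
WRITE_CL = {
    "STRONG":   ConsistencyLevel.ALL,     # synchronously replicate to all
    "EVENTUAL": ConsistencyLevel.QUORUM,  # quorum write
    "WEAK":     ConsistencyLevel.ANY,     # not in DynamoDB
}

def cassandra_consistency(operation, level, conditional=False):
    """Resolve the Cassandra consistency level for a MagnetoDB request."""
    if conditional and level == "WEAK":
        # Conditional putItem/deleteItem need the current state of the row,
        # so only strong and eventual consistency make sense here.
        raise ValueError("WEAK is not supported for conditional writes")
    return (READ_CL if operation == "read" else WRITE_CL)[level]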

Thoughts?

Thanks,

Charles


Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-29 Thread Dmitriy Ukhlov
Hi Illia,
WEAK/QUORUM instead of true/false is OK with me.

But we also have STRONG.

What does STRONG mean? In the current concept we are using QUORUM and saying
that it is strong. I find that confusing (at least for me), and it can have
different behavior for different backends.

I believe that from the user's point of view only 4 use cases exist: write and
read, each with or without consistency.
For example, if we use QUORUM for the write, what is the use case for reading
with STRONG? A QUORUM read is enough to get consistent data. Or, if we use
WEAK (ONE) for the write, what is the use case for reading with QUORUM? To
read consistently we would need to read from ALL.

But we can use different backend abilities to implement consistent and
inconsistent operations. To provide the best flexibility for backend-specific
features, I propose using a backend-specific configuration section in the
table schema. In this case you can get much more than in the initial concept:
for example, specify consistency level ANY instead of ONE for WEAK consistency
if you want to concentrate on performance, or TWO if you want more
fault-tolerant behavior.

With my proposal we have only one limitation in comparison with the first
proposal: consistency is maximally flexible, but per table, not per request.
We have only two choices for specifying consistency per request (true or
false). But I believe that this is enough to cover user use cases.
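A minimal sketch of how a per-table mapping in such a backend-specific section
plus the per-request true/false flag could be resolved to a concrete level;
the key names and the example mapping are assumptions, not an agreed format:

# Hypothetical mapping stored in the table schema's backend-specific section;
# the level names follow Cassandra.
table_consistency_map = {
    "consistent_read":    "QUORUM",
    "inconsistent_read":  "ONE",
    "consistent_write":   "QUORUM",
    "inconsistent_write": "ANY",   # could equally be ONE or TWO, per table
}

def resolve_level(operation, consistent, mapping=table_consistency_map):
    """Pick the backend consistency level from the boolean request flag."""
    key = ("consistent_" if consistent else "inconsistent_") + operation
    return mapping[key]

# e.g. a PutItem request with "consistent": false resolves to ANY here
assert resolve_level("write", consistent=False) == "ANY"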




Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-29 Thread Illia Khudoshyn
Hi all,

Dima, I think I understand your reasoning, but I have some issues with it. I
agree that binary logic is much more straightforward and easier to understand
and use. But following that logic, having a single hardcoded consistency level
would be even easier and more understandable.
As I see it, the idea of the proposal is to give the user more fine-grained
control over consistency to leverage backend features AND, at the same time,
not to bind ourselves to only this concrete backend's features. In the scope
of Maksym's proposal, the choice between WEAK/QUORUM is, for me, pretty much
the same as your FALSE/TRUE. But I'd prefer to have more.

PS Eager to see your new index design.




-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com <http://www.mirantis.ru/>

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-28 Thread Dmitriy Ukhlov
Hello Maksym,

Thank you for your work!

I suggest you consider a more general approach and hide backend-specific
stuff. I have the following proposal (see the sketch below):
1) add support for inconsistent write operations by adding a "consistent" =
True or False request parameter to PutItem, UpdateItem and DeleteItem
(as well as to GetItem and Query requests);
2) add the possibility to set backend-specific metadata (it would be nice to
use some generic format like JSON) per table, in the scope of the create table
request. I suggest specifying a mapping to a Cassandra consistency level per
operation type (consistent read, inconsistent read, consistent write,
inconsistent write).

I agree that for now we have a limitation for inconsistent write operations on
tables with indexed fields and for requests with specified expected
conditions. I have thought about how to overcome this limitation, and it seems
that I have found a solution for index handling without CAS operations. And
maybe it is reasonable to redesign it a bit.
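A rough sketch of what a create table request with such a backend-specific
consistency mapping, plus the per-request "consistent" flag, might look like;
the "backend_metadata" section and all field names are illustrative
assumptions, not an agreed format:

# Hypothetical create_table payload with a backend-specific section.
create_table_request = {
    "table_name": "Thread",
    "attribute_definitions": [
        {"attribute_name": "ForumName", "attribute_type": "S"},
        {"attribute_name": "Subject", "attribute_type": "S"}
    ],
    "key_schema": [
        {"attribute_name": "ForumName", "key_type": "HASH"},
        {"attribute_name": "Subject", "key_type": "RANGE"}
    ],
    "backend_metadata": {
        "cassandra": {
            "consistent_read":    "QUORUM",
            "inconsistent_read":  "ONE",
            "consistent_write":   "QUORUM",
            "inconsistent_write": "ANY"
        }
    }
}

# Hypothetical put_item payload using the proposed boolean flag; the backend
# resolves it through the table's mapping above.
put_item_request = {
    "item": {"ForumName": {"S": "MagnetoDB"},
             "Subject": {"S": "Configuring consistency"}},
    "consistent": False
}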



-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-28 Thread MAKSYM IARMAK (CS)
Hi,

Because we can't use inconsistent writes if we use an indexed table and the
conditional operations that indexes are based on (this stuff requires the
current state of the data), we have one more issue.

If we want to make a write with consistency level ONE (WEAK) to an indexed
table, we have 2 variants (a sketch of the second one follows below):
1. Carry out the operation successfully and implicitly perform the write to
the indexed table with the minimal consistency level possible for it (QUORUM);
2. Raise an exception saying that we cannot perform this operation, and list
all possible CLs for this operation.

I personally prefer the 2nd variant. So, does anybody have any objections or
maybe other ideas?
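A minimal sketch of the second variant, assuming a hypothetical validation
step in the request path; the names and the exception type are illustrative
only, not actual MagnetoDB code:

# Hypothetical pre-write check for writes against indexed tables.
ALLOWED_WRITE_CLS_FOR_INDEXED_TABLES = ("QUORUM", "STRONG")

class ValidationError(Exception):
    pass

def validate_write_consistency(table_schema, requested_cl):
    """Reject write consistency levels that indexed tables cannot support."""
    if table_schema.get("indexed_fields") and \
            requested_cl not in ALLOWED_WRITE_CLS_FOR_INDEXED_TABLES:
        raise ValidationError(
            "Consistency level %s is not supported for writes to indexed "
            "tables; possible levels are: %s"
            % (requested_cl, ", ".join(ALLOWED_WRITE_CLS_FOR_INDEXED_TABLES)))

# A WEAK write to an indexed table raises and reports the allowed CLs:
# validate_write_consistency({"indexed_fields": ["Tags"]}, "WEAK")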


From: MAKSYM IARMAK (CS) [maksym_iar...@symantec.com]
Sent: Friday, April 25, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

>So, here is specification draft of concept.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-25 Thread MAKSYM IARMAK (CS)
Hi openstackers,

In order to implement
https://blueprints.launchpad.net/magnetodb/+spec/support-tuneable-consistency
we need tunable consistency support in MagnetoDB, which is described here:
https://blueprints.launchpad.net/magnetodb/+spec/configurable-consistency

So, here is specification draft of concept.

1. First of all, there is a list of suggested consistency levels for MagnetoDB:

 *   STRONG - Provides the highest consistency and the lowest availability of
any level. (A write must be written to the commit log and memory table on
all replica nodes in the cluster for that row. Read returns the record with the 
most recent timestamp after all replicas have responded.)
 *   WEAK - Provides low latency. Delivers the lowest consistency and highest 
availability compared to other levels. (A write must be written to the commit 
log and memory table of at least one replica node. Read returns a response from 
at least one replica node)
 *   QUORUM - Provides strong consistency if you can tolerate some level of 
failure. (A write must be written to the commit log and memory table on a 
quorum of replica nodes. Read returns the record with the most recent timestamp 
after a quorum of replicas has responded regardless of data center.)

And special Multi Data Center consistency levels:

 *   MDC_EACH_QUORUM - Used in multiple data center clusters to strictly 
maintain consistency at the same level in each data center. (A write must be 
written to the commit log and memory table on a quorum of replica nodes in all 
data centers. Read returns the record with the most recent timestamp once a 
quorum of replicas in each data center of the cluster has responded.)
 *   MDC_LOCAL_QUORUM - Used in multiple data center clusters to maintain 
consistency in local (current) data center. (A write must be written to the 
commit log and memory table on a quorum of replica nodes in the same data 
center as the coordinator node. Read returns the record with the most recent 
timestamp once a quorum of replicas in the same data center as the
coordinator node has responded. Avoids latency of inter-data center
communication.)

BUT: We can't use inconsistent writes if we use an indexed table and the
conditional operations that indexes are based on, because this stuff requires
the state of the data. So it seems that we can:
1) tune consistent read/write operations in the following combinations:
QUORUM/QUORUM, MDC_LOCAL_QUORUM/MDC_EACH_QUORUM,
MDC_EACH_QUORUM/MDC_LOCAL_QUORUM, STRONG/WEAK (see the sketch below).
We also have an inconsistent read operation with CL=WEAK;
2) if we really need inconsistent writes, we can allow them for tables without
indexes. In this case we provide more flexibility and room for optimization,
but on the other hand we make MagnetoDB more complicated.
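A small sketch of how these consistent pairings could be expressed and
checked; the set literal simply restates the combinations above (the MDC_*
levels are assumed to map to Cassandra's EACH_QUORUM/LOCAL_QUORUM), and the
helper name is illustrative:

# Pairings from the draft that together still yield consistent reads.
CONSISTENT_COMBINATIONS = {
    ("QUORUM", "QUORUM"),
    ("MDC_LOCAL_QUORUM", "MDC_EACH_QUORUM"),
    ("MDC_EACH_QUORUM", "MDC_LOCAL_QUORUM"),
    ("STRONG", "WEAK"),
}

def is_consistent_pair(level_a, level_b):
    """True if the two levels, used together, still give consistent reads."""
    return (level_a, level_b) in CONSISTENT_COMBINATIONS or \
           (level_b, level_a) in CONSISTENT_COMBINATIONS

print(is_consistent_pair("QUORUM", "QUORUM"))  # True
print(is_consistent_pair("WEAK", "WEAK"))      # False: both ends are weak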



2. JSON request examples.

I suggest adding a new 'consistency_level' attribute. We should check the
corresponding naming in the backend API, because it can be a little different
there.



For a read operation we will use, for example, a get item request:

{
    "key": {
        "ForumName": {
            "S": "MagnetoDB"
        },
        "Subject": {
            "S": "What about configurable consistency support?"
        }
    },
    "attributes_to_get": ["LastPostDateTime", "Message", "Tags"],
    "consistency_level": "STRONG"
}

Here we use consistency level STRONG, which means that the response returns
the record with the most recent timestamp after all replicas have responded.
In this case we have the highest consistency but the lowest availability of
any level.

For a write operation we will use, for example, a put item request:

{
    "item": {
        "LastPostDateTime": {
            "S": "201303190422"
        },
        "Tags": {
            "SS": ["Update", "Multiple items", "HelpMe"]
        },
        "ForumName": {
            "S": "Amazon DynamoDB"
        },
        "Message": {
            "S": "I want to update multiple items."
        },
        "Subject": {
            "S": "How do I update multiple items?"
        },
        "LastPostedBy": {
            "S": "f...@example.com"
        }
    },
    "expected": {
        "ForumName": {
            "exists": false
        },
        "Subject": {
            "exists": false
        }
    },
    "consistency_level": "WEAK"
}

Here we use consistency level WEAK, which means that the write will be written
to the commit log and memory table of at least one replica node. In th