[jira] [Comment Edited] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-28 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067734#comment-16067734
 ] 

Krishna Dattu Koneru edited comment on CASSANDRA-13547 at 6/29/17 5:31 AM:
---

Thanks [~jasonstack] ! 

I will try to rework {{1. Missing Update}} to address your comment.

About 
 {quote}
Using the greater timestamp from view's columns(pk+non-pk) in base row will 
later shadow entire row in view if there is a normal column in base as primary 
key in view.
{quote}

This looks like a nasty problem, but this patch does not cause it. It is 
existing behaviour: any update to the view's pk columns marks the old view row 
with a tombstone (at the highest timestamp of all columns in the base row) and 
creates a new row with the updated pk. 

EDIT: I just saw that https://issues.apache.org/jira/browse/CASSANDRA-11500 
exists because of this exact problem.
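
The existing behaviour described above can be sketched as follows (illustrative 
Python; the function and names are mine, not Cassandra's):

```python
# Illustrative sketch (not Cassandra code): when a base update changes a
# column that is a primary-key column in the view, the view receives a
# tombstone for the old view row plus an insert of the new view row.
def view_mutations(base_cell_timestamps, old_view_key, new_view_key):
    # The old row is deleted at the *highest* timestamp of all columns
    # in the base row, not at the timestamp of the update itself.
    delete_ts = max(base_cell_timestamps.values())
    return [("delete", old_view_key, delete_ts), ("insert", new_view_key)]

# UPDATE ... USING TIMESTAMP 1 SET b = 0, while c was previously written at ts=5:
muts = view_mutations({"b": 1, "c": 5, "d": 0}, old_view_key=1, new_view_key=0)
assert muts[0] == ("delete", 1, 5)  # the deletion carries ts=5, not ts=1
```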

See view row timestamps in the below example : 

Existing behaviour without patch:

{code}
INSERT INTO test (a, b, c, d) VALUES (1, 1, 1, 1) using timestamp 0;

test  : [1]@0 Row[info=[ts=0] ]:  | [b=1 ts=0], [c=1 ts=0], [d=1 ts=0]
mv_test1  : [1]@0 Row[info=[ts=0] ]: 1 | [c=1 ts=0], [d=1 ts=0]
{code}

{code}
UPDATE test using timestamp 5 set c = 0 WHERE a=1;

test  : [1]@0 Row[info=[ts=0] ]:  | [b=1 ts=0], [c=0 ts=5], [d=1 ts=0]
mv_test1  : [1]@0 Row[info=[ts=0] ]: 1 | [c=0 ts=5], [d=1 ts=0]
{code}

{code}
UPDATE test using timestamp 1 set b = 0 WHERE a=1;

test  : [1]@0 Row[info=[ts=0] ]:  | [b=0 ts=1], [c=0 ts=5], [d=1 ts=0]
mv_test1  :
[1]@0 Row[info=[ts=1] ]: 0 | [c=0 ts=5], [d=1 ts=0]
[1]@39 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498707556(shadowable) ]: 1 |
{code}
{code}

UPDATE test using timestamp 2 set b = 1 WHERE a=1;

table : [1]@0 Row[info=[ts=0] ]:  | [b=1 ts=2], [c=0 ts=5], [d=1 ts=0]

View (before compaction)
[1]@0 Row[info=[ts=1] ]: 0 | [c=0 ts=5], [d=1 ts=0]
[1]@39 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498707556(shadowable) ]: 1 |
[1]@0 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498707897(shadowable) ]: 0 |
[1]@30 Row[info=[ts=2] ]: 1 | [c=0 ts=5], [d=1 ts=0]

View (after compaction)
[1]@0 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498707897(shadowable) ]: 0 |
[1]@31 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498707556(shadowable) ]: 1 |

{code}


With this patch :

{code}

INSERT INTO test (a, b, c, d) VALUES (1, 1, 1, 1) using timestamp 0;

table : [1]@0 Row[info=[ts=0] ]:  | [b=1 ts=0], [c=1 ts=0], [d=1 ts=0]
view  : [1]@0 Row[info=[ts=0] ]: 1 | [c=1 ts=0], [d=1 ts=0]
{code}
{code}
UPDATE test using timestamp 5 set c = 0 WHERE a=1;
UPDATE test using timestamp 1 set b = 0 WHERE a=1; 

table : [1]@0 Row[info=[ts=0] ]:  | [b=0 ts=1], [c=0 ts=5], [d=1 ts=0]
view  :
[1]@0 Row[info=[ts=5] ]: 0 | [c=0 ts=5], [d=1 ts=0] /* row ts=5 because 
of this patch */
[1]@38 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498708886(shadowable) ]: 1 |
{code}
{code}
UPDATE test using timestamp 2 set b = 1 WHERE a=1;

table : [1]@0 Row[info=[ts=0] ]:  | [b=1 ts=2], [c=0 ts=5], [d=1 ts=0]

View (before compaction)
[1]@0 Row[info=[ts=5] ]: 0 | [c=0 ts=5], [d=1 ts=0] /* row ts=5 because of this 
patch */
[1]@38 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498708886(shadowable) ]: 1 |
[1]@0 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498709103(shadowable) ]: 0 |
[1]@30 Row[info=[ts=5] ]: 1 | [c=0 ts=5], [d=1 ts=0] /*-- row ts=5 because of 
this patch */

View (after compaction)
[1]@0 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498709103(shadowable) ]: 0 |
[1]@31 Row[info=[ts=-9223372036854775808] del=deletedAt=5, 
localDeletion=1498708886(shadowable) ]: 1 |

{code}

I am not sure yet how to fix this issue, given that when a live row and a 
tombstone have the same timestamp, the tombstone wins.
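
A minimal sketch of that tie-break rule (illustrative Python, not Cassandra 
code): a deletion shadows a live row whenever its timestamp is greater than or 
equal to the row's, so re-inserting the view row at the same ts=5 as the 
shadowable deletion is not enough.

```python
# Illustrative sketch of reconciliation between a live row and a deletion:
# on a timestamp tie, the tombstone wins.
def row_is_shadowed(row_ts, deletion_ts):
    return deletion_ts >= row_ts

assert row_is_shadowed(row_ts=5, deletion_ts=5)      # tie: tombstone wins
assert not row_is_shadowed(row_ts=6, deletion_ts=5)  # strictly newer write survives
```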

Another problem is that all view deletes are marked as shadowable, but that is 
a separate issue and I believe it is being fixed in 
https://issues.apache.org/jira/browse/CASSANDRA-13409 .


[jira] [Commented] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-06-28 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067751#comment-16067751
 ] 

Jeff Jirsa commented on CASSANDRA-13615:


So I'm unable to build it on my hardware, but it looks like Fedora ships some 
pre-compiled objects.

Any objection [~amitkumar_ghatwal] / [~ReiOdaira] / [~mshuler] to grabbing the 
object from 

http://rpmfind.net/linux/RPM/fedora/devel/rawhide/ppc64le/s/sigar-java-1.6.5-0.18.git58097d9.fc26.ppc64le.html

Built in February 2017, providing: 

{code}
MD5 (./usr/lib64/sigar/libsigar-ppc64le-linux.so) = 
a08b16e51463c55bc5d0c4b2f6119904
{code}

Or do we need to build against 1.6.4? I'm not familiar with sigar at all, and I 
feel very much out of my element here.



> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the upstream project has been inactive for a 
> long time (https://github.com/hyperic/sigar), we request that the community 
> include the ppc64le library directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues/dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement

2017-06-28 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067738#comment-16067738
 ] 

ZhaoYang commented on CASSANDRA-13592:
--

Thanks for reviewing. I will update the 2.0/3.0/3.11 branches when the trunk CI 
finishes.

"list, set, map or UDT types" are not allowed in the key, so ideally they 
shouldn't cause issues. I think only the key buffer will be reused, e.g. for 
the paging state and subsequent row deserialization. 
I have included them in the test.

One more issue is in ToJsonFct.

{code}
public ByteBuffer execute(ProtocolVersion protocolVersion, List<ByteBuffer> parameters) throws InvalidRequestException
{
    assert parameters.size() == 1 : "Expected 1 argument for toJson(), but got " + parameters.size();

    ByteBuffer parameter = parameters.get(0);
    if (parameter == null)
        return ByteBufferUtil.bytes("null");

    // same..
    return ByteBufferUtil.bytes(argTypes.get(0).toJSONString(parameter.duplicate(), protocolVersion));
}
{code}
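
The key point in the snippet is {{parameter.duplicate()}}: Java's 
{{ByteBuffer.duplicate()}} shares the bytes but gives an independent position, 
so decoding the value for JSON does not drain a buffer that is reused elsewhere 
(e.g. the key buffer mentioned above). A toy Python analog, with {{io.BytesIO}} 
standing in for the buffer:

```python
import io

# Toy analog of Java's ByteBuffer.duplicate(): reading from a shared buffer
# advances its position, so later consumers (paging state, subsequent row
# deserialization) would otherwise see a drained buffer.
shared = io.BytesIO(b"key-bytes")

def to_json_consuming(buf):
    return buf.read().decode()           # drains the shared buffer

def to_json_duplicating(buf):
    dup = io.BytesIO(buf.getvalue())     # same bytes, independent position
    return dup.read().decode()

assert to_json_duplicating(shared) == "key-bytes"
assert shared.tell() == 0                # original position untouched
assert to_json_consuming(shared) == "key-bytes"
assert shared.tell() == len(b"key-bytes")  # now drained
```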

> Null Pointer exception at SELECT JSON statement
> ---
>
> Key: CASSANDRA-13592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13592
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Debian Linux
>Reporter: Wyss Philipp
>Assignee: ZhaoYang
>  Labels: beginner
> Attachments: system.log
>
>
> A null pointer exception appears when running the command
> {code}
> SELECT JSON * FROM examples.basic;
> ---MORE---
>  message="java.lang.NullPointerException">
> Examples.basic has the following description (DESC examples.basic;):
> CREATE TABLE examples.basic (
> key frozen> PRIMARY KEY,
> wert text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> The error appears after the ---MORE--- line.
> The field "wert" has a JSON formatted string.






[jira] [Commented] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-28 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067734#comment-16067734
 ] 

Krishna Dattu Koneru commented on CASSANDRA-13547:
--

Another problem is that these view tombstones should not be marked as 
shadowable. But that is a separate issue and I believe it is being fixed in 
https://issues.apache.org/jira/browse/CASSANDRA-13409 .

> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>

[jira] [Commented] (CASSANDRA-13645) Optimize the number of replicas required in Quorum read/write

2017-06-28 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067692#comment-16067692
 ] 

Brandon Williams commented on CASSANDRA-13645:
--

bq. Short of that, what if we just finally make CL pluggable and let users 
define their own, so we don't have to bikeshed?

That's kind of what I meant by CL.INTEGER: it's as pluggable as you want if you 
are a power user and know exactly what you're doing.

> Optimize the number of replicas required in Quorum read/write
> -
>
> Key: CASSANDRA-13645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13645
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 4.x
>
>
> Currently, for C* read/write requests at the QUORUM consistency level, the 
> number of replicas required for a quorum write is W = N/2 + 1, and the number 
> of replicas required for a quorum read is R = N/2 + 1 as well. 
> This works fine with an odd number of replicas, where R + W = N + 1, but with 
> an even number of replicas (RF = 4, 6, 8), R + W = N + 2, which means the 
> read and write sets overlap in two nodes, which is not necessary. The extra 
> overlap does not provide stronger consistency, but hurts P99 read latency a 
> lot (2x in our production cluster).
> A lot of other databases, like Amazon Aurora, use W = N/2 + 1 and R = N/2 for 
> quorum requests, which still provides strong consistency but talks to one 
> fewer replica in the read path: "We use a quorum model with 6 votes (V = 6), 
> a write quorum of 4/6 (Vw = 4), and a read quorum of 3/6 (Vr = 3)."
> I propose we do the same optimization: change quorum reads to talk to N/2 
> replicas, which should reduce read latency for quorum reads in general.
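
The arithmetic in the description can be sketched as follows (illustrative 
Python; `min_read_quorum` is my name for the smallest R with R + W > N):

```python
# Illustrative sketch of the quorum sizes discussed in the ticket.
def write_quorum(n):
    return n // 2 + 1                 # W = N/2 + 1 in both schemes

def min_read_quorum(n):
    return n - write_quorum(n) + 1    # smallest R with R + W > N

for rf in (3, 4, 5, 6, 7, 8):
    w, r = write_quorum(rf), min_read_quorum(rf)
    assert r + w == rf + 1            # overlap of exactly one replica
    if rf % 2 == 0:
        assert r == rf // 2           # even RF: one fewer read replica
    else:
        assert r == write_quorum(rf)  # odd RF: same as today's QUORUM

# Aurora's numbers from the quote: V = 6, Vw = 4, Vr = 3
assert write_quorum(6) == 4 and min_read_quorum(6) == 3
```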






[jira] [Commented] (CASSANDRA-13645) Optimize the number of replicas required in Quorum read/write

2017-06-28 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067684#comment-16067684
 ] 

Jeff Jirsa commented on CASSANDRA-13645:


I like {{CL.HALF}}, with half rounding up, which gives us the desired behavior 
for even numbers of replicas, and it also lets you take the extra replica on 
either read or write (write HALF / read QUORUM, or write QUORUM / read HALF). 

Short of that, what if we just finally make CL pluggable and let users define 
their own, so we don't have to bikeshed?









[jira] [Updated] (CASSANDRA-12484) Unknown exception caught while attempting to update MaterializedView! findkita.kitas java.lang.AssertionErro

2017-06-28 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-12484:
-
Component/s: Materialized Views

> Unknown exception caught while attempting to update MaterializedView! 
> findkita.kitas java.lang.AssertionErro
> 
>
> Key: CASSANDRA-12484
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12484
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Docker Container with Cassandra version 3.7 running on 
> local pc
>Reporter: cordlessWool
>Priority: Critical
>
> After a restart my Cassandra node does not start anymore. It ends with the 
> following error message.
> ERROR 18:39:37 Unknown exception caught while attempting to update 
> MaterializedView! findkita.kitas
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> Cassandra has heavy CPU usage and uses 2.1 GB of memory, though 1 GB more is 
> available. I ran nodetool cleanup and repair, but it did not help.
> I have 5 materialized views on this table, but the table has fewer than 2000 
> rows, which is not much.
> Cassandra runs in a Docker container. The container is accessible, but I 
> cannot call cqlsh and my website could not connect either.






[jira] [Comment Edited] (CASSANDRA-13645) Optimize the number of replicas required in Quorum read/write

2017-06-28 Thread Justin Cameron (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067559#comment-16067559
 ] 

Justin Cameron edited comment on CASSANDRA-13645 at 6/29/17 1:29 AM:
-

Would a CL.HALF be sufficient to meet the requirements here (rather than 
CL.INTEGER)? 

RF3 - CL.HALF (rounding up) write + CL.QUORUM read > 3
RF4 - CL.HALF write + CL.QUORUM read > 4
RF5 - CL.HALF (rounding up) write + CL.QUORUM read > 5
RF6 - CL.HALF + CL.QUORUM read > 6
...etc

Might be a little more opaque due to rounding for odd RFs, but it would 
probably be simpler to implement than a CL.INTEGER.
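
A quick check of the combinations listed above (illustrative Python): HALF 
rounds up, QUORUM is floor(RF/2) + 1, and every write-HALF / read-QUORUM 
pairing still overlaps in at least one replica.

```python
import math

# Illustrative check of the CL.HALF idea: HALF = ceil(RF/2), QUORUM = RF//2 + 1.
def half(rf):
    return math.ceil(rf / 2)

def quorum(rf):
    return rf // 2 + 1

for rf in (3, 4, 5, 6):
    # write HALF + read QUORUM touches RF + 1 replicas in total,
    # so the two sets always overlap in at least one replica.
    assert half(rf) + quorum(rf) == rf + 1
    # for even RF, HALF touches one fewer replica than QUORUM
    assert quorum(rf) - half(rf) == (1 if rf % 2 == 0 else 0)
```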












[jira] [Updated] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-06-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13629:

Status: Open  (was: Patch Available)

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before changing the new bootstrapping node to "UN" state. We should wait for 
> batchlog replay before making the node available.






[jira] [Updated] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-06-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13629:

Status: Awaiting Feedback  (was: Open)

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before moving the new bootstrapping node to the "UN" state. We should wait for 
> batchlog replay before making the node available.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-06-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13614:

Status: Open  (was: Patch Available)

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As noted in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which defaults to (only) 1024 KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically via 
> JMX and possibly nodetool.
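As a hedged illustration of why a runtime setter helps, here is a small rate limiter whose throughput cap can be changed on the fly, the way a JMX attribute setter would adjust {{batchlog_replay_throttle_in_kb}} without a restart. The class and method names are invented for this sketch; this is not Cassandra's BatchlogManager API.

```python
import threading

class ReplayThrottle:
    """Illustrative throttle whose rate (KB/s) can be changed at runtime,
    as a JMX setter would. Not Cassandra's implementation."""

    def __init__(self, rate_kb: int):
        self._rate_kb = rate_kb
        self._lock = threading.Lock()

    def set_rate_kb(self, rate_kb: int) -> None:   # the "JMX setter"
        with self._lock:
            self._rate_kb = rate_kb

    def get_rate_kb(self) -> int:                  # the "JMX getter"
        with self._lock:
            return self._rate_kb

    def delay_for(self, kb: int) -> float:
        """Seconds to pause after replaying `kb` of batchlog data."""
        rate = self.get_rate_kb()
        return 0.0 if rate <= 0 else kb / rate

throttle = ReplayThrottle(1024)          # default: 1024 KB/s
print(throttle.delay_for(2048))          # 2.0 s at the default rate
throttle.set_rate_kb(8192)               # raised dynamically, no restart
print(throttle.delay_for(2048))          # 0.25 s afterwards
```

A real implementation would hook the getter/setter into an MBean so nodetool can reach it; the point is only that the rate is read on every use rather than fixed at startup.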



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-06-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13614:

Status: Awaiting Feedback  (was: Open)

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As noted in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which defaults to (only) 1024 KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically via 
> JMX and possibly nodetool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-06-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067550#comment-16067550
 ] 

Paulo Motta commented on CASSANDRA-13629:
-

bq. It triggers a blocking replay of the batchlog right before starting the 
native transport service. I'm not sure if this is the best place/moment 
to do so. 

I think the concern on CASSANDRA-13162 was that a bootstrapping node becomes 
available before its MVs have finished building. So we should probably 
wait for batchlog replay only after bootstrap is finished and [before 
marking the MV as 
built|https://github.com/adelapena/cassandra/blob/771c4e1a3762bcc19bdfcd25cb25a01104515a1e/src/java/org/apache/cassandra/service/StorageService.java#L1494].
 Given this, the dtests should probably be updated to test the bootstrap 
scenario.

On normal node startup we probably shouldn't wait for the batchlog to be 
replayed, since the data was already replicated to the required number of 
nodes via the write CL.

bq. Also, the possible exceptions in the initial batchlog replay are passed to 
the JVM stability inspector, maybe we should simply stop the JVM in case of a 
failure in the initial batchlog replay. 

In the bootstrap case we should probably fail the bootstrap if there is an 
error during batchlog replay.
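The ordering suggested above can be sketched as follows. The method names are illustrative stand-ins (Cassandra's actual logic lives in StorageService), and the `FakeNode` exists only so the sketch runs: stream first, block on batchlog replay, and only then mark views built and announce the node; a replay failure fails the bootstrap.

```python
# Hypothetical method names; a sketch of the suggested ordering,
# not Cassandra's actual StorageService code.
class FakeNode:
    def __init__(self):
        self.calls = []
    def stream_data(self):              self.calls.append("stream")
    def replay_batchlog_blocking(self): self.calls.append("replay")
    def mark_views_built(self):         self.calls.append("built")
    def announce_available(self):       self.calls.append("announce")

def bootstrap(node):
    node.stream_data()                    # normal bootstrap streaming
    try:
        node.replay_batchlog_blocking()   # wait for backlogged MV batches
    except Exception as exc:
        # fail the bootstrap rather than come up missing view data
        raise RuntimeError("bootstrap failed during batchlog replay") from exc
    node.mark_views_built()               # only now record the MVs as built
    node.announce_available()             # finally flip the node to "UN"

node = FakeNode()
bootstrap(node)
print(node.calls)   # ['stream', 'replay', 'built', 'announce']
```

On a normal (non-bootstrap) start the `replay_batchlog_blocking` step would be skipped, per the comment above.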

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before moving the new bootstrapping node to the "UN" state. We should wait for 
> batchlog replay before making the node available.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-06-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067500#comment-16067500
 ] 

Paulo Motta commented on CASSANDRA-13614:
-

Sorry for the delay, the latest changes look good! Just one minor nitpick: even 
though we have {{nodetool sethintedhandoffthrottlekb}}, the original convention 
was not to include units in the nodetool command names 
({{get/setcompactionthroughput}}, {{get/setstreamthroughput}}, 
{{getmaxhintwindow}}, etc.), so for consistency (except for 
{{sethintedhandoffthrottlekb}}, which we can probably change later), I think 
it's better to keep the command name {{get/setbatchlogreplaythrottle}}, WDYT? 
Sorry for not pointing this out before.

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As noted in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which defaults to (only) 1024 KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically via 
> JMX and possibly nodetool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13645) Optimize the number of replicas required in Quorum read/write

2017-06-28 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067501#comment-16067501
 ] 

Brandon Williams commented on CASSANDRA-13645:
--

I think CL.TWO and CL.THREE cover your examples; however, it may be time to 
think about a parameterized CL.INTEGER, because obviously we don't want to 
keep enumerating levels, and times eventually change.

> Optimize the number of replicas required in Quorum read/write
> -
>
> Key: CASSANDRA-13645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13645
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 4.x
>
>
> Currently, for C* read/write requests at quorum consistency level, the number 
> of replicas required for a quorum write is W = N/2 + 1, and the number 
> required for a quorum read is R = N/2 + 1 as well. 
> This works fine in the odd-replica case, where R + W = N + 1, but with an 
> even number of replicas (RF = 4, 6, 8), R + W = N + 2, meaning reads and 
> writes overlap on two replicas, which is not necessary. The extra overlap 
> does not add consistency, but it hurts P99 read latency a lot (2x in our 
> production cluster).
> Many other databases, like Amazon Aurora, use W = N/2 + 1 and R = N/2 for 
> quorum requests, which still provides strong consistency while talking to one 
> less replica on the read path. "We use a quorum model with 6 votes (V = 6), a 
> write quorum of 4/6 (Vw = 4), and a read quorum of 3/6 (Vr = 3)."
> I propose we make the same optimization and change the read quorum to N/2 
> replicas, which should reduce quorum read latency in general.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13645) Optimize the number of replicas required in Quorum read/write

2017-06-28 Thread Dikang Gu (JIRA)
Dikang Gu created CASSANDRA-13645:
-

 Summary: Optimize the number of replicas required in Quorum 
read/write
 Key: CASSANDRA-13645
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13645
 Project: Cassandra
  Issue Type: Improvement
  Components: Coordination
Reporter: Dikang Gu
Assignee: Dikang Gu
 Fix For: 4.x


Currently, for C* read/write requests at quorum consistency level, the number of 
replicas required for a quorum write is W = N/2 + 1, and the number required 
for a quorum read is R = N/2 + 1 as well. 

This works fine in the odd-replica case, where R + W = N + 1, but with an even 
number of replicas (RF = 4, 6, 8), R + W = N + 2, meaning reads and writes 
overlap on two replicas, which is not necessary. The extra overlap does not add 
consistency, but it hurts P99 read latency a lot (2x in our production cluster).

Many other databases, like Amazon Aurora, use W = N/2 + 1 and R = N/2 for 
quorum requests, which still provides strong consistency while talking to one 
less replica on the read path. "We use a quorum model with 6 votes (V = 6), 
a write quorum of 4/6 (Vw = 4), and a read quorum of 3/6 (Vr = 3)."

I propose we make the same optimization and change the read quorum to N/2 
replicas, which should reduce quorum read latency in general.
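The quorum arithmetic above can be sketched directly. The helper names `write_quorum`/`read_quorum` are illustrative, and the `optimized` flag models the proposed even-RF read quorum of N/2; the assertion checks the strong-consistency condition R + W > N in both variants.

```python
def write_quorum(n: int) -> int:
    # W = N/2 + 1 (integer division), unchanged by the proposal
    return n // 2 + 1

def read_quorum(n: int, optimized: bool = False) -> int:
    # Proposal: for even N, read from only N/2 replicas
    if optimized and n % 2 == 0:
        return n // 2
    return n // 2 + 1

for n in (3, 4, 6):
    w, r_old, r_new = write_quorum(n), read_quorum(n), read_quorum(n, True)
    # Strong consistency needs R + W > N (at least one overlapping replica)
    assert r_old + w > n and r_new + w > n
    print(f"N={n}: W={w}, R today={r_old}, R proposed={r_new}")
```

For RF = 6 this reproduces Aurora's Vw = 4, Vr = 3; for odd RF nothing changes, since N/2 + 1 is already the minimal read quorum there.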



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 8:20 PM:


Hi back Marcus,

I took your comments into account. Regarding the first one, I wanted to do 
that at first, but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWCS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too inclined to touch 
CompactionTask.

The patch also makes worthDroppingTombstones [ignore 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the specified tombstoneThreshold (we can turn on 
uncheckedTombstoneCompaction for that one).
To sum up, moving things closer to TWCS was not possible (for me) without 
impacting more external code. 

Regarding the second question, I put the code validating the option in 
[TimeWindowCompactionStrategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
 in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the option is used outside TWCS.







P.S.: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.

It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sum up moving things closer to TWCS was not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the option in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
 in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the option is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).


[jira] [Commented] (CASSANDRA-12735) org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out

2017-06-28 Thread Bing Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067097#comment-16067097
 ] 

Bing Wu commented on CASSANDRA-12735:
-

I ran into a similar situation before. What I found was:
1. The cache refresh rate may be set too high. Cache refresh is done as 
SUPERUSER and requires QUORUM CL.
2. Frequent localized GC spikes may delay responses to the QUORUM requests 
used for refreshing.

My suggestions:
1. Check whether the error happened around the same time as a GC spike.
2. If so, focus on why GC is happening - Young Generation thrashing, promotion 
to Tenured, or whatever. Fixing that problem will carry you a long way.
3. If you can afford it, stretch the refresh interval out as long as possible.

Hope this helps,

Bing
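The validity/update-interval pair in the AuthCache log lines below can be modeled as a cache that serves entries as-is within the update interval, and keeps serving a stale entry up to the validity period when a refresh (e.g. a QUORUM read delayed by a GC pause) fails. This is a toy model of that idea, not Cassandra's AuthCache code; all names are illustrative.

```python
class ToyAuthCache:
    """Toy validity/update-interval cache. Within update_interval entries
    are served directly; after it, a refresh is attempted, and on failure
    the stale entry is still served until validity expires."""

    def __init__(self, loader, validity_s, update_interval_s):
        self.loader = loader              # e.g. a QUORUM permissions read
        self.validity = validity_s
        self.update_interval = update_interval_s
        self._data = {}                   # key -> (value, fetched_at)

    def get(self, key, now):
        hit = self._data.get(key)
        if hit and now - hit[1] < self.update_interval:
            return hit[0]                          # fresh enough, no I/O
        try:
            value = self.loader(key)               # may be slow or time out
            self._data[key] = (value, now)
            return value
        except Exception:
            if hit and now - hit[1] < self.validity:
                return hit[0]                      # serve stale, don't fail
            raise                                  # truly expired: propagate

lookups = []
def loader(role):
    lookups.append(role)
    if len(lookups) > 1:
        raise RuntimeError("QUORUM read timed out")   # simulated GC pause
    return "permissions-of-" + role

cache = ToyAuthCache(loader, validity_s=10.0, update_interval_s=2.0)
print(cache.get("alice", now=0.0))   # initial load
print(cache.get("alice", now=1.0))   # within update interval: no reload
print(cache.get("alice", now=5.0))   # refresh fails -> stale entry served
```

This is why stretching the refresh (update) interval, as suggested above, reduces the number of QUORUM reads exposed to GC spikes.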

> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out
> -
>
> Key: CASSANDRA-12735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12735
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Core, Materialized Views
> Environment: Python 2.7.11, Datastax Cassandra 3.7.0  
>Reporter: Rajesh Radhakrishnan
> Fix For: 3.7
>
>
> We have a two-node cluster running Cassandra 3.7.0, with clients running 
> Python 2.7.11 injecting a lot of data from maybe 100 or so jobs. 
> --
> Cache setting can be seen from system.log:
> INFO  [main] 2016-09-30 15:12:50,002 AuthCache.java:172 - (Re)initializing 
> CredentialsCache (validity period/update interval/max entries) 
> (2000/2000/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:09,561 AuthCache.java:172 - 
> (Re)initializing PermissionsCache (validity period/update interval/max 
> entries) (1/1/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:24,319 AuthCache.java:172 - 
> (Re)initializing RolesCache (validity period/update interval/max entries) 
> (5000/5000/1000)
> ===
> But I am getting the following exception :
> ERROR [SharedPool-Worker-90] 2016-09-30 15:17:20,883 ErrorMessage.java:338 - 
> Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
> received only 0 responses.
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3937) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) 
> ~[guava-18.0.jar:na]
>   at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>  ~[guava-18.0.jar:na]
>   at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:375) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:308)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:285)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:272) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:256)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:211)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.checkAccess(BatchStatement.java:137)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:502)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:495)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]

[jira] [Updated] (CASSANDRA-13641) Properly evict pstmts from prepared statements cache

2017-06-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-13641:
-
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   3.11.1
   Status: Resolved  (was: Ready to Commit)

Thanks for the quick review!

Committed as 
[9562b9b69e08b84ec1e8e431a846548fa8a83b44|https://github.com/apache/cassandra/commit/9562b9b69e08b84ec1e8e431a846548fa8a83b44]
 to [cassandra-3.11|https://github.com/apache/cassandra/tree/cassandra-3.11]


> Properly evict pstmts from prepared statements cache
> 
>
> Key: CASSANDRA-13641
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13641
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.11.1
>
>
> Prepared statements that are evicted from the prepared statements cache are 
> not removed from the underlying table {{system.prepared_statements}}. This 
> can lead to issues during startup.
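The shape of the fix can be illustrated with a bounded cache that also deletes the persisted row when it evicts an entry (a plain dict stands in for {{system.prepared_statements}}). This is a minimal sketch of the idea, not Cassandra's QueryProcessor code; names are illustrative.

```python
from collections import OrderedDict

class PersistedPstmtCache:
    """Bounded LRU cache of prepared statements that keeps the backing
    store in sync: eviction from memory also deletes the persisted row,
    the step that was missing before this fix."""

    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing            # stands in for the persisted table
        self.cache = OrderedDict()        # insertion order = LRU order here

    def prepare(self, stmt_id, stmt):
        self.cache[stmt_id] = stmt
        self.cache.move_to_end(stmt_id)   # mark as most recently used
        self.backing[stmt_id] = stmt      # persist for restart recovery
        if len(self.cache) > self.capacity:
            evicted_id, _ = self.cache.popitem(last=False)
            self.backing.pop(evicted_id, None)   # the previously missing step

backing = {}
cache = PersistedPstmtCache(2, backing)
for sid, stmt in [("s1", "SELECT 1"), ("s2", "SELECT 2"), ("s3", "SELECT 3")]:
    cache.prepare(sid, stmt)
print(sorted(cache.cache))    # ['s2', 's3']
print(sorted(backing))        # ['s2', 's3'] -- 's1' was removed on eviction
```

Without the `backing.pop` on eviction, the persisted table grows without bound and can replay stale statements at startup, which is the issue this ticket fixes.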



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/3] cassandra git commit: Properly evict pstmts from prepared statements cache

2017-06-28 Thread snazy
Properly evict pstmts from prepared statements cache

patch by Robert Stupp; reviewed by Benjamin Lerer for CASSANDRA-13641


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9562b9b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9562b9b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9562b9b6

Branch: refs/heads/trunk
Commit: 9562b9b69e08b84ec1e8e431a846548fa8a83b44
Parents: bb7e522
Author: Robert Stupp 
Authored: Wed Jun 28 21:15:03 2017 +0200
Committer: Robert Stupp 
Committed: Wed Jun 28 21:15:03 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/QueryProcessor.java   |   9 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   6 ++
 test/conf/cassandra.yaml|   1 +
 .../cassandra/cql3/PstmtPersistenceTest.java| 108 ++-
 5 files changed, 100 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4297a15..88aa1ef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.1
+ * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 Merged from 3.0:
  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
  * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index f5ce7e4..0e0ba3c 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -88,6 +88,7 @@ public class QueryProcessor implements QueryHandler
  .listener((md5Digest, prepared) -> {
  metrics.preparedStatementsEvicted.inc();
  lastMinuteEvictionsCount.incrementAndGet();
+ 
SystemKeyspace.removePreparedStatement(md5Digest);
  }).build();
 
 thriftPreparedStatements = new 
ConcurrentLinkedHashMap.Builder()
@@ -162,11 +163,17 @@ public class QueryProcessor implements QueryHandler
 logger.info("Preloaded {} prepared statements", count);
 }
 
+/**
+ * Clears the prepared statement cache.
+ * @param memoryOnly {@code true} if only the in memory caches must be 
cleared, {@code false} otherwise.
+ */
 @VisibleForTesting
-public static void clearPrepraredStatements()
+public static void clearPreparedStatements(boolean memoryOnly)
 {
 preparedStatements.clear();
 thriftPreparedStatements.clear();
+if (!memoryOnly)
+SystemKeyspace.resetPreparedStatements();
 }
 
 private static QueryState internalQueryState()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index 82c9752..6c45329 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -1488,6 +1488,12 @@ public final class SystemKeyspace
 key.byteBuffer());
 }
 
+public static void resetPreparedStatements()
+{
+ColumnFamilyStore availableRanges = 
Keyspace.open(SchemaConstants.SYSTEM_KEYSPACE_NAME).getColumnFamilyStore(PREPARED_STATEMENTS);
+availableRanges.truncateBlocking();
+}
+
    public static List<Pair<String, String>> loadPreparedStatements()
 {
 String query = String.format("SELECT logged_keyspace, query_string 
FROM %s.%s", SchemaConstants.SYSTEM_KEYSPACE_NAME, PREPARED_STATEMENTS);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/test/conf/cassandra.yaml
--
diff --git a/test/conf/cassandra.yaml b/test/conf/cassandra.yaml
index cf02634..96ca9a0 100644
--- a/test/conf/cassandra.yaml
+++ b/test/conf/cassandra.yaml
@@ -44,3 +44,4 @@ row_cache_class_name: org.apache.cassandra.cache.OHCProvider
 row_cache_size_in_mb: 16
 enable_user_defined_functions: true
 enable_scripted_user_defined_functions: true
+prepared_statements_cache_size_mb: 1


[1/3] cassandra git commit: Properly evict pstmts from prepared statements cache

2017-06-28 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 bb7e522b4 -> 9562b9b69
  refs/heads/trunk 26e025804 -> 9c6f87c35


Properly evict pstmts from prepared statements cache

patch by Robert Stupp; reviewed by Benjamin Lerer for CASSANDRA-13641


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9562b9b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9562b9b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9562b9b6

Branch: refs/heads/cassandra-3.11
Commit: 9562b9b69e08b84ec1e8e431a846548fa8a83b44
Parents: bb7e522
Author: Robert Stupp 
Authored: Wed Jun 28 21:15:03 2017 +0200
Committer: Robert Stupp 
Committed: Wed Jun 28 21:15:03 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/QueryProcessor.java   |   9 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   6 ++
 test/conf/cassandra.yaml|   1 +
 .../cassandra/cql3/PstmtPersistenceTest.java| 108 ++-
 5 files changed, 100 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4297a15..88aa1ef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.1
+ * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 Merged from 3.0:
  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
  * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index f5ce7e4..0e0ba3c 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -88,6 +88,7 @@ public class QueryProcessor implements QueryHandler
  .listener((md5Digest, prepared) -> {
  metrics.preparedStatementsEvicted.inc();
  lastMinuteEvictionsCount.incrementAndGet();
+ 
SystemKeyspace.removePreparedStatement(md5Digest);
  }).build();
 
 thriftPreparedStatements = new 
ConcurrentLinkedHashMap.Builder()
@@ -162,11 +163,17 @@ public class QueryProcessor implements QueryHandler
 logger.info("Preloaded {} prepared statements", count);
 }
 
+/**
+ * Clears the prepared statement cache.
+ * @param memoryOnly {@code true} if only the in memory caches must be 
cleared, {@code false} otherwise.
+ */
 @VisibleForTesting
-public static void clearPrepraredStatements()
+public static void clearPreparedStatements(boolean memoryOnly)
 {
 preparedStatements.clear();
 thriftPreparedStatements.clear();
+if (!memoryOnly)
+SystemKeyspace.resetPreparedStatements();
 }
 
 private static QueryState internalQueryState()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index 82c9752..6c45329 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -1488,6 +1488,12 @@ public final class SystemKeyspace
 key.byteBuffer());
 }
 
+public static void resetPreparedStatements()
+{
+ColumnFamilyStore availableRanges = 
Keyspace.open(SchemaConstants.SYSTEM_KEYSPACE_NAME).getColumnFamilyStore(PREPARED_STATEMENTS);
+availableRanges.truncateBlocking();
+}
+
    public static List<Pair<String, String>> loadPreparedStatements()
 {
 String query = String.format("SELECT logged_keyspace, query_string 
FROM %s.%s", SchemaConstants.SYSTEM_KEYSPACE_NAME, PREPARED_STATEMENTS);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9562b9b6/test/conf/cassandra.yaml
--
diff --git a/test/conf/cassandra.yaml b/test/conf/cassandra.yaml
index cf02634..96ca9a0 100644
--- a/test/conf/cassandra.yaml
+++ b/test/conf/cassandra.yaml
@@ -44,3 +44,4 @@ row_cache_class_name: org.apache.cassandra.cache.OHCProvider
 row_cache_size_in_mb: 

[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-06-28 Thread snazy
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9c6f87c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9c6f87c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9c6f87c3

Branch: refs/heads/trunk
Commit: 9c6f87c35f364ec6a88775cb3d0cf143e36635e7
Parents: 26e0258 9562b9b
Author: Robert Stupp 
Authored: Wed Jun 28 21:17:06 2017 +0200
Committer: Robert Stupp 
Committed: Wed Jun 28 21:17:06 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/QueryProcessor.java   |  20 +++-
 .../org/apache/cassandra/db/SystemKeyspace.java |   6 +
 test/conf/cassandra.yaml|   1 +
 .../cassandra/cql3/PstmtPersistenceTest.java| 110 ++-
 5 files changed, 107 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c6f87c3/CHANGES.txt
--
diff --cc CHANGES.txt
index 04640ab,88aa1ef..6ffd11a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,90 -1,8 +1,91 @@@
 +4.0
 + * Improve secondary index (re)build failure and concurrency handling 
(CASSANDRA-10130)
 + * Improve calculation of available disk space for compaction 
(CASSANDRA-13068)
 + * Change the accessibility of RowCacheSerializer for third party row cache 
plugins (CASSANDRA-13579)
 + * Allow sub-range repairs for a preview of repaired data (CASSANDRA-13570)
 + * NPE in IR cleanup when columnfamily has no sstables (CASSANDRA-13585)
 + * Fix Randomness of stress values (CASSANDRA-12744)
 + * Allow selecting Map values and Set elements (CASSANDRA-7396)
 + * Fast and garbage-free Streaming Histogram (CASSANDRA-13444)
 + * Update repairTime for keyspaces on completion (CASSANDRA-13539)
 + * Add configurable upper bound for validation executor threads 
(CASSANDRA-13521)
 + * Bring back maxHintTTL propery (CASSANDRA-12982)
 + * Add testing guidelines (CASSANDRA-13497)
 + * Add more repair metrics (CASSANDRA-13531)
 + * RangeStreamer should be smarter when picking endpoints for streaming 
(CASSANDRA-4650)
 + * Avoid rewrapping an exception thrown for cache load functions 
(CASSANDRA-13367)
 + * Log time elapsed for each incremental repair phase (CASSANDRA-13498)
 + * Add multiple table operation support to cassandra-stress (CASSANDRA-8780)
 + * Fix incorrect cqlsh results when selecting same columns multiple times 
(CASSANDRA-13262)
 + * Fix WriteResponseHandlerTest is sensitive to test execution order 
(CASSANDRA-13421)
 + * Improve incremental repair logging (CASSANDRA-13468)
 + * Start compaction when incremental repair finishes (CASSANDRA-13454)
 + * Add repair streaming preview (CASSANDRA-13257)
 + * Cleanup isIncremental/repairedAt usage (CASSANDRA-13430)
 + * Change protocol to allow sending key space independent of query string 
(CASSANDRA-10145)
 + * Make gc_log and gc_warn settable at runtime (CASSANDRA-12661)
 + * Take number of files in L0 in account when estimating remaining compaction 
tasks (CASSANDRA-13354)
 + * Skip building views during base table streams on range movements 
(CASSANDRA-13065)
 + * Improve error messages for +/- operations on maps and tuples 
(CASSANDRA-13197)
 + * Remove deprecated repair JMX APIs (CASSANDRA-11530)
 + * Fix version check to enable streaming keep-alive (CASSANDRA-12929)
 + * Make it possible to monitor an ideal consistency level separate from 
actual consistency level (CASSANDRA-13289)
 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324)
 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360)
 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359)
 + * Upgrade snappy-java to 1.1.2.6 (CASSANDRA-13336)
 + * Incremental repair not streaming correct sstables (CASSANDRA-13328)
 + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300)
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers 
(CASSANDRA-13271)
 + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283)
 + * Avoid synchronized on prepareForRepair in ActiveRepairService 
(CASSANDRA-9292)
 + * Adds the ability to use uncompressed chunks in compressed files 
(CASSANDRA-10520)
 + * Don't flush sstables when streaming for incremental repair 
(CASSANDRA-13226)
 + * Remove unused method (CASSANDRA-13227)
 + * Fix minor bugs related to #9143 (CASSANDRA-13217)
 + * Output warning if user increases RF (CASSANDRA-13079)
 + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
 + * Add support for + and - operations on dates (CASSANDRA-11936)
 + * Fix 

[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-28 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067052#comment-16067052
 ] 

Stefan Podkowinski commented on CASSANDRA-13581:


Btw, I just noticed that you guys really did a great job on the capi-rowcache 
GitHub page! We should definitely link the project from the already existing 
plugin page, as Jeff already suggested. You should try to set up Sphinx locally 
as described in {{doc/README.md}}. This will let you build the documentation 
and check that all links work and that the final plugin page looks the way it 
should on the official docs.

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments, and if I have not done things correctly to 
> make changes to Cassandra's website, I can rectify them.






[jira] [Commented] (CASSANDRA-12735) org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out

2017-06-28 Thread Anand Maharana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067042#comment-16067042
 ] 

Anand Maharana commented on CASSANDRA-12735:


I got the same error on a 10-node, 2-DC (7-3) cluster. We saw this after 
using the cluster for two weeks.

> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out
> -
>
> Key: CASSANDRA-12735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12735
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Core, Materialized Views
> Environment: Python 2.7.11, Datastax Cassandra 3.7.0  
>Reporter: Rajesh Radhakrishnan
> Fix For: 3.7
>
>
> We have a two-node cluster running Cassandra 3.7.0, with clients running 
> Python 2.7.11 injecting a lot of data from maybe 100 or so jobs. 
> --
> Cache setting can be seen from system.log:
> INFO  [main] 2016-09-30 15:12:50,002 AuthCache.java:172 - (Re)initializing 
> CredentialsCache (validity period/update interval/max entries) 
> (2000/2000/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:09,561 AuthCache.java:172 - 
> (Re)initializing PermissionsCache (validity period/update interval/max 
> entries) (1/1/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:24,319 AuthCache.java:172 - 
> (Re)initializing RolesCache (validity period/update interval/max entries) 
> (5000/5000/1000)
> ===
> But I am getting the following exception :
> ERROR [SharedPool-Worker-90] 2016-09-30 15:17:20,883 ErrorMessage.java:338 - 
> Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
> received only 0 responses.
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3937) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) 
> ~[guava-18.0.jar:na]
>   at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>  ~[guava-18.0.jar:na]
>   at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:375) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:308)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:285)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:272) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:256)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:211)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.checkAccess(BatchStatement.java:137)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:502)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:495)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> 

[jira] [Commented] (CASSANDRA-13601) Changes requested to the cassandra's debian + rpm installers packages

2017-06-28 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066923#comment-16066923
 ] 

Michael Shuler commented on CASSANDRA-13601:


Looks incomplete for testing and {{ cassandra-env.sh}} sets a duplicate 
{{-Xss}} value which comes from {{jvm.options}} as far as I can tell. I'm not 
sure how to do conditionals for {{ant test}} (and friends), which I thought was 
being manually edited in the jenkins job.

A quick grep in the source tree:
{noformat}
(trunk)mshuler@hana:~/git/cassandra$ git grep -C3 '\-Xss'
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
conf/jvm.options--XX:+HeapDumpOnOutOfMemoryError
conf/jvm.options-
conf/jvm.options-# Per-thread stack size.
conf/jvm.options:-Xss256k
conf/jvm.options-
conf/jvm.options-# Larger interned string table, for gossip's benefit 
(CASSANDRA-6410)
conf/jvm.options--XX:StringTableSize=103
--
doc/source/development/ide.rst-
doc/source/development/ide.rst-::
doc/source/development/ide.rst-
doc/source/development/ide.rst:   -Xms1024M -Xmx1024M -Xmn220M -Xss256k -ea 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark 
-javaagent:./lib/jamm-0.3.0.jar -Djava.net.preferIPv4Stack=true
doc/source/development/ide.rst-
doc/source/development/ide.rst-.. image:: images/eclipse_debug6.png
doc/source/development/ide.rst-
{noformat}
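The duplication described above (the same {{-Xss}} flag coming from both {{cassandra-env.sh}} and {{jvm.options}}) can be detected mechanically. A small stand-alone sketch; the file names and contents are illustrative, not the real config files:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch: report when a JVM flag (here -Xss) is defined in more than one
// config file. For a repeated JVM flag, the last occurrence on the final
// command line wins, which makes such duplication easy to miss.
public class DuplicateFlagCheck
{
    static List<Path> filesDefining(String flag, List<Path> configs) throws IOException
    {
        List<Path> hits = new ArrayList<>();
        for (Path p : configs)
            for (String line : Files.readAllLines(p))
                if (line.contains(flag))
                {
                    hits.add(p);
                    break; // one hit per file is enough
                }
        return hits;
    }

    public static void main(String[] args) throws IOException
    {
        Path dir = Files.createTempDirectory("conf");
        Path env = Files.write(dir.resolve("cassandra-env.sh"),
                               List.of("JVM_OPTS=\"$JVM_OPTS -Xss512k\""));
        Path opts = Files.write(dir.resolve("jvm.options"),
                                List.of("-Xss256k"));
        List<Path> dups = filesDefining("-Xss", List.of(env, opts));
        if (dups.size() > 1)
            System.out.println("-Xss defined in " + dups.size() + " files");
    }
}
```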

> Changes requested to the cassandra's debian + rpm installers packages
> -
>
> Key: CASSANDRA-13601
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13601
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
> Environment: ~$ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: ppc64le_unaligned_memory_access.patch
>
>
> Hi All,
> Thanks [~mshuler] for helping in installing cassandra using arch independent 
> installers  for debian + rpm packages from here : 
> http://cassandra.apache.org/download/
> For my architecture, ppc64le, the installation process from the debian + rpm 
> packages wasn't straightforward and needed the configuration-level changes below.
> For Ubuntu- Cassandra 3.10 release - below changes were needed
> 1) echo "deb [arch=amd64] http://www.apache.org/dist/cassandra/debian 310x 
> main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list 
> 2) sed -i -e s/Xss256/Xss512/g /etc/cassandra/jvm.options
> 3) Removing jna-4.0.0.jar and replacing it with latest jna-4.4.0.jar in 
> (/usr/share/cassandra/lib)- Downloaded from here . 
> 4) Restart cassandra service
> For RHEL - Cassandra 3.0.13 release - below changes were needed
> 1) sed -i -e s/Xss256/Xss512/g /etc/cassandra/default.conf/cassandra-env.sh
> 3) Removing jna-4.0.0.jar and replacing it with latest jna-4.4.0.jar in 
> (/usr/share/cassandra/lib)- Downloaded from here . 
> 4) Restart cassandra service
> Could you please help introduce the above changes so that Cassandra can be 
> installed from the debian + rpm packages and become truly architecture 
> independent.
> Regards,
> Amit






[jira] [Comment Edited] (CASSANDRA-13601) Changes requested to the cassandra's debian + rpm installers packages

2017-06-28 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066923#comment-16066923
 ] 

Michael Shuler edited comment on CASSANDRA-13601 at 6/28/17 5:39 PM:
-

Looks incomplete for testing and {{cassandra-env.sh}} sets a duplicate {{-Xss}} 
value which comes from {{jvm.options}} as far as I can tell. I'm not sure how 
to do conditionals for {{ant test}} (and friends), which I thought was being 
manually edited in the jenkins job.

A quick grep in the source tree:
{noformat}
(trunk)mshuler@hana:~/git/cassandra$ git grep -C3 '\-Xss'
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
conf/jvm.options--XX:+HeapDumpOnOutOfMemoryError
conf/jvm.options-
conf/jvm.options-# Per-thread stack size.
conf/jvm.options:-Xss256k
conf/jvm.options-
conf/jvm.options-# Larger interned string table, for gossip's benefit 
(CASSANDRA-6410)
conf/jvm.options--XX:StringTableSize=103
--
doc/source/development/ide.rst-
doc/source/development/ide.rst-::
doc/source/development/ide.rst-
doc/source/development/ide.rst:   -Xms1024M -Xmx1024M -Xmn220M -Xss256k -ea 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark 
-javaagent:./lib/jamm-0.3.0.jar -Djava.net.preferIPv4Stack=true
doc/source/development/ide.rst-
doc/source/development/ide.rst-.. image:: images/eclipse_debug6.png
doc/source/development/ide.rst-
{noformat}


was (Author: mshuler):
Looks incomplete for testing and {{ cassandra-env.sh}} sets a duplicate 
{{-Xss}} value which comes from {{jvm.options}} as far as I can tell. I'm not 
sure how to do conditionals for {{ant test}} (and friends), which I thought was 
being manually edited in the jenkins job.

A quick grep in the source tree:
{noformat}
(trunk)mshuler@hana:~/git/cassandra$ git grep -C3 '\-Xss'
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
build.xml-
build.xml-
build.xml-
build.xml:
build.xml-
build.xml-
build.xml-
--
conf/jvm.options--XX:+HeapDumpOnOutOfMemoryError
conf/jvm.options-
conf/jvm.options-# Per-thread stack size.
conf/jvm.options:-Xss256k
conf/jvm.options-
conf/jvm.options-# Larger interned string table, for gossip's benefit 
(CASSANDRA-6410)
conf/jvm.options--XX:StringTableSize=103
--
doc/source/development/ide.rst-
doc/source/development/ide.rst-::
doc/source/development/ide.rst-
doc/source/development/ide.rst:   -Xms1024M -Xmx1024M -Xmn220M -Xss256k -ea 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark 
-javaagent:./lib/jamm-0.3.0.jar -Djava.net.preferIPv4Stack=true
doc/source/development/ide.rst-
doc/source/development/ide.rst-.. image:: images/eclipse_debug6.png
doc/source/development/ide.rst-
{noformat}

> Changes requested to the cassandra's debian + rpm installers packages
> -
>
> Key: CASSANDRA-13601
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13601
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
> Environment: ~$ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: ppc64le_unaligned_memory_access.patch
>
>
> Hi All,
> Thanks [~mshuler] for helping in installing cassandra using arch independent 
> installers  for debian + rpm packages from here : 
> http://cassandra.apache.org/download/
> For my architecture, ppc64le, the installation process from the debian + rpm 
> packages wasn't straightforward and needed the configuration-level changes below.
> For Ubuntu- Cassandra 3.10 release - below changes were needed
> 1) echo "deb [arch=amd64] http://www.apache.org/dist/cassandra/debian 310x 
> main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list 
> 2) sed -i -e s/Xss256/Xss512/g /etc/cassandra/jvm.options
> 3) Removing jna-4.0.0.jar and replacing it with latest jna-4.4.0.jar in 
> (/usr/share/cassandra/lib)- Downloaded from here . 
> 4) Restart cassandra service
> For RHEL - 

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:24 PM:


Hi back Marcus,

I took your comments into account. Regarding the first one, I wanted to do 
that at first, but getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
Modifying things only at the TWCS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too inclined to touch 
CompactionTask.

It also makes worthDroppingTombstones [ignore 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
and respect the tombstoneThreshold specified (we can turn on 
uncheckedTombstoneCompaction for this one).
To sum up, moving things closer to TWCS was not possible (for me) without 
impacting more external code.

Regarding the 2nd question, I put the code validating the option in 
[TimeWindowCompactionStrategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
if the option is used with any compaction strategy other than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.
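For context, a compaction option that only TWCS accepts would be enabled per table roughly like this. The option name {{unsafe_aggressive_sstable_expiration}} mirrors the patch under discussion and should be treated as illustrative until the patch lands; the table name is made up, and the other subproperties are standard TWCS options:

```sql
-- Hypothetical sketch: the validation in TimeWindowCompactionStrategyOptions
-- would reject 'unsafe_aggressive_sstable_expiration' for any other
-- compaction class.
ALTER TABLE metrics.timeseries
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1',
    'unsafe_aggressive_sstable_expiration': 'true'
  };
```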


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.

It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS was not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
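The workaround described above comes down to the per-sstable tombstone-compaction check: compact when the estimated droppable-tombstone ratio exceeds tombstone_threshold, unless overlapping sstables block it (the estimate that unchecked_tombstone_compaction skips). A minimal sketch with illustrative names, not Cassandra's actual API:

```python
def worth_dropping_tombstones(droppable_ratio, threshold=0.2,
                              has_overlaps=True, unchecked=False):
    """Sketch of the single-sstable tombstone-compaction decision.

    `threshold` mirrors the tombstone_threshold table option (0.2 is the
    stock default); all names here are illustrative, not Cassandra's API.
    """
    if droppable_ratio <= threshold:
        # Not enough droppable tombstones to justify rewriting the sstable.
        return False
    # unchecked_tombstone_compaction skips the overlap estimate entirely,
    # which is what lets it purge the "blocker" sstables mentioned above.
    return unchecked or not has_overlaps
```

Lowering the threshold makes more sstables pass the first check, at the CPU cost the description complains about.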

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894 ]

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:24 PM:


Hi back Marcus,

I took your comments into account. Regarding the first one, I wanted to do 
that at first, but getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
So modifying things only at the TWCS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too inclined to touch 
CompactionTask.

It also makes worthDroppingTombstones [ignore 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 while still respecting the specified tombstoneThreshold (we can turn on 
uncheckedTombstoneCompaction for this one).
To sum up, moving things closer to TWCS was not possible (for me) without 
impacting more external code.
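The behaviour under discussion — treating an sstable as fully expired while optionally ignoring overlaps — can be sketched roughly as follows. This is a simplified illustration with made-up dict fields, not the real SSTableReader or getFullyExpiredSSTables signatures:

```python
def is_fully_expired(candidate, overlapping, gc_before, ignore_overlaps=False):
    """Simplified sketch of the fully-expired sstable check.

    `candidate` and `overlapping` are plain dicts standing in for sstables;
    the field names are invented for this illustration.
    """
    # Every cell in the candidate must already be past the GC grace cutoff.
    if candidate["max_local_deletion_time"] >= gc_before:
        return False
    if ignore_overlaps:
        # The proposed TWCS option: drop anyway, accepting that shadowed
        # data in overlapping sstables may briefly resurrect.
        return True
    # Stock behaviour: any overlapping sstable holding data older than the
    # candidate's newest data blocks the drop.
    return all(o["min_timestamp"] > candidate["max_timestamp"]
               for o in overlapping)
```

With `ignore_overlaps=True`, an old read-repaired sstable no longer keeps a fully expired time window on disk.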

Regarding the 2nd question, I put the code validating the option in 
[TimeWindowCompactionStrategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the option is used with a compaction strategy other than TWCS.
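For illustration, enabling such a TWCS-only option from CQL could look like the following. The option name unsafe_aggressive_sstable_expiration is an assumption about the patch, not something confirmed in this thread:

```sql
-- Hypothetical: a TWCS table opting in to overlap-ignoring expiration.
ALTER TABLE metrics.timeseries
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1',
    'unsafe_aggressive_sstable_expiration': 'true'
  };
```

Passing the same option map with a non-TWCS class would then hit the validation path described above and be rejected at schema-change time.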

P.S.: I will have more time in the upcoming days, so I will be more responsive.



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:24 PM:


Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.

It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS was not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the option in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the option is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.

It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS was not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the option in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over 

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:23 PM:


Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.

It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS as not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]
So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS as not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:23 PM:


Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]
So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS as not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding your the 1rst one I wanted 
to do that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]
So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS as not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over 

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:22 PM:


Hi back Marcus,

So I took into account your comments and regarding your the 1rst one I wanted 
to do that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]
So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sump up moving things closer to TWCS as not possible (to me) without 
impacting more external code. 

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding your the 1rst one I wanted 
to do that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]

So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up 

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:21 PM:


Hi back Marcus,

So I took into account your comments and regarding your the 1rst one I wanted 
to do that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]

So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)

Regarding the 2nd question I put the code validating the options in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the options is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding your the 1rst one I wanted 
to do that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]

So only modifying things at the TWS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.
It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)

Regarding the 2nd question I put the code validating the options in 
TimeWindowCompactionStategyOptions 
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157
in order to trigger an exception if the options is used elsewhere than TWCS.
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161






P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: 

[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 6/28/17 5:20 PM:


Hi back Marcus,

So I took your comments into account. Regarding the first one, I wanted to do 
that at first, but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165],
so modifying things only at the TWCS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too inclined to touch 
CompactionTask.
It also makes worthDroppingTombstones [ignore 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThreshold specified (we can turn on 
uncheckedTombstoneCompaction for this one).

Regarding the 2nd question, I put the code validating the options in 
TimeWindowCompactionStrategyOptions 
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157
in order to trigger an exception if the option is used elsewhere than TWCS:
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161

P.S.: I will have more time in the upcoming days, so I will be more responsive.


was (Author: rgerard):
Hi back Marcus,

So I took your comments into account. Regarding the first one, I wanted to do 
that at first, but 
getFullyExpiredSSTables is also used in CompactionTask:
https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165
So modifying things only at the TWCS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too inclined to touch 
CompactionTask.
It also makes worthDroppingTombstones ignore overlaps
https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141
 and respect the tombstoneThreshold specified (we can turn on 
uncheckedTombstoneCompaction for this one).

Regarding the 2nd question, I put the code validating the options in 
TimeWindowCompactionStrategyOptions 
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157
in order to trigger an exception if the option is used elsewhere than TWCS:
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161

P.S.: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], 

[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-06-28 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

Hi back Marcus,

So I took your comments into account. Regarding the first one, I wanted to do 
that at first, but 
getFullyExpiredSSTables is also used in CompactionTask:
https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165
So modifying things only at the TWCS level would have resulted in compacting the 
sstables that we wanted to drop, and I was not too inclined to touch 
CompactionTask.
It also makes worthDroppingTombstones ignore overlaps
https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141
 and respect the tombstoneThreshold specified (we can turn on 
uncheckedTombstoneCompaction for this one).

Regarding the 2nd question, I put the code validating the options in 
TimeWindowCompactionStrategyOptions 
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157
in order to trigger an exception if the option is used elsewhere than TWCS:
https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161

P.S.: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13644) Debian Init script override HeapDumpPath from cassandra-env.sh

2017-06-28 Thread FACORAT (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066874#comment-16066874
 ] 

FACORAT commented on CASSANDRA-13644:
-

Patch can be found here: 
https://github.com/linux-wizard/cassandra/compare/trunk...CASSANDRA-13644

> Debian Init script override HeapDumpPath from cassandra-env.sh
> --
>
> Key: CASSANDRA-13644
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13644
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: debian cassandra packages
> cassandra 2.1.x, 2.2.x, 3.x
>Reporter: FACORAT
>Priority: Minor
>
> HeapDumpPath can be defined in /etc/cassandra/cassandra-env.sh. However this 
> is not taken into account, as the Debian init script will override it with 
> its own path, defined as the Cassandra home dir.
> We should prevent the init script from overriding settings from cassandra-env.sh



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13601) Changes requested to the cassandra's debian + rpm installers packages

2017-06-28 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066872#comment-16066872
 ] 

Jeff Jirsa commented on CASSANDRA-13601:


At face value the files in the PR seem reasonable - checking arch with uname 
and adding a flag to JVM_OPTS for that arch seems perfectly fine. [~mshuler] 
still interested in being reviewer?





> Changes requested to the cassandra's debian + rpm installers packages
> -
>
> Key: CASSANDRA-13601
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13601
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
> Environment: ~$ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: ppc64le_unaligned_memory_access.patch
>
>
> Hi All,
> Thanks [~mshuler] for helping in installing cassandra using arch independent 
> installers  for debian + rpm packages from here : 
> http://cassandra.apache.org/download/
> For my architecture - "ppc64le" - the installation process from the debian + rpm 
> packages wasn't straightforward and needed the configuration changes below.
> For Ubuntu- Cassandra 3.10 release - below changes were needed
> 1) echo "deb [arch=amd64] http://www.apache.org/dist/cassandra/debian 310x 
> main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list 
> 2) sed -i -e s/Xss256/Xss512/g /etc/cassandra/jvm.options
> 3) Removing jna-4.0.0.jar and replacing it with latest jna-4.4.0.jar in 
> (/usr/share/cassandra/lib)- Downloaded from here . 
> 4) Restart cassandra service
> For RHEL - Cassandra 3.0.13 release - below changes were needed
> 1) sed -i -e s/Xss256/Xss512/g /etc/cassandra/default.conf/cassandra-env.sh
> 3) Removing jna-4.0.0.jar and replacing it with latest jna-4.4.0.jar in 
> (/usr/share/cassandra/lib)- Downloaded from here . 
> 4) Restart cassandra service
> Could you please help in introducing the above changes so that cassandra can be 
> installed from the debian + rpm packages and indeed become architecture 
> independent.
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-28 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066870#comment-16066870
 ] 

Jeff Jirsa commented on CASSANDRA-13581:


[~amitkumar_ghatwal] - I merged the basic doc you gave me 
[here|https://github.com/apache/cassandra/commit/74e3f152229078] that added the 
plugin section, but we haven't rebuilt the site yet. If you want that [doc 
page|https://github.com/apache/cassandra/blob/74e3f152229078f31591a15761d35d119733aa45/doc/source/plugins/index.rst]
 to link to your other repo (which is a reasonable request), please upload a 
diff for it, and I'll merge that. I think Stefan is suggesting you also add 
some descriptive text beyond just a link to the repo. PR #118 adds some links 
to the plugin page, but the plugin page you provide doesn't say what CAPI does 
or where to find the plugin.


> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments; if I have not made the changes to 
> cassandra's website correctly, I can rectify them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-06-28 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-11500:
-
Component/s: Materialized Views

> Obsolete MV entry may not be properly deleted
> -
>
> Key: CASSANDRA-11500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11500
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>
> When a Materialized View uses a non-PK base table column in its PK, if an 
> update changes that column value, we add the new view entry and remove the 
> old one. When doing that removal, the current code uses the same timestamp 
> as for the liveness info of the new entry, which is the max timestamp of 
> any column participating in the view PK. This is not correct for the 
> deletion, as the old view entry could have other columns with a higher timestamp 
> which won't be deleted, as is easily shown by the failure of the following 
> test:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 4 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1;
> SELECT * FROM mv WHERE k = 1; // This currently returns 2 entries, the old 
> (invalid) and the new one
> {noformat}
> So the correct timestamp to use for the deletion is the biggest timestamp in 
> the old view entry (which we know since we read the pre-existing base row), 
> and that is what CASSANDRA-11475 does (the test above thus doesn't fail on 
> that branch).
> Unfortunately, even then we can still have problems if further updates 
> require us to override the old entry. Consider the following case:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; // This will delete the 
> entry for a=1 with timestamp 10
> UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; // This needs to re-insert 
> an entry for a=1 but shouldn't be deleted by the prior deletion
> UPDATE t USING TIMESTAMP 4 SET a = 2 WHERE k = 1; // ... and we can play this 
> game more than once
> UPDATE t USING TIMESTAMP 5 SET a = 1 WHERE k = 1;
> ...
> {noformat}
> In a way, this is saying that the "shadowable" deletion mechanism is not 
> general enough: we need to be able to re-insert an entry when a prior one had 
> been deleted before, but we can't rely on timestamps being strictly bigger on 
> the re-insert. In that sense, this can be thought of as a problem similar to 
> CASSANDRA-10965, though the solution there of a single flag is not enough 
> since we can have to replace more than once.
> I think the proper solution would be to ship enough information to always be 
> able to decide when a view deletion is shadowed. Which means that both 
> liveness info (for updates) and shadowable deletion would need to ship the 
> timestamp of any base table column that is part of the view PK (so {{a}} in the 
> example above). It's doable (and not that hard really), but it does require 
> a change to the sstable and intra-node protocol, which makes this a bit 
> painful right now.
> But I'll also note that as CASSANDRA-1096 shows, the timestamp is not even 
> enough since on equal timestamp the value can be the deciding factor. So in 
> theory we'd have to ship the value of those columns (in the case of a 
> deletion at least since we have it in the view PK for updates). That said, on 
> that last problem, my preference would be that we start prioritizing 
> CASSANDRA-6123 seriously so we don't have to care about conflicting timestamps 
> anymore, which would make this problem go away.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13644) Debian Init script override HeapDumpPath from cassandra-env.sh

2017-06-28 Thread FACORAT (JIRA)
FACORAT created CASSANDRA-13644:
---

 Summary: Debian Init script override HeapDumpPath from 
cassandra-env.sh
 Key: CASSANDRA-13644
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13644
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: debian cassandra packages
cassandra 2.1.x, 2.2.x, 3.x
Reporter: FACORAT
Priority: Minor


HeapDumpPath can be defined in /etc/cassandra/cassandra-env.sh. However this is 
not taken into account, as the Debian init script will override it with its own 
path, defined as the Cassandra home dir.

We should prevent the init script from overriding settings from cassandra-env.sh



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13592) Null Pointer exception at SELECT JSON statement

2017-06-28 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066797#comment-16066797
 ] 

Benjamin Lerer commented on CASSANDRA-13592:


Thanks for the patch.

The {{pagingOnToJsonQuery()}} test passes even without the change to 
{{Selection::rowToJson}}. After looking into it, it seems the problem is in 
fact in {{TupleType::toJSONString}}: the method moves the buffer position when 
it should not, which is why simple types like {{int}} or {{text}} are not 
affected by the problem.
I suspect we might have similar problems with the {{list}}, {{set}}, {{map}} 
or UDT types, so it would be nice to add some extra tests for those types as 
well.

I would put the tests in {{JsonTest}} instead of in {{PagingQueryTest}}.

{{JSON}} support was introduced in 2.2, so we will also need patches for 2.2, 
3.0 and 3.11. 
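As an illustration (not Cassandra's actual {{TupleType}} code, and with a hypothetical method name), the fix pattern described above - serializing a ByteBuffer's contents without disturbing the caller's position - can be sketched as:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BufferPositionDemo {
    // Reads the buffer's bytes without moving the caller's position:
    // duplicate() shares the content but has an independent position/limit,
    // so get() below advances only the duplicate.
    static String toJsonStringSafe(ByteBuffer bb) {
        ByteBuffer copy = bb.duplicate();
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        return "\"" + new String(bytes, StandardCharsets.UTF_8) + "\"";
    }

    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.wrap("abc".getBytes(StandardCharsets.UTF_8));
        String json = toJsonStringSafe(bb);
        // The original buffer is still fully readable afterwards.
        System.out.println(json + " remaining=" + bb.remaining());
    }
}
```

A method that called `bb.get(...)` directly would leave the buffer exhausted, so a second serialization pass (as paging can trigger) would see zero remaining bytes.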

> Null Pointer exception at SELECT JSON statement
> ---
>
> Key: CASSANDRA-13592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13592
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Debian Linux
>Reporter: Wyss Philipp
>Assignee: ZhaoYang
>  Labels: beginner
> Attachments: system.log
>
>
> A null pointer exception appears when running the command
> {code}
> SELECT JSON * FROM examples.basic;
> ---MORE---
>  message="java.lang.NullPointerException">
> Examples.basic has the following description (DESC examples.basic;):
> CREATE TABLE examples.basic (
> key frozen> PRIMARY KEY,
> wert text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> The error appears after the ---MORE--- line.
> The field "wert" has a JSON formatted string.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-06-28 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066749#comment-16066749
 ] 

ZhaoYang commented on CASSANDRA-11500:
--

{quote}
we need to be able to re-insert an entry when a prior one had been deleted 
before, but we can't rely on timestamps being strictly bigger on the re-insert. 
In that sense, this can be though as a similar problem than CASSANDRA-10965, 
though the solution there of a single flag is not enough since we can have to 
replace more than once.
{quote}

Agreed.

How about shipping an extra "view-update-time" (the `nowInSecond` at which the 
view operation is triggered) per view row? It would be used to decide which 
entry is newer when timestamps tie.

The `nowInSeconds` values could still be the same in certain cases, but those 
are rare, similar to CASSANDRA-1096
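The tie-break rule proposed above can be sketched as follows (hypothetical names, not the actual storage-engine code): compare by write timestamp first, and only fall back to the extra view-update-time when the timestamps are equal.

```java
public class ViewRowFreshness {
    // A view row's liveness marker: the write timestamp plus the proposed
    // per-row "view-update-time" (nowInSecond when the update was generated).
    record Marker(long timestamp, int viewUpdateTime) {}

    // Picks the newer of two markers: timestamp wins outright;
    // the view-update-time only breaks exact timestamp ties.
    static Marker newer(Marker a, Marker b) {
        if (a.timestamp() != b.timestamp())
            return a.timestamp() > b.timestamp() ? a : b;
        return a.viewUpdateTime() >= b.viewUpdateTime() ? a : b;
    }

    public static void main(String[] args) {
        Marker old = new Marker(5, 1000);
        Marker fresh = new Marker(5, 1001); // same timestamp, later update
        System.out.println(newer(old, fresh).viewUpdateTime());
    }
}
```

As noted, two updates can still land in the same second, so this shrinks rather than eliminates the ambiguity window.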


> Obsolete MV entry may not be properly deleted
> -
>
> Key: CASSANDRA-11500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11500
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>
> When a Materialized View uses a non-PK base table column in its PK, if an 
> update changes that column value, we add the new view entry and remove the 
> old one. When doing that removal, the current code uses the same timestamp 
> as for the liveness info of the new entry, which is the max timestamp of 
> any column participating in the view PK. This is not correct for the 
> deletion, as the old view entry could have other columns with a higher timestamp 
> which won't be deleted, as is easily shown by the failure of the following 
> test:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 4 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1;
> SELECT * FROM mv WHERE k = 1; // This currently returns 2 entries, the old 
> (invalid) and the new one
> {noformat}
> So the correct timestamp to use for the deletion is the biggest timestamp in 
> the old view entry (which we know since we read the pre-existing base row), 
> and that is what CASSANDRA-11475 does (the test above thus doesn't fail on 
> that branch).
> Unfortunately, even then we can still have problems if further updates 
> require us to override the old entry. Consider the following case:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; // This will delete the 
> entry for a=1 with timestamp 10
> UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; // This needs to re-insert 
> an entry for a=1 but shouldn't be deleted by the prior deletion
> UPDATE t USING TIMESTAMP 4 SET a = 2 WHERE k = 1; // ... and we can play this 
> game more than once
> UPDATE t USING TIMESTAMP 5 SET a = 1 WHERE k = 1;
> ...
> {noformat}
> In a way, this is saying that the "shadowable" deletion mechanism is not 
> general enough: we need to be able to re-insert an entry when a prior one had 
> been deleted before, but we can't rely on timestamps being strictly bigger on 
> the re-insert. In that sense, this can be thought of as a problem similar to 
> CASSANDRA-10965, though the solution there of a single flag is not enough 
> since we can have to replace more than once.
> I think the proper solution would be to ship enough information to always be 
> able to decide when a view deletion is shadowed. Which means that both 
> liveness info (for updates) and shadowable deletion would need to ship the 
> timestamp of any base table column that is part of the view PK (so {{a}} in the 
> example above). It's doable (and not that hard really), but it does require 
> a change to the sstable and intra-node protocol, which makes this a bit 
> painful right now.
> But I'll also note that as CASSANDRA-1096 shows, the timestamp is not even 
> enough since on equal timestamp the value can be the deciding factor. So in 
> theory we'd have to ship the value of those columns (in the case of a 
> deletion at least since we have it in the view PK for updates). That said, on 
> that last problem, my preference would be that we start prioritizing 
> CASSANDRA-6123 seriously so we don't have to care about conflicting timestamps 
> anymore, which would make this problem go away.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: 

[jira] [Commented] (CASSANDRA-13569) Schedule schema pulls just once per endpoint

2017-06-28 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066564#comment-16066564
 ] 

Stefan Podkowinski commented on CASSANDRA-13569:


The mentioned future for the delayed schema migration will only represent the 
status of the scheduled execution, but not if the schema pull itself was 
successful. A completed future simply indicates that the delayed execution took 
place. Afterwards we're free to try the next schema pull if we have to. 

The intention of this ticket is not to redesign any retry/timeout semantics or 
to reimplement the schema exchange protocol. That would need more careful 
thinking and should possibly be done in the context of CASSANDRA-10699.

> Schedule schema pulls just once per endpoint
> 
>
> Key: CASSANDRA-13569
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13569
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Schema mismatches detected through gossip will get resolved by calling 
> {{MigrationManager.maybeScheduleSchemaPull}}. This method may decide to 
> schedule execution of {{MigrationTask}}, but only after using a 
> {{MIGRATION_DELAY_IN_MS = 6}} delay (for reasons unclear to me). 
> Meanwhile, as long as the migration task hasn't been executed, we'll continue 
> to have schema mismatches reported by gossip and will have corresponding 
> {{maybeScheduleSchemaPull}} calls, which will schedule further tasks with the 
> mentioned delay. Some local testing shows that dozens of tasks for the same 
> endpoint will eventually be executed, causing the same stormy behavior 
> for these endpoints.
> My proposal would be to simply not schedule new tasks for the same endpoint, 
> in case we still have pending tasks waiting for execution after 
> {{MIGRATION_DELAY_IN_MS}}.
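The proposal above can be sketched as follows (hypothetical names, not the actual MigrationManager code): track endpoints with a pending pull and refuse to schedule a second task for the same endpoint until the first one has run.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchemaPullScheduler {
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();
    // Endpoints that already have a pull scheduled but not yet executed.
    private final Set<String> pending = ConcurrentHashMap.newKeySet();

    /** Returns true if a new pull was scheduled, false if one was already pending. */
    boolean maybeSchedulePull(String endpoint, Runnable pull, long delayMs) {
        if (!pending.add(endpoint))
            return false; // a task for this endpoint is already queued
        executor.schedule(() -> {
            try {
                pull.run();
            } finally {
                pending.remove(endpoint); // allow future pulls after execution
            }
        }, delayMs, TimeUnit.MILLISECONDS);
        return true;
    }
}
```

Repeated gossip mismatch reports within the delay window then become no-ops instead of piling up duplicate tasks for the same endpoint.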



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra-builds git commit: Add package signing instructions

2017-06-28 Thread spod
Repository: cassandra-builds
Updated Branches:
  refs/heads/master 311046bf0 -> b15c7c055


Add package signing instructions


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/b15c7c05
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/b15c7c05
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/b15c7c05

Branch: refs/heads/master
Commit: b15c7c055226603790a2c5d57bd51516f8758db4
Parents: 311046b
Author: Stefan Podkowinski 
Authored: Wed Jun 28 15:50:59 2017 +0200
Committer: Stefan Podkowinski 
Committed: Wed Jun 28 15:52:45 2017 +0200

--
 README.md | 30 ++
 1 file changed, 30 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/b15c7c05/README.md
--
diff --git a/README.md b/README.md
index 3b77fdd..8bb85ee 100644
--- a/README.md
+++ b/README.md
@@ -30,6 +30,36 @@ Packages for official releases can only be build from tags. 
In this case, the ta
 
 Builds based on any branch will use the version defined in either `build.xml` 
(RPM) or `debian/changes` (deb). Afterwards a snapshot indicator will be 
appended.
 
+##  Signing packages
+
+### RPM
+
+Signatures can be used for both yum repository integrity protection and 
end-to-end package verification.
+
+Providing a signature 
([repomd.xml.asc](https://www.apache.org/dist/cassandra/redhat/311x/repodata/repomd.xml.asc))
 for 
[repomd.xml](https://www.apache.org/dist/cassandra/redhat/311x/repodata/repomd.xml)
 allows clients to verify the repository's meta-data, as enabled by 
`repo_gpgcheck=1` in the yum config.
+
+Individual package files can also contain a signature in the RPM header. This 
can be done either during the build process (`rpmbuild --sign`) or afterwards 
on the final artifact. As the RPMs should be built using Docker without any 
user intervention, we have to go with the latter option here. One solution for 
this is to use the rpmsign wrapper (`yum install rpm-sign`) and use it on the 
package, e.g.:
+```rpmsign -D '%_gpg_name MyAlias' --addsign cassandra-3.0.13-1.noarch.rpm```
+
+Verifying package signatures requires importing the public keys first:
+
+```
+rpm --import https://www.apache.org/dist/cassandra/KEYS
+```
+
+Afterwards the following command should report "OK" for included hashes and 
gpg signatures:
+
+```
+rpm -K cassandra-3.0.13-1.noarch.rpm
+```
+
+Once the RPM is signed, both the key import and verification steps should take 
place automatically during installation from the yum repo (see `gpgcheck=1`).
+
+### Debian
+
+See use of `debsign` in `cassandra-release/prepare_release.sh`.
+
+
 ## Publishing packages
 
 TODO


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13433) RPM distribution improvements and known issues

2017-06-28 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066455#comment-16066455
 ] 

Michael Shuler commented on CASSANDRA-13433:


Cassandra requires Python 2.7. The earliest RHEL/CentOS version that Python 2.7 
is available to satisfy that dependency is RHEL/CentOS 7.0, so that is the 
distribution that we build the packages on.

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages.  While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the first 
> time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148; this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-28 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066434#comment-16066434
 ] 

ZhaoYang commented on CASSANDRA-13547:
--

2. Incorrect non-pk tombstones

{quote}
But, when inserting/updating, computeLivenessInfoForEntry() uses the biggest 
timestamp of the primary keys for liveness info.
This causes non-pk columns to be treated as deleted because view tombstones 
have higher timestamps than the live cells from the base row.
{quote}

Using the greatest timestamp among the view's columns (pk + non-pk) in the 
base row will later shadow the entire row in the view if a regular base column 
is used as a primary key column in the view. 

{code}
CREATE TABLE test (a int, b int, c int, d int, PRIMARY KEY (a));
CREATE MATERIALIZED VIEW mv_test1 AS SELECT * FROM test
    WHERE a IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b);
INSERT INTO test (a, b, c, d) VALUES (1, 1, 1, 1) USING TIMESTAMP 0;
UPDATE test USING TIMESTAMP 5 SET c = 0 WHERE a = 1;
UPDATE test USING TIMESTAMP 1 SET b = 0 WHERE a = 1;
UPDATE test USING TIMESTAMP 2 SET b = 1 WHERE a = 1;
SELECT * FROM mv_test1; -- no data
{code}


> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>'class' : 'SimpleStrategy', 
>'replication_factor' : 1 
>   };
> CREATE TABLE test.table1 (
> id int,
> name text,
> enabled boolean,
> foo text,
> PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated 
> appropriately. (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', 
> TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
>   One |  1 | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
>   One |  1 |True | Bar
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will 
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |   False | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
> (0 rows)
> {code}
> However, a further update to the base table setting enabled back to TRUE 
> should include the record in both materialized views, yet only one view 
> (table1_mv2) gets updated. (-)
> It appears that only the view (table1_mv2) that returns the filtered column 
> (enabled) is updated. (-)
> Additionally, columns that are not part of the partition or clustering key 
> are not updated. You can see that the foo column has a null value in 
> table1_mv2. (-)
> {code:title=Enable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+--
>   One |  1 |True | null
> (1 rows)
> {code}

[jira] [Comment Edited] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-28 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066310#comment-16066310
 ] 

Amitkumar Ghatwal edited comment on CASSANDRA-13581 at 6/28/17 11:41 AM:
-

I was referring to the plugins (*.jar files) here: 
https://github.com/hhorii/capi-rowcache/releases. The write-up about the 
plugin's usage, build, and run steps is here: 
https://github.com/hhorii/capi-rowcache. Forgive me if I am misunderstanding 
your comments. Please let me know!

Stefan Podkowinski added a comment - 13/Jun/17 09:02

Still waiting for the actual content for the plugins page, including a link to 
the CAPI project pages.



was (Author: amitkumar_ghatwal):
I was referring to the plugins ( *.jars) here : 
https://github.com/hhorii/capi-rowcache/releases and write up about the 
plugin/usage/build/run is here : https://github.com/hhorii/capi-rowcache . 
Forgive me ,if i am misunderstanding your comments . Let me know please !!!

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments and if I have not done things correctly to 
> make changes to Cassandra's website I can rectify the same.






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-28 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066310#comment-16066310
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13581:
---

I was referring to the plugins (*.jar files) here: 
https://github.com/hhorii/capi-rowcache/releases. The write-up about the 
plugin's usage, build, and run steps is here: 
https://github.com/hhorii/capi-rowcache. Forgive me if I am misunderstanding 
your comments. Please let me know!

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments and if I have not done things correctly to 
> make changes to Cassandra's website I can rectify the same.






[jira] [Commented] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-28 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066298#comment-16066298
 ] 

ZhaoYang commented on CASSANDRA-13547:
--

Just my opinion:

1. Missing Update.

If we force the user to SELECT a restricted column in the view, it will work. 
It may have some compatibility issues, since users could create such views 
before the patch. It will also cost extra space to store a value that is 
already known from ViewMetadata.selection.whereClause.

By changing the filtering logic to check whether there are restricted columns 
in the view, and generating the corresponding view updates, we can save some 
space in the view tables.

| With this we can also make sure that ALTER TABLE does not drop a column that 
is used in the view.

This could also be prevented by checking ViewMetadata.



> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>'class' : 'SimpleStrategy', 
>'replication_factor' : 1 
>   };
> CREATE TABLE test.table1 (
> id int,
> name text,
> enabled boolean,
> foo text,
> PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated 
> appropriately. (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', 
> TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
>   One |  1 | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
>   One |  1 |True | Bar
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will 
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |   False | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
> (0 rows)
> {code}
> However, a further update to the base table setting enabled back to TRUE 
> should include the record in both materialized views, yet only one view 
> (table1_mv2) gets updated. (-)
> It appears that only the view (table1_mv2) that returns the filtered column 
> (enabled) is updated. (-)
> Additionally, columns that are not part of the partition or clustering key 
> are not updated. You can see that the foo column has a null value in 
> table1_mv2. (-)
> {code:title=Enable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+--
>   One |  1 |True | null
> (1 rows)
> {code}






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-28 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066272#comment-16066272
 ] 

Stefan Podkowinski commented on CASSANDRA-13581:


I don't understand why you want to link to an empty page. That doesn't make 
sense to me.

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments and if I have not done things correctly to 
> make changes to Cassandra's website I can rectify the same.






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-28 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066250#comment-16066250
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13581:
---

Any updates on the above, [~jjirsa], [~spo...@gmail.com]?

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments and if I have not done things correctly to 
> make changes to Cassandra's website I can rectify the same.






[jira] [Comment Edited] (CASSANDRA-13433) RPM distribution improvements and known issues

2017-06-28 Thread regis le bretonnic (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066177#comment-16066177
 ] 

regis le bretonnic edited comment on CASSANDRA-13433 at 6/28/17 9:09 AM:
-

Hi

First, thanks a lot for providing rpm packages of Cassandra.
We just upgraded our cluster from 3.0.13 to 3.0.14... Unless my analysis is 
wrong, be aware that this release requires glibc 2.14, which is not compatible 
with CentOS 6. Maybe you should not build a noarch rpm but an el7 one...

Regards


was (Author: easyoups):
Hi

First thanks a lot for providing rpm packages of cassandra.
We just upgrade our cluster from 3.0.13 to 3.0.14... Except if I make a wrong 
analysis, take care that this release requires glibc 2.14 which is not 
compatible with centos 6. Maybe you should not make a noarch rpm but a el7 
one...

Regards

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages.  While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.






[jira] [Commented] (CASSANDRA-13433) RPM distribution improvements and known issues

2017-06-28 Thread regis le bretonnic (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066177#comment-16066177
 ] 

regis le bretonnic commented on CASSANDRA-13433:


Hi

First, thanks a lot for providing rpm packages of Cassandra.
We just upgraded our cluster from 3.0.13 to 3.0.14... Unless my analysis is 
wrong, be aware that this release requires glibc 2.14, which is not compatible 
with CentOS 6. Maybe you should not build a noarch rpm but an el7 one...

Regards

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages.  While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.






[jira] [Updated] (CASSANDRA-13598) Started & Completed repair metrics

2017-06-28 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13598:
---
Reviewer: Stefan Podkowinski

> Started & Completed repair metrics
> --
>
> Key: CASSANDRA-13598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13598
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Cameron Zemek
>Assignee: Cameron Zemek
>Priority: Minor
>  Labels: repair
> Fix For: 4.x
>
>
> There are no metrics to monitor repairs run as coordinator. A number of 
> metrics were added with CASSANDRA-13531, but they did not include metrics to 
> monitor whether a repair is running or how many repairs have run.
> |4.x|[patch|https://github.com/apache/cassandra/compare/instaclustr:trunk...instaclustr:13598-4.x]|
> |3.11|[patch|https://github.com/instaclustr/cassandra/compare/cassandra-3.11...instaclustr:13598-3.11]|
> |3.0|[patch|https://github.com/instaclustr/cassandra/compare/cassandra-3.0...instaclustr:13598-3.0]|
> |2.2|[patch|https://github.com/instaclustr/cassandra/compare/cassandra-2.2...instaclustr:13598-2.2]|






[jira] [Commented] (CASSANDRA-13598) Started & Completed repair metrics

2017-06-28 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066153#comment-16066153
 ] 

Stefan Podkowinski commented on CASSANDRA-13598:



||trunk||3.11||3.0||
|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13598-trunk]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13598-3.11]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13598-3.0]|
|[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/106]|[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/105]|[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/104]|
|[testall|https://circleci.com/gh/spodkowinski/cassandra/65]|[testall|https://circleci.com/gh/spodkowinski/cassandra/66]|[testall|https://circleci.com/gh/spodkowinski/cassandra/64]|


> Started & Completed repair metrics
> --
>
> Key: CASSANDRA-13598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13598
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Cameron Zemek
>Assignee: Cameron Zemek
>Priority: Minor
>  Labels: repair
> Fix For: 4.x
>
>
> There are no metrics to monitor repairs run as coordinator. A number of 
> metrics were added with CASSANDRA-13531, but they did not include metrics to 
> monitor whether a repair is running or how many repairs have run.
> |4.x|[patch|https://github.com/apache/cassandra/compare/instaclustr:trunk...instaclustr:13598-4.x]|
> |3.11|[patch|https://github.com/instaclustr/cassandra/compare/cassandra-3.11...instaclustr:13598-3.11]|
> |3.0|[patch|https://github.com/instaclustr/cassandra/compare/cassandra-3.0...instaclustr:13598-3.0]|
> |2.2|[patch|https://github.com/instaclustr/cassandra/compare/cassandra-2.2...instaclustr:13598-2.2]|






[jira] [Updated] (CASSANDRA-13641) Properly evict pstmts from prepared statements cache

2017-06-28 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13641:
---
Status: Ready to Commit  (was: Patch Available)

> Properly evict pstmts from prepared statements cache
> 
>
> Key: CASSANDRA-13641
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13641
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.11.x
>
>
> Prepared statements that are evicted from the prepared statements cache are 
> not removed from the underlying table {{system.prepared_statements}}. This 
> can lead to issues during startup.






[jira] [Commented] (CASSANDRA-13641) Properly evict pstmts from prepared statements cache

2017-06-28 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066064#comment-16066064
 ] 

Benjamin Lerer commented on CASSANDRA-13641:


Thanks for the patches, they look good to me.

> Properly evict pstmts from prepared statements cache
> 
>
> Key: CASSANDRA-13641
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13641
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.11.x
>
>
> Prepared statements that are evicted from the prepared statements cache are 
> not removed from the underlying table {{system.prepared_statements}}. This 
> can lead to issues during startup.


