[jira] [Comment Edited] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-07-06 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076258#comment-16076258
 ] 

Amitkumar Ghatwal edited comment on CASSANDRA-13615 at 7/7/17 5:52 AM:
---

Hi [~jjirsa] - can you create a Jenkins job on ppc64le to validate the creation 
of "libsigar-ppc64le-linux.so" using the steps below?

$ git clone https://github.com/hyperic/sigar.git
$ git checkout sigar-1.6.4
$ cd sigar/bindings/java
$ ant
$ ls -l sigar-bin/lib
total 740
-rw-r--r-- 1 root root   1127 Jun 16 05:03 history.xml
-rw-r--r-- 1 root root 313128 Jun 16 05:03 libsigar-ppc64le-linux.so
-rw-r--r-- 1 root root 435772 Jun 16 05:03 sigar.jar

The included version of SIGAR (sigar-1.6.4.jar) does not support ppc64le, so the newly built library is copied into place:
$ cp libsigar-ppc64le-linux.so $CASSANDRA_HOME/lib/sigar-bin
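As a quick sanity check after the copy (an editor's illustrative snippet, not part of the ticket), a tiny Java program can print the architecture string the JVM reports and the platform-specific file name it derives for a native library stem:

```java
public class SigarLibCheck {
    public static void main(String[] args) {
        // On a POWER8 little-endian JVM this prints "ppc64le".
        System.out.println("os.arch = " + System.getProperty("os.arch"));

        // On Linux, mapLibraryName prepends "lib" and appends ".so",
        // i.e. "sigar-ppc64le-linux" -> "libsigar-ppc64le-linux.so".
        System.out.println(System.mapLibraryName("sigar-ppc64le-linux"));
    }
}
```

If the printed name matches the file dropped into $CASSANDRA_HOME/lib/sigar-bin, the naming on the JVM side is consistent.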





> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so ourselves. As the upstream community has been 
> inactive for a long time (https://github.com/hyperic/sigar), we request that 
> the ppc64le library be included directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues/dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-07-06 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055118#comment-16055118
 ] 

Amitkumar Ghatwal edited comment on CASSANDRA-13615 at 7/7/17 5:51 AM:
---

[~mshuler] [~jjirsa] - Apologies for not providing the steps to create the *.so 
file earlier; I missed that part.
$ git clone https://github.com/hyperic/sigar.git
$ git checkout sigar-1.6.4
$ cd sigar/bindings/java
$ ant
$ ls -l sigar-bin/lib
total 740
-rw-r--r-- 1 root root   1127 Jun 16 05:03 history.xml
-rw-r--r-- 1 root root 313128 Jun 16 05:03 libsigar-ppc64le-linux.so
-rw-r--r-- 1 root root 435772 Jun 16 05:03 sigar.jar

Since the repo https://github.com/hyperic/sigar seems to have been inactive for 
a long time, I was wondering whether this ppc64le support could be added to 
"lib/sigar-bin" in the Cassandra mainline.



was (Author: amitkumar_ghatwal):
[~mshuler] [~jjirsa] - Apologies for not providing the steps to create the *.so 
file earlier; I missed that part.
$ git clone https://github.com/hyperic/sigar.git 
$ cd sigar/bindings/java
$ ant
$ ls -l sigar-bin/lib
total 740
-rw-r--r-- 1 root root   1127 Jun 16 05:03 history.xml
-rw-r--r-- 1 root root 313128 Jun 16 05:03 libsigar-ppc64le-linux.so
-rw-r--r-- 1 root root 435772 Jun 16 05:03 sigar.jar

Since the repo https://github.com/hyperic/sigar seems to have been inactive for 
a long time, I was wondering whether this ppc64le support could be added to 
"lib/sigar-bin" in the Cassandra mainline.





[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077630#comment-16077630
 ] 

ASF GitHub Bot commented on CASSANDRA-13652:


GitHub user Fuud opened a pull request:

https://github.com/apache/cassandra/pull/129

CASSANDRA-13652: Deadlock in AbstractCompactionManager

PR with the same result as #127.
Instead of the small fix in #127, this PR contains a refactoring to make the 
AbstractCommitLogSegmentManager code clearer. 
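The race at the heart of this ticket, an unpark permit granted by wakeManager being consumed by an unrelated park (e.g. inside a logging framework's lock) before the manager's own park, can be demonstrated in isolation. This is an editor's illustrative sketch, not code from the PR; the real bug blocks indefinitely, so the sketch uses timed parks to stay terminating:

```java
import java.util.concurrent.locks.LockSupport;

public class LostUnparkDemo {
    public static void main(String[] args) {
        Thread manager = Thread.currentThread();

        // wakeManager() grants the manager thread one permit...
        LockSupport.unpark(manager);

        // ...but unrelated code (e.g. a ReadWriteLock inside a logging
        // framework) parks first and silently consumes that permit:
        long t0 = System.nanoTime();
        LockSupport.parkNanos(200_000_000L);               // returns immediately
        long firstMs = (System.nanoTime() - t0) / 1_000_000;

        // The manager's own park now finds no permit. In the real bug it
        // blocks forever; here a deadline (and a loop guarding against
        // spurious wakeups) keeps the demo terminating.
        t0 = System.nanoTime();
        long deadline = t0 + 200_000_000L;
        while (System.nanoTime() < deadline)
            LockSupport.parkNanos(deadline - System.nanoTime());
        long secondMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("first park blocked ~" + firstMs + " ms (permit was available)");
        System.out.println("second park blocked ~" + secondMs + " ms (permit already consumed)");
    }
}
```

The first park returns almost immediately because the permit is pending; the second waits out its full deadline, which is the "sleep forever" scenario in miniature.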

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Fuud/cassandra commitlog_deadlock_v2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cassandra/pull/129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #129


commit e1a695874dc24e532ae21ef627e852bf999a75f3
Author: Fedor Bobin 
Date:   2017-07-07T05:37:22Z

CASSANDRA-13652: Deadlock in AbstractCompactionManager




> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread is parked at the 
> appropriate place. For example, logging frameworks use queues, and queues use 
> ReadWriteLocks, which use LockSupport. Therefore 
> AbstractCommitLogManager.wakeManager can wake the thread while it is inside a 
> lock, and the manager thread will then sleep forever at the park() method 
> (because the unpark permit was already consumed inside the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> 

[jira] [Commented] (CASSANDRA-13561) Purge TTL on expiration

2017-07-06 Thread Andrew Whang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077291#comment-16077291
 ] 

Andrew Whang commented on CASSANDRA-13561:
--

Correct me if I'm wrong, but the limitations I see with CASSANDRA-13418 are:
1. It's tied to a compaction strategy, and is currently only supported in TWCS.
2. It helps clean up sstables, but only when all items in the sstable have 
expired. 

The approach I'm proposing would support dropping individual (expired) cells 
within an sstable, without having to wait for all the other items to expire.
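A rough sketch of the per-cell decision this proposes; the types and names here are illustrative only, not Cassandra's actual compaction code:

```java
public class ExpiredCellPurge {
    public static final class Cell {
        public final long localDeletionTime; // seconds since epoch at which the cell expires
        public Cell(long localDeletionTime) { this.localDeletionTime = localDeletionTime; }
        public boolean isExpired(long nowInSec) { return nowInSec >= localDeletionTime; }
    }

    /** Returns null to drop the cell outright; otherwise the cell is kept. */
    public static Cell onCompaction(Cell c, long nowInSec, boolean purgeTtlOnExpiration) {
        if (purgeTtlOnExpiration && c.isExpired(nowInSec))
            return null;  // purged on expiration: never written out as a tombstone
        return c;         // default behaviour: an expired cell later surfaces as a tombstone
    }
}
```

Because the check is per cell at compaction time, it does not depend on the whole sstable having expired, which is the difference from the CASSANDRA-13418 approach described above.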

> Purge TTL on expiration
> ---
>
> Key: CASSANDRA-13561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13561
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Andrew Whang
>Assignee: Andrew Whang
>Priority: Minor
> Fix For: 4.0
>
>
> Tables with mostly TTL columns tend to suffer from high droppable tombstone 
> ratio, which results in higher read latency, cpu utilization, and disk usage. 
> Expired TTL data become tombstones, and the nature of purging tombstones 
> during compaction (due to checking for overlapping SSTables) makes them 
> susceptible to surviving much longer than expected. A table option to purge 
> TTL on expiration would address this issue, by preventing them from becoming 
> tombstones. A boolean purge_ttl_on_expiration table setting would allow users 
> to easily turn the feature on or off. 
> Being more aggressive with gc_grace could also address the problem of long 
> lasting tombstones, but that would affect tombstones from deletes as well. 
> Even if a purged [expired] cell is revived via repair from a node that hasn't 
> yet compacted away the cell, it would be revived as an expiring cell with the 
> same localDeletionTime, so reads should properly handle them. As well, it 
> would be purged in the next compaction. 






[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2017-07-06 Thread Xiaolong Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077257#comment-16077257
 ] 

Xiaolong Jiang commented on CASSANDRA-10726:


1. I will change isQuorum to satisfiesQuorumFor and add unit tests. I'm not sure 
about your suggestion "satisfiedQuorumFor(int quorum)", though. I will mock the 
keyspace and do the unit test.
2. I will remove FBUtilities#waitOnFuturesNanos.
3. I will change the code to wait a maximum of timeToWaitNanos for all responses 
combined, instead of for each one.
4. I do have a test covering the read repair response from the second node: 
testResolveOneReadRepairRetry in DataResolverTest. It doesn't check the response 
directly; it makes sure the correct data is sent to peer4. (The response is 
actually mocked by calling resolver.preprocess, which is meaningless; we only 
need to make sure the correct data is resent to peer4.)
5. Hmm, it builds in my personal CASSANDRA-10726 branch. I will remove the 
"final" keyword. 
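Point 3 above, a single overall deadline shared by all responses rather than a full timeToWaitNanos per response, can be sketched as follows; the names are illustrative, not Cassandra's actual API:

```java
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DeadlineWait {
    /**
     * Waits for every future, but the total wait across all of them is
     * bounded by timeToWaitNanos (instead of timeToWaitNanos per future).
     */
    public static void awaitAll(List<? extends Future<?>> futures, long timeToWaitNanos)
            throws InterruptedException, ExecutionException, TimeoutException {
        long deadline = System.nanoTime() + timeToWaitNanos;
        for (Future<?> f : futures) {
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0)
                throw new TimeoutException("shared deadline exceeded");
            f.get(remaining, TimeUnit.NANOSECONDS); // consumes only what is left
        }
    }
}
```

With a per-future timeout, N slow responses could stall the caller for N × timeToWaitNanos; the shared deadline caps the worst case at a single timeToWaitNanos.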

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Xiaolong Jiang
> Fix For: 3.0.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.






[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2017-07-06 Thread Xiaolong Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077236#comment-16077236
 ] 

Xiaolong Jiang commented on CASSANDRA-10726:


[~bdeggleston] Thanks for taking the time to review. 
The reason I need responseCntSnapshot is that sources.length != 
responseCntSnapshot because of speculative read retry: when a read is slow, 
Cassandra will try the read on one more host.




[jira] [Updated] (CASSANDRA-3200) Repair: compare all trees together (for a given range/cf) instead of by pair in isolation

2017-07-06 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-3200:
---
Reviewer: Blake Eggleston

> Repair: compare all trees together (for a given range/cf) instead of by pair 
> in isolation
> -
>
> Key: CASSANDRA-3200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3200
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: repair
> Fix For: 4.x
>
>
> Currently, repair compares merkle trees by pair, in isolation from any other 
> tree. Concretely, that means that if I have three nodes A, B and C 
> (RF=3) with A and B in sync, but C having some range r inconsistent with both 
> A and B (which are consistent with each other), we will do the following 
> transfers of r: A -> C, C -> A, B -> C, C -> B.
> The fact that we do both A -> C and C -> A is fine, because we cannot know 
> which of A or C is more up to date. However, the transfer B -> C is 
> useless provided we do A -> C, since A and B are in sync. Not doing that 
> transfer would be a 25% improvement in that case. With RF=5 and only one node 
> inconsistent with all the others, that is almost a 40% improvement, etc.
> Given that the situation of one node being out of sync while the others agree 
> is probably fairly common (one node died, so it is behind), this could be a 
> fair improvement in what is transferred. In the case where we use repair to 
> completely rebuild a node, this will be a dramatic improvement, because it 
> will avoid the rebuilt node receiving RF times the data it should get.






[jira] [Updated] (CASSANDRA-5901) Bootstrap should also make the data consistent on the new node

2017-07-06 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-5901:
---
Reviewer: Blake Eggleston

> Bootstrap should also make the data consistent on the new node
> --
>
> Key: CASSANDRA-5901
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5901
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> Currently, when we are bootstrapping a new node, it might bootstrap from a 
> node which does not have the most up-to-date data. Because of this, we need to 
> run a repair after that.
> Most people will always run the repair, so it would help if we could provide a 
> parameter to bootstrap that runs the repair once the bootstrap has finished. 
> It could also stop the node from responding to reads until the repair has 
> finished. This could be another parameter as well. 






[jira] [Commented] (CASSANDRA-13166) Test case failures encountered on ppc64le

2017-07-06 Thread Jason Wee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077147#comment-16077147
 ] 

Jason Wee commented on CASSANDRA-13166:
---

I got similar errors, as follows.
{noformat}
build-project:
 [echo] apache-cassandra: 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/build.xml
[javac] Compiling 760 source files to 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/build/classes/main
[javac] Note: Processing compiler hints annotations
[javac] Note: Processing compiler hints annotations
[javac] Note: Writing compiler command file at META-INF/hotspot_compiler
[javac] Note: Done processing compiler hints annotations
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/db/ColumnFamilyStore.java:1802:
 error: reference to SchemaConstants is ambiguous
[javac] if 
(!SchemaConstants.SYSTEM_KEYSPACE_NAMES.contains(metadata.keyspace) && 
!SchemaConstants.REPLICATED_SYSTEM_KEYSPACE_NAMES.contains(metadata.keyspace))
[javac]  ^
[javac]   both class org.apache.cassandra.schema.SchemaConstants in 
org.apache.cassandra.schema and class 
org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/db/ColumnFamilyStore.java:1802:
 error: reference to SchemaConstants is ambiguous
[javac] if 
(!SchemaConstants.SYSTEM_KEYSPACE_NAMES.contains(metadata.keyspace) && 
!SchemaConstants.REPLICATED_SYSTEM_KEYSPACE_NAMES.contains(metadata.keyspace))
[javac] 
   ^
[javac]   both class org.apache.cassandra.schema.SchemaConstants in 
org.apache.cassandra.schema and class 
org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/db/ColumnFamilyStore.java:2100:
 error: reference to Schema is ambiguous
[javac] List stores = new 
ArrayList(Schema.instance.getKeyspaces().size());
[javac] 
  ^
[javac]   both class org.apache.cassandra.schema.Schema in 
org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/db/ColumnFamilyStore.java:2619:
 error: reference to Schema is ambiguous
[javac] TableMetadata metadata = 
Schema.instance.getTableMetadata(id);
[javac]  ^
[javac]   both class org.apache.cassandra.schema.Schema in 
org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/db/ColumnFamilyStore.java:2645:
 error: reference to Schema is ambiguous
[javac] TableMetadata table = 
Schema.instance.getTableMetadata(ksName, cfName);
[javac]   ^
[javac]   both class org.apache.cassandra.schema.Schema in 
org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:210:
 error: reference to Schema is ambiguous
[javac] KeyspaceMetadata ksm = 
Schema.instance.getKeyspaceMetadata(keyspace());
[javac]^
[javac]   both class org.apache.cassandra.schema.Schema in 
org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:221:
 error: reference to SchemaConstants is ambiguous
[javac] if (columnFamily().length() > 
SchemaConstants.NAME_LENGTH)
[javac]   ^
[javac]   both class org.apache.cassandra.schema.SchemaConstants in 
org.apache.cassandra.schema and class 
org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config match
[javac] 
/home/jenkins/home/jobs/cassandra-unit-test-short/workspace/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:222:
 error: reference to SchemaConstants is ambiguous
[javac] throw new 
InvalidRequestException(String.format("Table names shouldn't be more than %s 
characters long (got \"%s\")", 
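The errors above arise because the same simple names (Schema, SchemaConstants) exist in both org.apache.cassandra.schema and org.apache.cassandra.config, and the failing files wildcard-import both packages. A self-contained illustration of the same javac behaviour, using two JDK packages that both define a class named List (hypothetical file Ambiguous.java, not Cassandra code):

```java
import java.util.*;  // defines List
import java.awt.*;   // also defines List

public class Ambiguous {
    public static void main(String[] args) {
        // With both wildcard imports in scope, the bare name is rejected:
        //   List<String> xs = new ArrayList<>();  // error: reference to List is ambiguous
        // Fully qualifying the name (or using a single-type import,
        // which always wins over on-demand imports) resolves it:
        java.util.List<String> xs = new ArrayList<>();
        xs.add("ok");
        System.out.println(xs.size());
    }
}
```

The same two fixes apply to the files in the log: fully qualify the ambiguous references, or replace one of the wildcard imports with single-type imports.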

[jira] [Updated] (CASSANDRA-13678) Misconfiguration in cassandra-env.sh does not report any errors.

2017-07-06 Thread Lucas Benevides Dias (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Benevides Dias updated CASSANDRA-13678:
-
Remaining Estimate: 2h
 Original Estimate: 2h
   Description: 
I once misconfigured the file /etc/cassandra/cassandra-env.sh: I set 
MAX_HEAP_SIZE="1500M" but left HEAP_NEWSIZE="350M" commented out. I now know 
that if you set one, you MUST also set the other, but even though the comments 
in the file say so, some error message should be shown to the user.

When this happened, I looked in /var/log/cassandra/system.log and debug.log, 
and neither contained any line about the error. The command 
{{# service cassandra start}} simply produces no output.

There should be a warning in the logs, and perhaps on stdout, that the 
cassandra-env.sh parameters could not be interpreted.

I attached the file in which the error happens.


  was:
Once I have misconfigured the file /etc/cassandra/cassandra-env.sh. I changed 
the MAX_HEAP_SIZE="1500M" and did not change the HEAP_NEWSIZE="350M", letting 
it commented. Now I know that if you change one, you MUST change the other, but 
even knowing that it is written in the file in comments, I think some error 
message should appear to the user.

When it occured to me, I looked in the files /var/log/cassandra/system.log and 
debug.log and there wasn't any line telling about the error. The command {{ 
#service cassandra start  }} is just responseless.

There should be some warning in the logs and maybe in the stdout, warning that 
the interpretation of the cassandra-env.sh parameters was wrong.




> Misconfiguration in cassandra-env.sh does not report any errors.
> 
>
> Key: CASSANDRA-13678
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13678
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
> Environment: Cassandra 3.10 at Ubuntu 15.04.
>Reporter: Lucas Benevides Dias
>Priority: Trivial
>  Labels: error-feedback, usability
> Attachments: cassandra-env.sh
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> I once misconfigured the file /etc/cassandra/cassandra-env.sh: I set 
> MAX_HEAP_SIZE="1500M" but left HEAP_NEWSIZE="350M" commented out. I now know 
> that if you set one, you MUST also set the other, but even though the 
> comments in the file say so, some error message should be shown to the user.
> When this happened, I looked in /var/log/cassandra/system.log and debug.log, 
> and neither contained any line about the error. The command 
> {{# service cassandra start}} simply produces no output.
> There should be a warning in the logs, and perhaps on stdout, that the 
> cassandra-env.sh parameters could not be interpreted.
> I attached the file in which the error happens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13678) Misconfiguration in cassandra-env.sh does not report any errors.

2017-07-06 Thread Lucas Benevides Dias (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Benevides Dias updated CASSANDRA-13678:
-
   Attachment: cassandra-env.sh
  Environment: Cassandra 3.10 at Ubuntu 15.04.
Reproduced In: 3.10
   Labels: error-feedback usability  (was: )
 Priority: Trivial  (was: Major)
  Description: 
I once misconfigured the file /etc/cassandra/cassandra-env.sh: I set 
MAX_HEAP_SIZE="1500M" but left HEAP_NEWSIZE="350M" commented out. I now know 
that if you set one, you MUST also set the other, but even though the comments 
in the file say so, some error message should be shown to the user.

When this happened, I looked in /var/log/cassandra/system.log and debug.log, 
and neither contained any line about the error. The command 
{{# service cassandra start}} simply produces no output.

There should be a warning in the logs, and perhaps on stdout, that the 
cassandra-env.sh parameters could not be interpreted.


  Component/s: Configuration
   Issue Type: Wish  (was: Bug)

> Misconfiguration in cassandra-env.sh does not report any errors.
> 
>
> Key: CASSANDRA-13678
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13678
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
> Environment: Cassandra 3.10 at Ubuntu 15.04.
>Reporter: Lucas Benevides Dias
>Priority: Trivial
>  Labels: error-feedback, usability
> Attachments: cassandra-env.sh
>
>
> I once misconfigured the file /etc/cassandra/cassandra-env.sh: I set 
> MAX_HEAP_SIZE="1500M" but left HEAP_NEWSIZE="350M" commented out. I now know 
> that if you set one, you MUST also set the other, but even though the 
> comments in the file say so, some error message should be shown to the user.
> When this happened, I looked in /var/log/cassandra/system.log and debug.log, 
> and neither contained any line about the error. The command 
> {{# service cassandra start}} simply produces no output.
> There should be a warning in the logs, and perhaps on stdout, that the 
> cassandra-env.sh parameters could not be interpreted.






[jira] [Created] (CASSANDRA-13678) Misconfiguration in cassandra-env.sh does not report any errors.

2017-07-06 Thread Lucas Benevides Dias (JIRA)
Lucas Benevides Dias created CASSANDRA-13678:


 Summary: Misconfiguration in cassandra-env.sh does not report any 
errors.
 Key: CASSANDRA-13678
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13678
 Project: Cassandra
  Issue Type: Bug
Reporter: Lucas Benevides Dias









[jira] [Updated] (CASSANDRA-1704) CQL reads (aka SELECT)

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-1704:

Component/s: (was: Materialized Views)

> CQL reads (aka SELECT)
> --
>
> Key: CASSANDRA-1704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1704
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Affects Versions: 0.8 beta 1
>Reporter: Eric Evans
>Assignee: Eric Evans
>Priority: Minor
> Fix For: 0.8 beta 1
>
> Attachments: 
> ASF.LICENSE.NOT.GRANTED--v3-0001-CASSANDRA-1704.-doc-update-for-proposed-SELECT.txt,
>  ASF.LICENSE.NOT.GRANTED--v3-0002-refactor-CQL-SELECT-to-be-more-SQLish.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0003-make-avro-exception-factory-methods-public.txt,
>  
> ASF.LICENSE.NOT.GRANTED--v3-0004-wrap-AvroRemoteExceptions-in-CQLExcpetions.txt,
>  ASF.LICENSE.NOT.GRANTED--v3-0005-backfill-missing-system-tests.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0006-add-support-for-index-scans.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0007-support-empty-unset-where-clause.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0008-SELECT-COUNT-.-FROM-support.txt
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> Data access specification and implementation for CQL.  
> This corresponds to the following RPC methods:
> * get()
> * get_slice()
> * get_count()
> * multiget_slice()
> * multiget_count()
> * get_range_slices()
> * get_indexed_slices()
> The initial check-in to trunk/ uses a syntax that looks like:
> {code:SQL}
> SELECT (FROM)? <CF> [USING CONSISTENCY.<LVL>] WHERE <EXPRESSION> [ROWLIMIT X] 
> [COLLIMIT Y] [ASC|DESC]
> {code}
> Where:
> * <CF> is the column family name.
> * <EXPRESSION> consists of relations chained by the AND keyword.
> * <LVL> corresponds to one of the enum values in the RPC interface(s).
> What is still undone:
> * Support for indexes
> * Counts
> * Complete test coverage
> And of course, all of this is still very much open to further discussion.






[jira] [Updated] (CASSANDRA-4920) Add collation semantics to abstract type to provide standard sort order for Strings

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-4920:

Component/s: (was: Materialized Views)

> Add collation semantics to abstract type to provide standard sort order for 
> Strings
> ---
>
> Key: CASSANDRA-4920
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4920
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Affects Versions: 1.2.0 beta 1
>Reporter: Sidharth
>Priority: Minor
>  Labels: cassandra
>
> Adding a way to sort UTF8 values by the collation semantics described below 
> could be useful.
> Use case: say you have wide rows where you cannot use Cassandra's standard 
> (secondary/primary) indexes. Suppose each column holds a string value that is 
> either alphanumeric or purely numeric, and you want an index by value. More 
> specifically, you want to slice a range over a bunch of column values and say 
> "get me all the IDs associated with values ABC to XYZ". As usual, I would 
> index these values in a materialized view.
> Concretely, I create an index CF, add the values into a CompositeType column, 
> and SliceRange over them for the indexing to work. I don't really care 
> whether a value is alphabetic or numeric, as long as it is ordered by the 
> following collation semantics:
> 1) If the string is numeric, it should compare as a number.
> 2) If it is alphabetic, it should compare as a normal string.
> 3) If it is alphanumeric, a contiguous sequence of digits in the string 
> should be compared numerically, e.g. "c10" > "c2".
> 4) UTF8-typed strings are assumed everywhere.
> How this helps:
> 1) You don't end up creating multiple CFs for different value types.
> 2) You don't have to write boilerplate for complicated type detection in the 
> application.
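The requested collation (digit runs compare numerically, everything else as plain text) is essentially "natural sort order". A minimal sketch of such a comparator, offered only as an illustration of the semantics above, not as Cassandra code:

```java
import java.util.*;

public class NaturalOrder {
    // Compare two strings so that contiguous digit runs are ordered
    // numerically ("c2" < "c10") and all other characters ordinarily.
    static int compareNatural(String a, String b) {
        int i = 0, j = 0;
        while (i < a.length() && j < b.length()) {
            char ca = a.charAt(i), cb = b.charAt(j);
            if (Character.isDigit(ca) && Character.isDigit(cb)) {
                int si = i, sj = j;
                while (i < a.length() && Character.isDigit(a.charAt(i))) i++;
                while (j < b.length() && Character.isDigit(b.charAt(j))) j++;
                // Strip leading zeros, then a longer digit run is the
                // larger number; equal lengths compare lexicographically.
                String na = a.substring(si, i).replaceFirst("^0+(?=.)", "");
                String nb = b.substring(sj, j).replaceFirst("^0+(?=.)", "");
                int cmp = na.length() != nb.length()
                        ? Integer.compare(na.length(), nb.length())
                        : na.compareTo(nb);
                if (cmp != 0) return cmp;
            } else {
                if (ca != cb) return Character.compare(ca, cb);
                i++; j++;
            }
        }
        return Integer.compare(a.length() - i, b.length() - j);
    }

    public static void main(String[] args) {
        List<String> xs = new ArrayList<>(Arrays.asList("c10", "abc", "c2", "42"));
        xs.sort(NaturalOrder::compareNatural);
        System.out.println(xs);  // numerics first, then "c2" before "c10"
    }
}
```

In the scenario above, such a comparator would play the role of an AbstractType's compare method over the raw UTF8 bytes.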






[jira] [Updated] (CASSANDRA-9232) "timestamp" is considered as a reserved keyword in cqlsh completion

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9232:

Component/s: (was: Materialized Views)

> "timestamp" is considered as a reserved keyword in cqlsh completion
> ---
>
> Key: CASSANDRA-9232
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9232
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Michaël Figuière
>Assignee: Stefania
>Priority: Trivial
>  Labels: cqlsh
> Fix For: 2.1.10, 2.2.1, 3.0 beta 2
>
>
> cqlsh seems to treat "timestamp" as a reserved keyword when used as an 
> identifier:
> {code}
> cqlsh:ks1> create table t1 (int int primary key, ascii ascii, bigint bigint, 
> blob blob, boolean boolean, date date, decimal decimal, double double, float 
> float, inet inet, text text, time time, timestamp timestamp, timeuuid 
> timeuuid, uuid uuid, varchar varchar, varint varint);
> {code}
> Leads to the following completion when building an {{INSERT}} statement:
> {code}
> cqlsh:ks1> insert into t1 (int, 
> "timestamp" ascii   bigint  blobboolean date
> decimal double  float   inettexttime
> timeuuiduuidvarchar varint
> {code}
> "timestamp" is a keyword but not a reserved one and should therefore not be 
> proposed as a quoted string. It looks like this error happens only for 
> timestamp. Not a big deal of course, but it might be worth reviewing the 
> keywords treated as reserved in cqlsh, especially with the many changes 
> introduced in 3.0.






[jira] [Updated] (CASSANDRA-10899) CQL-3.0.html spec is not published yet

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10899:
-
Component/s: (was: Materialized Views)

> CQL-3.0.html spec is not published yet
> --
>
> Key: CASSANDRA-10899
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10899
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Wei Deng
>Assignee: Tyler Hobbs
>Priority: Minor
> Attachments: 10889-3.0.txt
>
>
> We have https://cassandra.apache.org/doc/cql3/CQL-2.2.html but CQL-3.0.html 
> doesn't exist yet and needs to be published, since Cassandra 3.0 is now 
> officially GA.






[jira] [Updated] (CASSANDRA-10788) Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10788:
-
Component/s: (was: Materialized Views)

> Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException
> ---
>
> Key: CASSANDRA-10788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10788
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Tomas Ramanauskas
>Assignee: Stefania
> Fix For: 3.0.1, 3.1
>
>
> I tried to upgrade Cassandra from 2.2.1 to 3.0.0, however, I get this error 
> on startup after Cassandra 3.0 software was installed:
> {code}
> ERROR [main] 2015-11-30 15:44:50,164 CassandraDaemon.java:702 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
>   at org.apache.cassandra.io.util.FileUtils.delete(FileUtils.java:374) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.SystemKeyspace.migrateDataDirs(SystemKeyspace.java:1341)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:180) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:561)
>  [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> {code}






[jira] [Updated] (CASSANDRA-10921) Bump CQL version on 3.0

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10921:
-
Component/s: (was: Materialized Views)

> Bump CQL version on 3.0
> ---
>
> Key: CASSANDRA-10921
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10921
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.3, 3.2
>
>
> It appears we haven't bumped the version properly on 3.0, and we should, at 
> least for materialized views.






[jira] [Updated] (CASSANDRA-10929) cql_tests.py:AbortedQueriesTester fails

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10929:
-
Component/s: (was: Materialized Views)

> cql_tests.py:AbortedQueriesTester fails
> ---
>
> Key: CASSANDRA-10929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10929
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Philip Thompson
> Attachments: node1_debug.log, node1.log, node2_debug.log, node2.log
>
>
> All four tests in the {{cql_tests.AbortedQueriesTester}} dtest suite are 
> failing on HEAD of cassandra-3.0, here is an example link from cassci:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/455/testReport/cql_tests/AbortedQueriesTester/remote_query_test/
> The tests set {{'read_request_timeout_in_ms': 1000}} and 
> {{"-Dcassandra.test.read_iteration_delay_ms=1500"}}, then issue read queries 
> and expect them to time out. However, they succeed. I can reproduce this 
> locally.
> Looking at remote_query_test, from the logs, it appears that the query is 
> being sent from the driver to node1, which forwards it to node2 
> appropriately. I've tried also setting {{range_request_timeout_in_ms}} lower, 
> but that has had no effect. Trace logs from remote_query_test are attached.
> The same issue is happening on local_query_test, remote_query_test, 
> materialized_view_test, and index_query_test.






[jira] [Updated] (CASSANDRA-10964) Startup errors in Docker containers depending on memtable allocation type

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10964:
-
Component/s: (was: Materialized Views)

> Startup errors in Docker containers depending on memtable allocation type
> -
>
> Key: CASSANDRA-10964
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10964
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Docker, Debian Testing, 3.0.1
>Reporter: Jacek Furmankiewicz
>
> We are creating Docker containers for various versions of Cassandra. All are 
> based on Debian, Oracle JDK 1.8 and the Cassandra versions are installed 
> directly from the DataStax Debian repos via apt-get.
> We noticed that with 3.0.1 (and only that version; 2.1.11 and 2.2.4 always 
> work fine) the Cassandra process randomly fails to start up (about 50% of the 
> time) with the following error:
> {noformat}
> Caused by: java.lang.RuntimeException: 
> system_distributed:parent_repair_history not found in the schema definitions 
> keyspace.
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:940)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:931)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:894)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesOnly(SchemaKeyspace.java:886)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1276)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1255)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.service.MigrationManager$1.runMayThrow(MigrationManager.java:531)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {noformat}
> We started playing with different configuration parameters and, by trial and 
> error, found that it seems to be related to this configuration parameter:
> {noformat}
> memtable_allocation_type: offheap_buffers
> {noformat}
> If we set it to offheap_buffers, this error occurs about 50% of the time 
> (when starting on a new clean filesystem).
> If we set it to heap_buffers, it always works, 100% of the time, and we 
> never see the issue. 
> Attaching full stack output to help debug:
> {noformat}
> INFO  16:11:44 Configuration location: 
> file:/etc/cassandra/cassandra.yaml
> INFO  16:11:44 Node 
> configuration:[authenticator=PasswordAuthenticator; 
> authorizer=CassandraAuthorizer; auto_snapshot=true; 
> batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
> batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; 
> client_encryption_options=; cluster_name=TEST_CLUSTER; 
> column_index_size_in_kb=64; commit_failure_policy=stop; 
> commitlog_directory=/var/lib/cassandra/commitlog2; 
> commitlog_segment_size_in_mb=32; commitlog_sync=periodic; 
> commitlog_sync_period_in_ms=1; 
> compaction_large_partition_warning_threshold_mb=100; 
> compaction_throughput_mb_per_sec=16; concurrent_counter_writes=12; 
> concurrent_materialized_view_writes=7; concurrent_reads=64; 
> concurrent_writes=10; counter_cache_save_period=17200; 
> counter_cache_size_in_mb=1027; counter_write_request_timeout_in_ms=5000; 
> cross_node_timeout=false; data_file_directories=[/var/lib/cassandra/data2]; 
> disk_failure_policy=stop; disk_optimization_strategy=spinning; 
> dynamic_snitch_badness_threshold=0.1; 
> dynamic_snitch_reset_interval_in_ms=60; 
> dynamic_snitch_update_interval_in_ms=100; 
> enable_scripted_user_defined_functions=false; 
> enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; 
> gc_warn_threshold_in_ms=1000; hinted_handoff_enabled=true; 
> hinted_handoff_throttle_in_kb=1024; 
> hints_directory=/var/lib/cassandra/hints2; hints_flush_period_in_ms=1; 
> 

[jira] [Updated] (CASSANDRA-11413) dtest failure in cql_tests.AbortedQueriesTester.remote_query_test

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11413:
-
Component/s: (was: Materialized Views)

> dtest failure in cql_tests.AbortedQueriesTester.remote_query_test
> -
>
> Key: CASSANDRA-11413
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11413
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/cql_tests/AbortedQueriesTester/remote_query_test
> Failed on CassCI build trunk_dtest #1076
> Also breaking:
> cql_tests.AbortedQueriesTester.materialized_view_test
> topology_test.TestTopology.do_not_join_ring_test
> Broken by https://github.com/pcmanus/ccm/pull/479






[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11465:
-
Component/s: (was: Materialized Views)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.8, 3.0.9, 3.8
>
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> It is not failing consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087






[jira] [Updated] (CASSANDRA-11594) Too many open files on directories

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11594:
-
Component/s: (was: Materialized Views)

> Too many open files on directories
> --
>
> Key: CASSANDRA-11594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11594
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: n0rad
>Assignee: Stefania
>Priority: Critical
> Fix For: 3.0.9, 3.10
>
> Attachments: apache-cassandra-3.0.8-SNAPSHOT.jar, Grafana   Cassandra 
>   Cluster.png, openfiles.zip, screenshot.png
>
>
> I have a 6-node cluster in prod across 3 racks.
> Each node has:
> - 4 GB of commitlogs in 343 files
> - 275 GB of data in 504 files
> On Saturday, 1 node in each rack crashed with "too many open files" (it 
> seems to be the same node in each rack).
> {code}
> lsof -n -p $PID give me 66899 out of 65826 max
> {code}
> It contains 64527 open directories (2371 unique).
> Part of the list:
> {code}
> java19076 root 2140r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2141r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2142r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2143r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2144r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2145r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2146r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2147r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2148r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2149r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2150r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2151r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2152r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2153r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2154r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> java19076 root 2155r  DIR   8,17  143360 4386718705 
> /opt/stage2/pod-cassandra-aci-cassandra/rootfs/data/keyspaces/email_logs_query/emails-2d4abd00e9ea11e591199d740e07bd95
> {code}
> The 3 other nodes crashed 4 hours later






[jira] [Updated] (CASSANDRA-12475) dtest failure in consistency_test.TestConsistency.short_read_test

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12475:
-
Component/s: (was: Materialized Views)

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-12475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12475
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/42/testReport/junit/consistency_test/TestConsistency/short_read_test/
> Error:
> {code}
> Error from server: code=2200 [Invalid query] message="No keyspace has been 
> specified. USE a keyspace, or explicitly specify keyspace.tablename"
> {code}






[jira] [Updated] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12700:
-
Component/s: (was: Materialized Views)

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 2.2.9, 3.0.10, 3.10
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. We are trying to insert 2 million rows or more into 
> the database, but sometimes we get a "NullPointerException". 
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and 
> Python 2.7.12 on the client.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  

[jira] [Updated] (CASSANDRA-12735) org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12735:
-
Component/s: (was: Materialized Views)

> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out
> -
>
> Key: CASSANDRA-12735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12735
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Core
> Environment: Python 2.7.11, Datastax Cassandra 3.7.0  
>Reporter: Rajesh Radhakrishnan
> Fix For: 3.7
>
>
> We have a cluster of two nodes running Cassandra 3.7.0, with a client 
> running Python 2.7.11 injecting a lot of data from maybe 100 or so jobs. 
> --
> Cache setting can be seen from system.log:
> INFO  [main] 2016-09-30 15:12:50,002 AuthCache.java:172 - (Re)initializing 
> CredentialsCache (validity period/update interval/max entries) 
> (2000/2000/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:09,561 AuthCache.java:172 - 
> (Re)initializing PermissionsCache (validity period/update interval/max 
> entries) (1/1/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:24,319 AuthCache.java:172 - 
> (Re)initializing RolesCache (validity period/update interval/max entries) 
> (5000/5000/1000)
> ===
> But I am getting the following exception :
> ERROR [SharedPool-Worker-90] 2016-09-30 15:17:20,883 ErrorMessage.java:338 - 
> Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
> received only 0 responses.
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3937) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) 
> ~[guava-18.0.jar:na]
>   at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>  ~[guava-18.0.jar:na]
>   at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:375) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:308)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:285)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:272) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:256)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:211)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.checkAccess(BatchStatement.java:137)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:502)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:495)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> 

[jira] [Updated] (CASSANDRA-12791) MessageIn logic to determine if the message is cross-node is wrong

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12791:
-
Component/s: (was: Materialized Views)

> MessageIn logic to determine if the message is cross-node is wrong
> --
>
> Key: CASSANDRA-12791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12791
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability, Streaming and Messaging
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.10
>
>
> {{MessageIn}} has the following code to read the 'creation time' of the 
> message on the receiving side:
> {noformat}
> public static ConstructionTime readTimestamp(InetAddress from, DataInputPlus 
> input, long timestamp) throws IOException
> {
> // make sure to readInt, even if cross_node_to is not enabled
> int partial = input.readInt();
> long crossNodeTimestamp = (timestamp & 0xFFFFFFFF00000000L) | (((partial 
> & 0xFFFFFFFFL) << 2) >> 2);
> if (timestamp > crossNodeTimestamp)
> {
> MessagingService.instance().metrics.addTimeTaken(from, timestamp - 
> crossNodeTimestamp);
> }
> if(DatabaseDescriptor.hasCrossNodeTimeout())
> {
> return new ConstructionTime(crossNodeTimestamp, timestamp != 
> crossNodeTimestamp);
> }
> else
> {
> return new ConstructionTime();
> }
> }
> {noformat}
> where {{timestamp}} is really the local time on the receiving node when 
> calling that method.
> The incorrect part, I believe, is the {{timestamp != crossNodeTimestamp}} 
> used to set the {{isCrossNode}} field of {{ConstructionTime}}. A first 
> problem is that this will basically always be {{true}}: for it to be 
> {{false}}, we'd need the low-bytes of the timestamp taken on the sending node 
> to coincide exactly with the ones taken on the receiving side, which is 
> _very_ unlikely. It is also a relatively meaningless test: having that test 
> be {{false}} basically means the lack of clock sync between the 2 nodes is 
> exactly the time the 2 calls to {{System.currentTimeMillis()}} (on sender and 
> receiver), which is definitively not what we care about.
> What the result of this test is used for is to determine if the message was 
> crossNode or local. It's used to increment different metrics (we separate 
> metric local versus crossNode dropped messages) in {{MessagingService}} for 
> instance. And that's where this is kind of a bug: not only the {{timestamp != 
> crossNodeTimestamp}}, but if {{DatabaseDescriptor.hasCrossNodeTimeout()}}, we 
> *always* have this {{isCrossNode}} false, which means we'll never increment 
> the "cross-node dropped messages" metric, which is imo unexpected.
> That is, it is true that if {{DatabaseDescriptor.hasCrossNodeTimeout() == 
> false}}, then we end using the receiver side timestamp to timeout messages, 
> and so you end up only dropping messages that timeout locally. And _in that 
> sense_, always incrementing the "locally" dropped messages metric is not 
> completely illogical. But I doubt most users are aware of those pretty 
> specific nuances when looking at the related metrics, and I'm relatively sure 
> users expect a metrics named {{droppedCrossNodeTimeout}} to actually count 
> cross-node messages by default (keep in mind that 
> {{DatabaseDescriptor.hasCrossNodeTimeout()}} is actually false by default).
> Anyway, to sum it up I suggest that the following change should be done:
> # the {{timestamp != crossNodeTimestamp}} test is definitively not what we 
> want. We should at a minimum just replace it with {{true}} as that's basically 
> what it ends up being except for very rare and arguably random cases.
> # given how the {{ConstructionTime.isCrossNode}} is used, I suggest that we 
> really want it to mean whether the message has shipped cross-node, not just be 
> a synonym for {{DatabaseDescriptor.hasCrossNodeTimeout()}}. It should be 
> whether the message shipped cross-node, i.e. whether {{from == 
> BroadcastAddress()}} or not.
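The low-32-bit timestamp reconstruction discussed above can be sketched in isolation. This is a minimal standalone illustration, not the actual Cassandra API: the class and variable names are mine, only the bit manipulation mirrors the quoted {{readTimestamp}} snippet.

```java
public class CrossNodeTimestamp {
    /**
     * Rebuild the sender's creation time: the receiver supplies the high
     * 32 bits from its own clock, the wire supplies the low 32 bits.
     */
    static long reconstruct(long receiverNowMillis, int lowBitsFromWire) {
        return (receiverNowMillis & 0xFFFFFFFF00000000L)
             | (((lowBitsFromWire & 0xFFFFFFFFL) << 2) >> 2);
    }

    public static void main(String[] args) {
        long senderTime = 1499400000123L;    // sender clock at message creation
        int onWire = (int) senderTime;       // only the low 32 bits are serialized
        long receiverTime = senderTime + 7;  // receiver clock 7 ms later

        long rebuilt = reconstruct(receiverTime, onWire);
        System.out.println(rebuilt == senderTime); // high 32 bits still agree

        // The ticket's observation: this inequality is almost always true,
        // so it says nothing about whether the message was cross-node.
        System.out.println(receiverTime != rebuilt);
    }
}
```

As long as the two clocks land in the same ~49.7-day window covered by the high 32 bits, the sender's timestamp is recovered exactly, which is why comparing it for equality against the receiver's own clock is not a useful cross-node test.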






[jira] [Updated] (CASSANDRA-13174) Indexing is allowed on Duration type when it should not be

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13174:
-
Component/s: (was: Materialized Views)

> Indexing is allowed on Duration type when it should not be
> --
>
> Key: CASSANDRA-13174
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13174
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.10
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
> Fix For: 3.11.x, 4.x
>
>
> Looks like secondary indexing is allowed on duration type columns. Since 
> comparisons are not possible for the duration type, indexing on it should 
> also be invalid.
> 1) 
> {noformat}
> CREATE TABLE duration_table (k int PRIMARY KEY, d duration);
> INSERT INTO duration_table (k, d) VALUES (0, 1s);
> SELECT * from duration_table WHERE d=1s ALLOW FILTERING;
> {noformat}
> The above throws an error: 
> {noformat}
> WARN  [ReadStage-2] 2017-01-31 17:09:57,821 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,10,main]: {}
> java.lang.RuntimeException: java.lang.UnsupportedOperationException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  ~[main/:na]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.UnsupportedOperationException: null
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:174)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:160) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:204)
>  ~[main/:na]
>   at org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:201) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:719)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:324)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) 
> ~[main/:na]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:44) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:174) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:140)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:307)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:310)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:333) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1884)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
>  ~[main/:na]
>   ... 5 common frames omitted
> {noformat}
> 2)
> Similarly, if an index is created on the duration column:
> {noformat}
> CREATE INDEX d_index ON simplex.duration_table (d);
> SELECT * from duration_table WHERE d=1s;
> {noformat}
> results in:
> {noformat}
> WARN  [ReadStage-2] 2017-01-31 17:12:00,623 
> AbstractLocalAwareExecutorService.java:167 - 

[jira] [Updated] (CASSANDRA-13248) testall failure in org.apache.cassandra.db.compaction.PendingRepairManagerTest.userDefinedTaskTest

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13248:
-
Component/s: (was: Materialized Views)

> testall failure in 
> org.apache.cassandra.db.compaction.PendingRepairManagerTest.userDefinedTaskTest
> --
>
> Key: CASSANDRA-13248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13248
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: test-failure, testall
> Attachments: 
> TEST-org.apache.cassandra.db.compaction.PendingRepairManagerTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1416/testReport/org.apache.cassandra.db.compaction/PendingRepairManagerTest/userDefinedTaskTest
> {code}
> Error Message
> expected:<1> but was:<0>
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<1> but was:<0>
>   at 
> org.apache.cassandra.db.compaction.PendingRepairManagerTest.userDefinedTaskTest(PendingRepairManagerTest.java:194)
> {code}
> {code}
> Standard Output
> ERROR [main] 2017-02-21 17:00:01,792 ?:? - SLF4J: stderr
> INFO  [main] 2017-02-21 17:00:02,001 ?:? - Configuration location: 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2017-02-21 17:00:02,002 ?:? - Loading settings from 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2017-02-21 17:00:02,530 ?:? - Node 
> configuration:[allocate_tokens_for_keyspace=null; authenticator=null; 
> authorizer=null; auto_bootstrap=true; auto_snapshot=true; 
> back_pressure_enabled=false; back_pressure_strategy=null; 
> batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
> batchlog_replay_throttle_in_kb=1024; broadcast_address=null; 
> broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; 
> cas_contention_timeout_in_ms=1000; cdc_enabled=false; 
> cdc_free_space_check_interval_ms=250; 
> cdc_raw_directory=build/test/cassandra/cdc_raw:165; cdc_total_space_in_mb=0; 
> client_encryption_options=; cluster_name=Test Cluster; 
> column_index_cache_size_in_kb=2; column_index_size_in_kb=4; 
> commit_failure_policy=stop; commitlog_compression=null; 
> commitlog_directory=build/test/cassandra/commitlog:165; 
> commitlog_max_compression_buffers_in_pool=3; 
> commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=5; 
> commitlog_sync=batch; commitlog_sync_batch_window_in_ms=1.0; 
> commitlog_sync_period_in_ms=0; commitlog_total_space_in_mb=null; 
> compaction_large_partition_warning_threshold_mb=100; 
> compaction_throughput_mb_per_sec=0; concurrent_compactors=4; 
> concurrent_counter_writes=32; concurrent_materialized_view_writes=32; 
> concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; 
> counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; 
> counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
> credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; 
> credentials_validity_in_ms=2000; cross_node_timeout=false; 
> data_file_directories=[Ljava.lang.String;@1757cd72; disk_access_mode=mmap; 
> disk_failure_policy=ignore; disk_optimization_estimate_percentile=0.95; 
> disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; 
> dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; 
> dynamic_snitch_reset_interval_in_ms=60; 
> dynamic_snitch_update_interval_in_ms=100; 
> enable_scripted_user_defined_functions=true; 
> enable_user_defined_functions=true; 
> enable_user_defined_functions_threads=true; encryption_options=null; 
> endpoint_snitch=org.apache.cassandra.locator.SimpleSnitch; 
> file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; 
> gc_warn_threshold_in_ms=0; hinted_handoff_disabled_datacenters=[]; 
> hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; 
> hints_compression=null; hints_directory=build/test/cassandra/hints:165; 
> hints_flush_period_in_ms=1; incremental_backups=true; 
> index_interval=null; index_summary_capacity_in_mb=null; 
> index_summary_resize_interval_in_minutes=60; initial_token=null; 
> inter_dc_stream_throughput_outbound_megabits_per_sec=200; 
> inter_dc_tcp_nodelay=true; internode_authenticator=null; 
> internode_compression=none; internode_recv_buff_size_in_bytes=0; 
> internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; 
> key_cache_save_period=14400; key_cache_size_in_mb=null; 
> listen_address=127.0.0.1; listen_interface=null; 
> listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; 
> max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
> max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; 
> 

[jira] [Updated] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13339:
-
Component/s: (was: Materialized Views)

> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) running on a 2 node cluster.  It would have been processing 
> around 50 queries/second at the time (mixture of 
> inserts/updates/selects/deletes) : there's a collection of tables (some with 
> counters some without) and a single materialized view.
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> and then again shortly afterwards
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> 

[jira] [Updated] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default on async appender

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13317:
-
Component/s: (was: Materialized Views)

> Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due 
> to includeCallerData being false by default on async appender
> 
>
> Key: CASSANDRA-13317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13317
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Fix For: 3.11.0, 4.0
>
> Attachments: 13317_v1.diff
>
>
> We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - 
> %msg%n". 
> %F:%L is intended to print the Filename:Line Number. For performance reasons 
> logback (like log4j2) disables tracking line numbers as it requires the 
> entire stack to be materialized every time.
> This causes logs to look like:
> WARN  [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not 
> supported by java driver
> INFO  [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping 
> replay
> INFO  [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement 
> caches with 14 MB
> INFO  [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo
> When instead you'd expect something like:
> INFO  [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - 
> Initializing system.available_ranges
> INFO  [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - 
> Initializing system.transferred_ranges
> INFO  [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - 
> Initializing system.views_builds_in_progress
> The fix is to add "<includeCallerData>true</includeCallerData>" to the 
> appender config to enable the line number and stack tracing.
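As a sketch, the relevant piece of a logback.xml would then look something like this. This is a hedged illustration only: the appender names, file path, and property placeholder are assumptions for the example, not the exact config Cassandra ships.

```xml
<!-- Synchronous file appender using the %F:%L pattern from the ticket. -->
<appender name="SYSTEMLOG" class="ch.qos.logback.core.FileAppender">
  <file>${cassandra.logdir}/system.log</file>
  <encoder>
    <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
  </encoder>
</appender>

<!-- Async wrapper: without includeCallerData, caller data is dropped
     before the event reaches the encoder, so %F:%L renders as ?:? -->
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <includeCallerData>true</includeCallerData>
  <appender-ref ref="SYSTEMLOG"/>
</appender>
```

Note the trade-off the ticket alludes to: enabling caller data forces logback to materialize a stack trace per logged event, which is exactly the cost the default setting avoids.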






[jira] [Updated] (CASSANDRA-10485) Missing host ID on hinted handoff write

2017-07-06 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-10485:
---
Component/s: (was: Materialized Views)

> Missing host ID on hinted handoff write
> ---
>
> Key: CASSANDRA-10485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10485
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 2.1.12, 2.2.4, 3.0.1, 3.1
>
>
> When I restart one of the nodes, I receive the error "Missing host ID":
> {noformat}
> WARN  [SharedPool-Worker-1] 2015-10-08 13:15:33,882 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.AssertionError: Missing host ID for 63.251.156.141
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:978)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:950)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2235)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}
> If I run nodetool status, the problematic node has this ID:
> {noformat}
> UN  10.10.10.12  1.3 TB 1   ?   
> 4d5c8fd2-a909-4f09-a23c-4cd6040f338a  rack3
> {noformat}






[jira] [Updated] (CASSANDRA-10778) RowIndexEntry$Serializer invoked to serialize old format RIE triggering assertion

2017-07-06 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-10778:
---
Component/s: (was: Materialized Views)

> RowIndexEntry$Serializer invoked to serialize old format RIE triggering 
> assertion
> -
>
> Key: CASSANDRA-10778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10778
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Windows 7 64-bit, Cassandra 3.0.0, Java 1.8u60
>Reporter: Will Zhang
>Assignee: Ariel Weisberg
>  Labels: error
> Fix For: 3.0.1, 3.1
>
>
> Hi,
> I have been running some tests for upgrading from v2.2.2 to v3.0.0. 
> I encountered the following `ERROR` in the `system.log` while 
> *creating/dropping materialized views*. I did some searches online but 
> couldn't find anything useful, so I am filing this. The log seems to suggest 
> a bug.
> Any thoughts on this would be appreciated.
> Main error line in log:
> {code}
> ERROR [CompactionExecutor:4] 2015-11-26 15:40:56,033 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:4,1,main]
> java.lang.AssertionError: We read old index files but we should never 
> write them
> {code}
> Longer log:
> {code}
> INFO  [SharedPool-Worker-2] 2015-11-26 15:25:37,152 
> MigrationManager.java:336 - Create new view: 
> org.apache.cassandra.config.ViewDefinition@1b7fc5e6[ksName=demo,viewName=broker_quotes_by_date,baseTableId=bf928280-3c23-11e5-a4ba-07dc7eba8ee2,baseTableName=broker_quotes,includeAllColumns=true,whereClause=date
>  IS NOT NULL AND datetime IS NOT NULL AND isin IS NOT NULL AND side IS NOT 
> NULL AND broker IS NOT 
> NULL,metadata=org.apache.cassandra.config.CFMetaData@49123522[cfId=f19cb8f0-9451-11e5-af90-6916ca23ea25,ksName=demo,cfName=broker_quotes_by_date,flags=[COMPOUND],params=TableParams{comment=,
>  read_repair_chance=0.0, dclocal_read_repair_chance=0.1, 
> bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, 
> default_time_to_live=0, memtable_flush_period_in_ms=0, 
> min_index_interval=128, max_index_interval=2048, 
> speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' 
> : 'NONE'}, 
> compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,
>  options={min_threshold=4, max_threshold=32}}, 
> compression=org.apache.cassandra.schema.CompressionParams@f3ef4959, 
> extensions={}},comparator=comparator(org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),
>  org.apache.cassandra.db.marshal.UTF8Type, 
> org.apache.cassandra.db.marshal.UTF8Type, 
> org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | 
> [bmark_spread g_spread is_axed oas_spread price size ytw 
> z_spread]],partitionKeyColumns=[ColumnDefinition{name=date, 
> type=org.apache.cassandra.db.marshal.TimestampType, kind=PARTITION_KEY, 
> position=0}],clusteringColumns=[ColumnDefinition{name=datetime, 
> type=org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),
>  kind=CLUSTERING, position=0}, ColumnDefinition{name=side, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=CLUSTERING, position=1}, 
> ColumnDefinition{name=isin, type=org.apache.cassandra.db.marshal.UTF8Type, 
> kind=CLUSTERING, position=2}, ColumnDefinition{name=broker, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=CLUSTERING, 
> position=3}],keyValidator=org.apache.cassandra.db.marshal.TimestampType,columnMetadata=[ColumnDefinition{name=z_spread,
>  type=org.apache.cassandra.db.marshal.FloatType, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=datetime, 
> type=org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),
>  kind=CLUSTERING, position=0}, ColumnDefinition{name=date, 
> type=org.apache.cassandra.db.marshal.TimestampType, kind=PARTITION_KEY, 
> position=0}, ColumnDefinition{name=oas_spread, 
> type=org.apache.cassandra.db.marshal.FloatType, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=isin, type=org.apache.cassandra.db.marshal.UTF8Type, 
> kind=CLUSTERING, position=2}, ColumnDefinition{name=bmark_spread, 
> type=org.apache.cassandra.db.marshal.FloatType, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=side, type=org.apache.cassandra.db.marshal.UTF8Type, 
> kind=CLUSTERING, position=1}, ColumnDefinition{name=broker, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=CLUSTERING, position=3}, 
> ColumnDefinition{name=is_axed, 
> type=org.apache.cassandra.db.marshal.BooleanType, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=ytw, type=org.apache.cassandra.db.marshal.FloatType, 
> kind=REGULAR, position=-1}, 

[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-07-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13369:
-
Component/s: (was: Materialized Views)

> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 3.11.0, 4.0
>
> Attachments: 3.X.diff, test_stdout.txt, trunk.diff
>
>
> If multiple values are specified for a key through CQL, the grammar parses the 
> map and the last value for the key wins. This behavior is bad.
> e.g. 
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even 
> result in loss of data. This behavior should not be silent, or should not be 
> allowed at all.  






[jira] [Commented] (CASSANDRA-13530) GroupCommitLogService

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077060#comment-16077060
 ] 

Ariel Weisberg commented on CASSANDRA-13530:


Actually I would like to see just a regular mutation workload. Nothing at 
SERIAL. Just so there are no CAS related wrinkles we might have to address 
separately.

> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_result.xlsx, GuavaRequestThread.java, 
> MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve 
> throughput when lots of requests are received.
> It improved throughput by a maximum of 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select either of two CommitLog services: Periodic and Batch.
> With Periodic, we might lose commit log entries which haven't been written to the disk.
> With Batch, we can write the commit log to the disk every time. Each commit 
> log write is very small (< 4KB). Under high concurrency, these writes are 
> gathered and persisted to the disk at once. But with insufficient 
> concurrency, many small writes are issued and performance decreases due 
> to the latency of the disk. Even with an SSD, processing many IO 
> commands decreases performance.
> GroupCommitLogService writes several commit log entries to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> With GroupCommitLogService, latency becomes worse if there is no 
> concurrency.
> I measured the performance with my microbenchmark (MicroRequestThread.java) by 
> increasing the number of threads. The cluster has 3 nodes (replication factor: 
> 3). Each node is an AWS EC2 m4.large instance + a 200 IOPS io1 volume.
> The result is below. GroupCommitLogService with a 10ms window improved 
> update with Paxos by 94% and select with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> |32|289|295|
> |64|544|548|
> |128|1046|1058|
> |256|2020|2061|
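The grouping idea described above — each writer blocks until one shared sync covers every entry queued in the current window — can be sketched roughly as follows. Names and mechanics are illustrative, not the actual patch against Cassandra's commit log service:

```python
import threading, time

class GroupCommitLog:
    """Sketch of group commit: writers queue entries and block, and a
    single syncer thread flushes everything queued in each window with
    one (simulated) fsync. Illustrative only."""
    def __init__(self, window_ms=10):
        self.window = window_ms / 1000.0
        self.pending = 0          # entries awaiting a sync
        self.sync_count = 0       # how many group syncs happened
        self.lock = threading.Lock()
        self.synced = threading.Condition(self.lock)
        threading.Thread(target=self._syncer, daemon=True).start()

    def _syncer(self):
        while True:
            time.sleep(self.window)
            with self.lock:
                if self.pending:
                    # one fsync covers every entry queued during the window
                    self.pending = 0
                    self.sync_count += 1
                    self.synced.notify_all()

    def write(self, entry):
        with self.lock:
            self.pending += 1
            self.synced.wait()    # return only once a group sync covered us
```

This also shows the trade-off from the description: a lone writer always pays up to a full window of latency, while concurrent writers amortize one sync across many entries.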






[jira] [Commented] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077032#comment-16077032
 ] 

Jeff Jirsa commented on CASSANDRA-13072:


Elasticsearch in the linked ticket just built their own JNA jar linked 
against an older version of glibc. [~mshuler] any desire to set that up for an 
{{org.apache.cassandra.jna}} lib so that we can decouple that dependency? 

Short of that (or until that's set up), falling back to {{4.2.2}} seems like an 
OK option (not a great option, but an OK option).






> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". The daemon crashes with the following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analysis:
> This issue is caused by the bundled jna-4.0.0.jar, which doesn't come with aarch64 
> native support. Replacing lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ fixes this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there a possibility to upgrade jna to 4.2.0 
> upstream? If there are any tests I should execute, please kindly 
> point me to them. Thanks a lot.






[jira] [Commented] (CASSANDRA-13594) Use an ExecutorService for repair commands instead of new Thread(..).start()

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076988#comment-16076988
 ] 

Ariel Weisberg commented on CASSANDRA-13594:


Maybe a legit failure? 
{{thread_count_repair_test (repair_tests.repair_test.TestRepair) ... Build 
timed out (after 20 minutes). Marking the build as aborted.}}

> Use an ExecutorService for repair commands instead of new Thread(..).start()
> 
>
> Key: CASSANDRA-13594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently when starting a new repair, we create a new Thread and start it 
> immediately
> It would be nice to be able to 1) limit the number of threads and 2) reject 
> starting new repair commands if we are already running too many.
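Both goals — a capped thread count and rejection when saturated — fall out of a bounded work queue in front of a fixed worker pool. A rough sketch under assumed names (this is not the patch, just the shape of the idea):

```python
import queue, threading

class RepairCommandExecutor:
    """Sketch: run repair commands on a fixed worker pool and reject new
    ones when the queue is full, instead of spawning an unbounded number
    of threads. Class name and limits are illustrative."""
    def __init__(self, max_threads, max_pending):
        self.tasks = queue.Queue(maxsize=max_pending)
        for _ in range(max_threads):
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            task = self.tasks.get()
            try:
                task()
            finally:
                self.tasks.task_done()

    def try_submit(self, task):
        try:
            self.tasks.put_nowait(task)  # queued for a worker thread
            return True
        except queue.Full:
            return False  # too many repairs already running or pending
```

A caller can then surface the False case to nodetool as "too many repairs in progress" rather than silently piling up threads.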






[jira] [Commented] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076980#comment-16076980
 ] 

Ariel Weisberg commented on CASSANDRA-13583:


The dtest failures look unrelated so +1.

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Updated] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-07-06 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13583:
---
Status: Ready to Commit  (was: Patch Available)

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Commented] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar choses last value. This should not be silent or should not be allowed.

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076977#comment-16076977
 ] 

Ariel Weisberg commented on CASSANDRA-13369:


[~jeromatron] Another ticket hit with the materialized view hammer?

> If there are multiple values for a key, CQL grammar choses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Materialized Views
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 3.11.0, 4.0
>
> Attachments: 3.X.diff, test_stdout.txt, trunk.diff
>
>
> If multiple values are specified for a key through CQL, the grammar parses the 
> map and the last value for the key wins. This behavior is bad.
> e.g. 
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even 
> result in loss of data. This behavior should not be silent, or should not be 
> allowed at all.  






[jira] [Commented] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076975#comment-16076975
 ] 

Ariel Weisberg commented on CASSANDRA-13317:


[~jeromatron] is this really materialized view related? Seems purely log config 
related to me.

> Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due 
> to includeCallerData being false by default no appender
> 
>
> Key: CASSANDRA-13317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13317
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Materialized Views
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
> Fix For: 3.11.0, 4.0
>
> Attachments: 13317_v1.diff
>
>
> We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - 
> %msg%n". 
> %F:%L is intended to print the Filename:Line Number. For performance reasons 
> logback (like log4j2) disables tracking line numbers as it requires the 
> entire stack to be materialized every time.
> This causes logs to look like:
> WARN  [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not 
> supported by java driver
> INFO  [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping 
> replay
> INFO  [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement 
> caches with 14 MB
> INFO  [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo
> When instead you'd expect something like:
> INFO  [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - 
> Initializing system.available_ranges
> INFO  [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - 
> Initializing system.transferred_ranges
> INFO  [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - 
> Initializing system.views_builds_in_progress
> The fix is to add {{<includeCallerData>true</includeCallerData>}} to the 
> appender config to enable the line number and stack tracing.
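For context, the intended change is roughly the following logback appender setting. The appender names here are assumptions for illustration; the exact ones depend on the logback.xml Cassandra ships:

```xml
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <!-- required for %F:%L in the pattern to resolve; logback leaves it
       off by default because capturing caller data materializes the
       stack for every log event -->
  <includeCallerData>true</includeCallerData>
  <appender-ref ref="SYSTEMLOG" />
</appender>
```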






[jira] [Updated] (CASSANDRA-13671) nodes compute their own gcBefore times for validation compactions

2017-07-06 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13671:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{9fdec0a82851f5c35cd21d02e8c4da8fc685edb2}}

> nodes compute their own gcBefore times for validation compactions
> -
>
> Key: CASSANDRA-13671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13671
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> {{doValidationCompaction}} computes {{gcBefore}} based on the time the method 
> is called. If different nodes start validation on different seconds, 
> tombstones might not be purged consistently, leading to overstreaming.
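The inconsistency can be seen with a toy calculation: a gcBefore derived from each replica's own clock flips a boundary tombstone between purgeable and not across nodes, while a shared timestamp cannot. The gc_grace value below is Cassandra's default of 864000 seconds; the function name is illustrative:

```python
GC_GRACE_SECONDS = 864000  # Cassandra's default gc_grace_seconds

def gc_before(now_in_sec, gc_grace=GC_GRACE_SECONDS):
    """Tombstones with a deletion time before this second are purgeable."""
    return now_in_sec - gc_grace

# Two replicas computing their own "now" one second apart disagree on a
# tombstone right at the boundary:
t = 1_499_300_000
deletion_time = t - GC_GRACE_SECONDS
assert deletion_time < gc_before(t + 1)        # node A: purgeable
assert not (deletion_time < gc_before(t))      # node B: not yet
```

With a single coordinator-supplied now-in-seconds, every replica computes the same gcBefore and purges the same tombstones, which is what the fix does.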






cassandra git commit: Use common nowInSec for validation compactions

2017-07-06 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk af3748909 -> 9fdec0a82


Use common nowInSec for validation compactions

Patch by Blake Eggleston; Reviewed by Marcus Eriksson for CASSANDRA-13671


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9fdec0a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9fdec0a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9fdec0a8

Branch: refs/heads/trunk
Commit: 9fdec0a82851f5c35cd21d02e8c4da8fc685edb2
Parents: af37489
Author: Blake Eggleston 
Authored: Wed Jul 5 11:18:21 2017 -0700
Committer: Blake Eggleston 
Committed: Thu Jul 6 10:41:31 2017 -0700

--
 CHANGES.txt   |  1 +
 .../db/compaction/CompactionManager.java  | 18 ++
 .../org/apache/cassandra/repair/RepairJob.java| 16 
 .../repair/RepairMessageVerbHandler.java  |  2 +-
 .../apache/cassandra/repair/ValidationTask.java   |  8 
 .../org/apache/cassandra/repair/Validator.java| 14 +++---
 .../repair/messages/ValidationRequest.java| 18 +-
 .../cassandra/service/SerializationsTest.java |  2 +-
 8 files changed, 33 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9fdec0a8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9584f63..4f2d2a1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Use common nowInSec for validation compactions (CASSANDRA-13671)
  * Improve handling of IR prepare failures (CASSANDRA-13672)
  * Send IR coordinator messages synchronously (CASSANDRA-13673)
  * Flush system.repair table before IR finalize promise (CASSANDRA-13660)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9fdec0a8/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d7e00da..0532515 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1312,9 +1312,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 Refs sstables = null;
 try
 {
-
-int gcBefore;
-int nowInSec = FBUtilities.nowInSeconds();
 UUID parentRepairSessionId = validator.desc.parentSessionId;
 String snapshotName;
 boolean isGlobalSnapshotValidation = 
cfs.snapshotExists(parentRepairSessionId.toString());
@@ -1330,13 +1327,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 // note that we populate the parent repair session when 
creating the snapshot, meaning the sstables in the snapshot are the ones we
 // are supposed to validate.
 sstables = cfs.getSnapshotSSTableReaders(snapshotName);
-
-
-// Computing gcbefore based on the current time wouldn't be 
very good because we know each replica will execute
-// this at a different time (that's the whole purpose of 
repair with snaphsot). So instead we take the creation
-// time of the snapshot, which should give us roughtly the 
same time on each replica (roughtly being in that case
-// 'as good as in the non-snapshot' case)
-gcBefore = 
cfs.gcBefore((int)(cfs.getSnapshotCreationTime(snapshotName) / 1000));
 }
 else
 {
@@ -1348,10 +1338,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 sstables = getSSTablesToValidate(cfs, validator);
 if (sstables == null)
 return; // this means the parent repair session was 
removed - the repair session failed on another node and we removed it
-if (validator.gcBefore > 0)
-gcBefore = validator.gcBefore;
-else
-gcBefore = getDefaultGcBefore(cfs, nowInSec);
 }
 
 // Create Merkle trees suitable to hold estimated partitions for 
the given ranges.
@@ -1360,8 +1346,8 @@ public class CompactionManager implements 
CompactionManagerMBean
 long start = System.nanoTime();
 long partitionCount = 0;
 try (AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategyManager().getScanners(sstables, validator.desc.ranges);
- ValidationCompactionController controller = new 

[jira] [Commented] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076962#comment-16076962
 ] 

Ariel Weisberg commented on CASSANDRA-13078:


It's fine; we would just override it for CircleCI so it's 1. For ASF Jenkins it 
really depends on how big those boxes are, but I am pretty sure we can pick a 
number > 1. A quad-core box can handle 4 just fine.

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Priority: Minor
> Attachments: unittest.png, unittest_time.png
>
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  we could speed up the tests, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} by 
> the [number of CPUs 
> dynamically|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Commented] (CASSANDRA-13530) GroupCommitLogService

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076956#comment-16076956
 ] 

Ariel Weisberg commented on CASSANDRA-13530:


You shouldn't set commitlog_sync_batch_window_in_ms, and doing so might be hurting 
performance. It's not meant to be a small number. The batch commit log always syncs as 
soon as the first entry arrives. That's why it does well at low concurrency.

Can you increase the test to do a few million operations, and then run a warmup 
(in the same client JVM) first? It really does take HotSpot a long time to warm 
up at both client and server. Basically, put the test into a method you can call, 
then call it twice and report the values from the second run.

I don't need to see a huge matrix. Just UPDATE with batch and group at whatever 
value performs best on your hardware. If that doesn't close the gap enough, then 
there is a case to be made that you are better off using the group commit log 
service to match the fsync frequency your device supports with your workload.

> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_result.xlsx, GuavaRequestThread.java, 
> MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve 
> throughput when lots of requests are received.
> It improved throughput by a maximum of 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select either of two CommitLog services: Periodic and Batch.
> With Periodic, we might lose commit log entries which haven't been written to the disk.
> With Batch, we can write the commit log to the disk every time. Each commit 
> log write is very small (< 4KB). Under high concurrency, these writes are 
> gathered and persisted to the disk at once. But with insufficient 
> concurrency, many small writes are issued and performance decreases due 
> to the latency of the disk. Even with an SSD, processing many IO 
> commands decreases performance.
> GroupCommitLogService writes several commit log entries to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> With GroupCommitLogService, latency becomes worse if there is no 
> concurrency.
> I measured the performance with my microbenchmark (MicroRequestThread.java) by 
> increasing the number of threads. The cluster has 3 nodes (replication factor: 
> 3). Each node is an AWS EC2 m4.large instance + a 200 IOPS io1 volume.
> The result is below. GroupCommitLogService with a 10ms window improved 
> update with Paxos by 94% and select with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> |32|289|295|
> |64|544|548|
> |128|1046|1058|
> |256|2020|2061|
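
For intuition, the semaphore-based group commit described above can be 
sketched roughly like this (an illustrative sketch under assumed names such as 
`GroupSyncer`; the actual patch differs):

```java
import java.util.concurrent.Semaphore;

// Minimal group-commit sketch: writers append their mutation, then block on
// a semaphore; a single syncer thread wakes every windowMs, performs one
// (simulated) fsync, and releases every writer queued during the window, so
// one disk flush covers many small commit log writes.
class GroupSyncer extends Thread {
    private final Semaphore synced = new Semaphore(0);
    private int waiting = 0;
    private final long windowMs;

    GroupSyncer(long windowMs) {
        this.windowMs = windowMs;
        setDaemon(true);
    }

    // Called by a writer after appending its mutation to the log buffer.
    void awaitSync() {
        synchronized (this) { waiting++; }
        synced.acquireUninterruptibly(); // one permit per completed group sync
    }

    @Override public void run() {
        while (true) {
            try { Thread.sleep(windowMs); } catch (InterruptedException e) { return; }
            int batch;
            synchronized (this) { batch = waiting; waiting = 0; }
            if (batch > 0) {
                // fsync() would go here: one flush on behalf of `batch` writers
                synced.release(batch);
            }
        }
    }
}
```

This is also where the low-concurrency latency cost comes from: a lone writer 
still waits out the window before its single write is synced.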






[jira] [Updated] (CASSANDRA-13672) incremental repair prepare phase can cause nodetool to hang in some failure scenarios

2017-07-06 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13672:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{af37489092ca90bca336538adad02fb5ba859945}}

> incremental repair prepare phase can cause nodetool to hang in some failure 
> scenarios
> -
>
> Key: CASSANDRA-13672
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13672
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> Also doesn't log anything helpful






cassandra git commit: Improve handling of IR prepare failures

2017-07-06 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3234c0704 -> af3748909


Improve handling of IR prepare failures

Patch by Blake Eggleston; Reviewed by Marcus Eriksson for CASSANDRA-13672


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af374890
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af374890
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af374890

Branch: refs/heads/trunk
Commit: af37489092ca90bca336538adad02fb5ba859945
Parents: 3234c07
Author: Blake Eggleston 
Authored: Wed Jul 5 13:28:04 2017 -0700
Committer: Blake Eggleston 
Committed: Thu Jul 6 10:35:53 2017 -0700

--
 CHANGES.txt|  1 +
 .../repair/consistent/CoordinatorSession.java  |  4 
 .../cassandra/repair/consistent/LocalSessions.java | 17 +++--
 .../repair/consistent/PendingAntiCompaction.java   |  4 +++-
 .../repair/consistent/LocalSessionTest.java|  2 +-
 5 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/af374890/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6cd8bc5..9584f63 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Improve handling of IR prepare failures (CASSANDRA-13672)
  * Send IR coordinator messages synchronously (CASSANDRA-13673)
  * Flush system.repair table before IR finalize promise (CASSANDRA-13660)
  * Fix column filter creation for wildcard queries (CASSANDRA-13650)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af374890/src/java/org/apache/cassandra/repair/consistent/CoordinatorSession.java
--
diff --git 
a/src/java/org/apache/cassandra/repair/consistent/CoordinatorSession.java 
b/src/java/org/apache/cassandra/repair/consistent/CoordinatorSession.java
index 830ed2c..d0ec7fd 100644
--- a/src/java/org/apache/cassandra/repair/consistent/CoordinatorSession.java
+++ b/src/java/org/apache/cassandra/repair/consistent/CoordinatorSession.java
@@ -240,6 +240,10 @@ public class CoordinatorSession extends ConsistentSession
 }
 }
 setAll(State.FAILED);
+
+String exceptionMsg = String.format("Incremental repair session %s has 
failed", sessionID);
+finalizeProposeFuture.setException(new RuntimeException(exceptionMsg));
+prepareFuture.setException(new RuntimeException(exceptionMsg));
 }
 
 private static String formatDuration(long then, long now)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af374890/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
--
diff --git a/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java 
b/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
index 61df2b0..a25f65c 100644
--- a/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
+++ b/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
@@ -568,8 +568,21 @@ public class LocalSessions
 
 public void onFailure(Throwable t)
 {
-logger.error(String.format("Prepare phase for incremental 
repair session %s failed", sessionID), t);
-failSession(sessionID);
+logger.error("Prepare phase for incremental repair session {} 
failed", sessionID, t);
+if (t instanceof 
PendingAntiCompaction.SSTableAcquisitionException)
+{
+logger.warn("Prepare phase for incremental repair session 
{} was unable to " +
+"acquire exclusive access to the neccesary 
sstables. " +
+"This is usually caused by running multiple 
incremental repairs on nodes that share token ranges",
+sessionID);
+
+}
+else
+{
+logger.error("Prepare phase for incremental repair session 
{} failed", sessionID, t);
+}
+sendMessage(coordinator, new 
PrepareConsistentResponse(sessionID, getBroadcastAddress(), false));
+failSession(sessionID, false);
 executor.shutdown();
 }
 });

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af374890/src/java/org/apache/cassandra/repair/consistent/PendingAntiCompaction.java
--
diff --git 
a/src/java/org/apache/cassandra/repair/consistent/PendingAntiCompaction.java 

[jira] [Updated] (CASSANDRA-13673) Incremental repair coordinator sometimes doesn't send commit messages

2017-07-06 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13673:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{3234c0704a4fef08dedc4ff78f4ded3b9226fe80}}

> Incremental repair coordinator sometimes doesn't send commit messages
> -
>
> Key: CASSANDRA-13673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13673
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>







cassandra git commit: Send IR coordinator messages synchronously

2017-07-06 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7df240e74 -> 3234c0704


Send IR coordinator messages synchronously

Patch by Blake Eggleston; Reviewed by Marcus Eriksson for CASSANDRA-13673


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3234c070
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3234c070
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3234c070

Branch: refs/heads/trunk
Commit: 3234c0704a4fef08dedc4ff78f4ded3b9226fe80
Parents: 7df240e
Author: Blake Eggleston 
Authored: Wed Jul 5 13:20:32 2017 -0700
Committer: Blake Eggleston 
Committed: Thu Jul 6 10:31:37 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/repair/RepairRunnable.java |  3 +-
 .../repair/consistent/ConsistentSession.java|  5 ++-
 .../repair/consistent/CoordinatorSession.java   | 35 
 .../consistent/CoordinatorSessionTest.java  | 22 +---
 .../consistent/CoordinatorSessionsTest.java |  3 +-
 6 files changed, 27 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3234c070/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 22045e8..6cd8bc5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Send IR coordinator messages synchronously (CASSANDRA-13673)
  * Flush system.repair table before IR finalize promise (CASSANDRA-13660)
  * Fix column filter creation for wildcard queries (CASSANDRA-13650)
  * Add 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (CASSANDRA-13614)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3234c070/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairRunnable.java 
b/src/java/org/apache/cassandra/repair/RepairRunnable.java
index 29347a4..3f761ee 100644
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@ -329,8 +329,7 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 CoordinatorSession coordinatorSession = 
ActiveRepairService.instance.consistent.coordinated.registerSession(parentSession,
 allParticipants);
 ListeningExecutorService executor = createExecutor();
 AtomicBoolean hasFailure = new AtomicBoolean(false);
-ListenableFuture repairResult = coordinatorSession.execute(executor,
-   () -> 
submitRepairSessions(parentSession, true, executor, commonRanges, cfnames),
+ListenableFuture repairResult = coordinatorSession.execute(() -> 
submitRepairSessions(parentSession, true, executor, commonRanges, cfnames),
hasFailure);
 Collection ranges = new HashSet<>();
 for (Collection range : 
Iterables.transform(commonRanges, cr -> cr.right))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3234c070/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
--
diff --git 
a/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java 
b/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
index af0a0dd..803a1f8 100644
--- a/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
+++ b/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
@@ -25,7 +25,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.UUID;
-import java.util.concurrent.Executor;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableSet;
@@ -96,8 +95,8 @@ import org.apache.cassandra.tools.nodetool.RepairAdmin;
  *  conflicts with in progress compactions. The sstables will be marked 
repaired as part of the normal compaction process.
  *  
  *
- *  On the coordinator side, see {@link 
CoordinatorSession#finalizePropose(Executor)}, {@link 
CoordinatorSession#handleFinalizePromise(InetAddress, boolean)},
- *  & {@link CoordinatorSession#finalizeCommit(Executor)}
+ *  On the coordinator side, see {@link CoordinatorSession#finalizePropose()}, 
{@link CoordinatorSession#handleFinalizePromise(InetAddress, boolean)},
+ *  & {@link CoordinatorSession#finalizeCommit()}
  *  
  *
  *  On the local session side, see {@link 
LocalSessions#handleFinalizeProposeMessage(InetAddress, FinalizePropose)}


[jira] [Updated] (CASSANDRA-13660) Correctly timed kill -9 can put incremental repair sessions in an illegal state

2017-07-06 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13660:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{7df240e74f0bda9a15eff3c9de02eb0cd8771b20}}

> Correctly timed kill -9 can put incremental repair sessions in an illegal 
> state
> ---
>
> Key: CASSANDRA-13660
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13660
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> If a node is killed after it has sent a finalize promise message to an 
> incremental repair coordinator, but before that section of commit log has 
> been synced to disk, it can startup with the incremental repair session in a 
> previous state, leading the following exception:
> {code}
> java.lang.RuntimeException: java.lang.IllegalArgumentException: Invalid state 
> transition PREPARED -> FINALIZED
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:201)
>  ~[cie-cassandra-3.0.13.5.jar:3.0.13.5]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[cie-cassandra-3.0.13.5.jar:3.0.13.5]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_112]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[?:1.8.0_112]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  ~[cie-cassandra-3.0.13.5.jar:3.0.13.5]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> Caused by: java.lang.IllegalArgumentException: Invalid state transition 
> PREPARED -> FINALIZED
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:145) 
> ~[guava-18.0.jar:?]
>   at 
> org.apache.cassandra.repair.consistent.LocalSessions.setStateAndSave(LocalSessions.java:452)
>  ~[cie-cassandra-3.0.13.5.jar:3.0.13.5]
>   at 
> org.apache.cassandra.repair.consistent.LocalSessions.handleStatusResponse(LocalSessions.java:679)
>  ~[cie-cassandra-3.0.13.5.jar:3.0.13.5]
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:188)
>  ~[cie-cassandra-3.0.13.5.jar:3.0.13.5]
>   ... 7 more
> {code}






cassandra git commit: Flush system.repair table before IR finalize promise

2017-07-06 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 9359e1e97 -> 7df240e74


Flush system.repair table before IR finalize promise

Patch by Blake Eggleston; Reviewed by Marcus Eriksson for CASSANDRA-13660


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7df240e7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7df240e7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7df240e7

Branch: refs/heads/trunk
Commit: 7df240e74f0bda9a15eff3c9de02eb0cd8771b20
Parents: 9359e1e
Author: Blake Eggleston 
Authored: Mon Jul 3 15:16:35 2017 -0700
Committer: Blake Eggleston 
Committed: Thu Jul 6 10:17:59 2017 -0700

--
 CHANGES.txt |  1 +
 .../repair/consistent/LocalSessions.java| 21 ++--
 2 files changed, 20 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7df240e7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 52bb6d2..22045e8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Flush system.repair table before IR finalize promise (CASSANDRA-13660)
  * Fix column filter creation for wildcard queries (CASSANDRA-13650)
  * Add 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (CASSANDRA-13614)
  * fix race condition in PendingRepairManager (CASSANDRA-13659)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7df240e7/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
--
diff --git a/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java 
b/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
index 72ec50b..61df2b0 100644
--- a/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
+++ b/src/java/org/apache/cassandra/repair/consistent/LocalSessions.java
@@ -374,6 +374,13 @@ public class LocalSessions
 QueryProcessor.executeInternal(String.format(query, keyspace, table), 
sessionID);
 }
 
+private void syncTable()
+{
+TableId tid = Schema.instance.getTableMetadata(keyspace, table).id;
+ColumnFamilyStore cfm = 
Schema.instance.getColumnFamilyStoreInstance(tid);
+cfm.forceBlockingFlush();
+}
+
 /**
  * Loads a session directly from the table. Should be used for testing only
  */
@@ -585,7 +592,7 @@ public class LocalSessions
 LocalSession session = getSession(sessionID);
 if (session == null)
 {
-logger.debug("Received FinalizePropose message for unknown repair 
session {}, responding with failure");
+logger.debug("Received FinalizePropose message for unknown repair 
session {}, responding with failure", sessionID);
 sendMessage(from, new FailSession(sessionID));
 return;
 }
@@ -593,8 +600,18 @@ public class LocalSessions
 try
 {
 setStateAndSave(session, FINALIZE_PROMISED);
+
+/*
+ Flushing the repairs table here, *before* responding to the 
coordinator prevents a scenario where we respond
+ with a promise to the coordinator, but there is a failure before 
the commit log mutation with the
+ FINALIZE_PROMISED status is synced to disk. This could cause the 
state for this session to revert to an
+ earlier status on startup, which would prevent the failure 
recovery mechanism from ever being able to promote
+ this session to FINALIZED, likely creating inconsistencies in the 
repaired data sets across nodes.
+ */
+syncTable();
+
 sendMessage(from, new FinalizePromise(sessionID, 
getBroadcastAddress(), true));
-logger.debug("Received FinalizePropose message for incremental 
repair session {}, responded with FinalizePromise");
+logger.debug("Received FinalizePropose message for incremental 
repair session {}, responded with FinalizePromise", sessionID);
 }
 catch (IllegalArgumentException e)
 {





[jira] [Commented] (CASSANDRA-13505) dtest failure in user_functions_test.TestUserFunctions.test_migration

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076904#comment-16076904
 ] 

Ariel Weisberg commented on CASSANDRA-13505:


I checked, and it's listed as having failed once in the last 26 builds in 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest-novnode/112/testReport/user_functions_test/TestUserFunctions/
but so are all the other tests, which suggests you are right and it was just a 
bad build.

So let's close, and re-open if necessary.

> dtest failure in user_functions_test.TestUserFunctions.test_migration
> -
>
> Key: CASSANDRA-13505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>  Labels: dtest, test-failure
>
> {noformat}
> Failed 1 times in the last 10 runs. Flakiness: 11%, Stability: 90%
> Error Message
>  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.nio.file.NoSuchFileException: 
> /tmp/dtest-c0Kk_e/test/node3/data2/system_schema/keyspaces-abac5682dea631c5b535b3d6cffd0fb6/na_txn_flush_1304bca0-2b13-11e7-9307-c95a627b1fe3.log">
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-c0Kk_e
> dtest: DEBUG: Done setting configuration options:
> {   'enable_scripted_user_defined_functions': 'true',
> 'enable_user_defined_functions': 'true',
> 'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5}
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/user_functions_test.py",
>  line 47, in test_migration
> create_ks(schema_wait_session, 'ks', 1)
>   File 
> 

[jira] [Resolved] (CASSANDRA-13505) dtest failure in user_functions_test.TestUserFunctions.test_migration

2017-07-06 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg resolved CASSANDRA-13505.

Resolution: Not A Problem

> dtest failure in user_functions_test.TestUserFunctions.test_migration
> -
>
> Key: CASSANDRA-13505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>  Labels: dtest, test-failure
>
> {noformat}
> Failed 1 times in the last 10 runs. Flakiness: 11%, Stability: 90%
> Error Message
>  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.nio.file.NoSuchFileException: 
> /tmp/dtest-c0Kk_e/test/node3/data2/system_schema/keyspaces-abac5682dea631c5b535b3d6cffd0fb6/na_txn_flush_1304bca0-2b13-11e7-9307-c95a627b1fe3.log">
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-c0Kk_e
> dtest: DEBUG: Done setting configuration options:
> {   'enable_scripted_user_defined_functions': 'true',
> 'enable_user_defined_functions': 'true',
> 'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5}
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/user_functions_test.py",
>  line 47, in test_migration
> create_ks(schema_wait_session, 'ks', 1)
>   File 
> "/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/dtest.py",
>  line 725, in create_ks
> session.execute(query % (name, "'class':'SimpleStrategy', 
> 'replication_factor':%d" % rf))
>   File 
> "/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/venv/src/cassandra-driver/cassandra/cluster.py",
>  line 2018, in execute
> return 

[jira] [Updated] (CASSANDRA-13664) RangeFetchMapCalculator should not try to optimise 'trivial' ranges

2017-07-06 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13664:
---
Reviewer: Ariel Weisberg

> RangeFetchMapCalculator should not try to optimise 'trivial' ranges
> ---
>
> Key: CASSANDRA-13664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13664
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> RangeFetchMapCalculator (CASSANDRA-4650) tries to make the number of streams 
> out of each node as even as possible.
> In a typical multi-dc ring the nodes in the dcs are set up using token + 1, 
> creating many tiny ranges. If we only try to optimise over the number of 
> streams, it is likely that the amount of data streamed out of each node is 
> unbalanced.
> We should ignore those trivial ranges and only optimise the big ones, then 
> share the tiny ones over the nodes.
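
The proposed trivial/big split could look roughly like this (an illustrative 
sketch with an assumed size threshold; `TrivialRangeSplit` is hypothetical and 
not the RangeFetchMapCalculator code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the idea: ranges whose token span is below a cutoff are treated
// as "trivial" and simply dealt round-robin across the candidate nodes, while
// only the big ranges are handed to the stream-balancing optimiser.
class TrivialRangeSplit {
    record TokenRange(long left, long right) {
        long size() { return right - left; }
    }

    static Map<String, List<TokenRange>> assign(List<TokenRange> ranges,
                                                List<String> nodes,
                                                long trivialThreshold) {
        Map<String, List<TokenRange>> out = new HashMap<>();
        nodes.forEach(n -> out.put(n, new ArrayList<>()));
        List<TokenRange> big = new ArrayList<>();
        int rr = 0;
        for (TokenRange r : ranges) {
            if (r.size() < trivialThreshold) {
                // trivial: spread evenly without involving the optimiser
                out.get(nodes.get(rr++ % nodes.size())).add(r);
            } else {
                big.add(r);   // big: hand off to the graph-based solver
            }
        }
        // optimiseBigRanges(big, out); // stand-in for the actual optimiser
        return out;
    }
}
```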






[jira] [Commented] (CASSANDRA-13664) RangeFetchMapCalculator should not try to optimise 'trivial' ranges

2017-07-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076867#comment-16076867
 ] 

Ariel Weisberg commented on CASSANDRA-13664:


The changes make sense to me. Can you run the dtests just to sanity check?

> RangeFetchMapCalculator should not try to optimise 'trivial' ranges
> ---
>
> Key: CASSANDRA-13664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13664
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> RangeFetchMapCalculator (CASSANDRA-4650) tries to make the number of streams 
> out of each node as even as possible.
> In a typical multi-dc ring the nodes in the dcs are set up using token + 1, 
> creating many tiny ranges. If we only try to optimise over the number of 
> streams, it is likely that the amount of data streamed out of each node is 
> unbalanced.
> We should ignore those trivial ranges and only optimise the big ones, then 
> share the tiny ones over the nodes.






[jira] [Commented] (CASSANDRA-13655) Range deletes in a CAS batch are ignored

2017-07-06 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076663#comment-16076663
 ] 

Sylvain Lebresne commented on CASSANDRA-13655:
--

I can indeed have a look, but it won't be before next week, so if there are 
any other takers in the meantime, I won't get mad.

> Range deletes in a CAS batch are ignored
> 
>
> Key: CASSANDRA-13655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13655
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Range deletes in a CAS batch are ignored 






[jira] [Updated] (CASSANDRA-13069) Local batchlog for MV may not be correctly written on node movements

2017-07-06 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13069:

Component/s: Materialized Views

> Local batchlog for MV may not be correctly written on node movements
> 
>
> Key: CASSANDRA-13069
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13069
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: Paulo Motta
>
> Unless I'm really reading this wrong, I think the code 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageProxy.java#L829-L843],
>  which comes from CASSANDRA-10674, isn't working properly.
> More precisely, I believe we can have both paired and unpaired mutations, so 
> that both {{if}}s can be taken, but if that's the case, the second write to the 
> batchlog will basically overwrite (remove) the batchlog write of the first 
> {{if}}, and I don't think that's the intention. In practice, this means 
> "paired" mutations won't be in the batchlog, which means they won't be replayed 
> at all if they fail.
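The shape of the problem (and of the likely fix) can be sketched with illustrative names — this is not the real StorageProxy/batchlog API: two successive writes under the same batch id replace each other, whereas a single combined write keeps both mutation groups:

```java
import java.util.*;

public class BatchlogSketch {
    // Stand-in for the batchlog table, keyed by batch id.
    static Map<UUID, List<String>> batchlog = new HashMap<>();

    // Buggy shape: each call replaces the previous entry for the same id,
    // so a second write drops the mutations stored by the first.
    static void writeOverwriting(UUID id, List<String> mutations) {
        batchlog.put(id, new ArrayList<>(mutations));
    }

    // Fixed shape: accumulate paired and unpaired mutations and persist them
    // in one batchlog entry, so neither group is lost.
    static void writeCombined(UUID id, List<String> paired, List<String> unpaired) {
        List<String> all = new ArrayList<>(paired);
        all.addAll(unpaired);
        batchlog.put(id, all);
    }
}
```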






[jira] [Updated] (CASSANDRA-13650) cql_tests:SlowQueryTester.local_query_test and cql_tests:SlowQueryTester.remote_query_test failed on trunk

2017-07-06 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13650:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> cql_tests:SlowQueryTester.local_query_test and 
> cql_tests:SlowQueryTester.remote_query_test failed on trunk
> --
>
> Key: CASSANDRA-13650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13650
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 4.x
>
>
> cql_tests.py:SlowQueryTester.local_query_test failed on trunk
> cql_tests.py:SlowQueryTester.remote_query_test failed on trunk
> SHA: fe3cfe3d7df296f022c50c9c0d22f91a0fc0a217
> It's due to the dtest being unable to find the {{'SELECT \* FROM ks.test1'}} 
> pattern in the log.
> Instead, the log shows the following info: 
> {{MonitoringTask.java:173 - 1 operations were slow in the last 10 msecs: 
> , time 102 msec - slow timeout 10 msec}}
> ColumnFilter.toString() should return {{*}}, but returns the normal column 
> {{val}} instead.
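A minimal model of the wildcard convention at issue here (an illustration, not the real ColumnFilter class): `queried == null` encodes a wildcard query, so accessors must fall back to the fetched set and `toString()` should render `*` rather than listing columns:

```java
import java.util.*;

public class ColumnFilterModel {
    final Set<String> fetched;
    final Set<String> queried; // null means wildcard: everything fetched is queried

    ColumnFilterModel(Set<String> fetched, Set<String> queried) {
        this.fetched = fetched;
        this.queried = queried;
    }

    // For a wildcard query, fall back to the fetched columns instead of
    // returning null.
    Set<String> queriedColumns() {
        return queried == null ? fetched : queried;
    }

    // A wildcard query renders as "*", not as an explicit column list.
    @Override
    public String toString() {
        return queried == null ? "*" : String.join(", ", queried);
    }
}
```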






[jira] [Updated] (CASSANDRA-13650) cql_tests:SlowQueryTester.local_query_test and cql_tests:SlowQueryTester.remote_query_test failed on trunk

2017-07-06 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13650:

Status: Ready to Commit  (was: Patch Available)

> cql_tests:SlowQueryTester.local_query_test and 
> cql_tests:SlowQueryTester.remote_query_test failed on trunk
> --
>
> Key: CASSANDRA-13650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13650
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 4.x
>
>
> cql_tests.py:SlowQueryTester.local_query_test failed on trunk
> cql_tests.py:SlowQueryTester.remote_query_test failed on trunk
> SHA: fe3cfe3d7df296f022c50c9c0d22f91a0fc0a217
> It's due to the dtest being unable to find the {{'SELECT \* FROM ks.test1'}} 
> pattern in the log.
> Instead, the log shows the following info: 
> {{MonitoringTask.java:173 - 1 operations were slow in the last 10 msecs: 
> , time 102 msec - slow timeout 10 msec}}
> ColumnFilter.toString() should return {{*}}, but returns the normal column 
> {{val}} instead.






[jira] [Commented] (CASSANDRA-13650) cql_tests:SlowQueryTester.local_query_test and cql_tests:SlowQueryTester.remote_query_test failed on trunk

2017-07-06 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076502#comment-16076502
 ] 

Alex Petrov commented on CASSANDRA-13650:
-

Thank you for the patch! Committed with a minor change: moved the {{queried == 
null}} check into the ternary operator instead of the outer {{if}} (the way it 
used to be in pre-13004 code).

Committed to trunk with 
[9359e1e977361774daf27e80112774210e55baa4|https://github.com/apache/cassandra/commit/9359e1e977361774daf27e80112774210e55baa4]

> cql_tests:SlowQueryTester.local_query_test and 
> cql_tests:SlowQueryTester.remote_query_test failed on trunk
> --
>
> Key: CASSANDRA-13650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13650
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 4.x
>
>
> cql_tests.py:SlowQueryTester.local_query_test failed on trunk
> cql_tests.py:SlowQueryTester.remote_query_test failed on trunk
> SHA: fe3cfe3d7df296f022c50c9c0d22f91a0fc0a217
> It's due to the dtest being unable to find the {{'SELECT \* FROM ks.test1'}} 
> pattern in the log.
> Instead, the log shows the following info: 
> {{MonitoringTask.java:173 - 1 operations were slow in the last 10 msecs: 
> , time 102 msec - slow timeout 10 msec}}
> ColumnFilter.toString() should return {{*}}, but returns the normal column 
> {{val}} instead.






cassandra git commit: Fix column filter creation for wildcard queries

2017-07-06 Thread ifesdjeen
Repository: cassandra
Updated Branches:
  refs/heads/trunk 613a8b43d -> 9359e1e97


Fix column filter creation for wildcard queries

Patch by Zhao Yang; reviewed by Alex Petrov for CASSANDRA-13650

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9359e1e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9359e1e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9359e1e9

Branch: refs/heads/trunk
Commit: 9359e1e977361774daf27e80112774210e55baa4
Parents: 613a8b4
Author: Zhao Yang 
Authored: Sat Jul 1 11:12:41 2017 +0800
Committer: Alex Petrov 
Committed: Thu Jul 6 15:28:33 2017 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/filter/ColumnFilter.java   |  28 ++--
 .../cassandra/db/filter/ColumnFilterTest.java   | 137 ++-
 3 files changed, 144 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359e1e9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index aa98554..52bb6d2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Fix column filter creation for wildcard queries (CASSANDRA-13650)
  * Add 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (CASSANDRA-13614)
  * fix race condition in PendingRepairManager (CASSANDRA-13659)
  * Allow noop incremental repair state transitions (CASSANDRA-13658)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359e1e9/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
index 58c4cec..1d7d1c8 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnFilter.java
@@ -69,11 +69,11 @@ public class ColumnFilter
 
 // True if _fetched_ includes all regular columns (and any static in 
_queried_), in which case metadata must not be
 // null. If false, then _fetched_ == _queried_ and we only store _queried_.
-public final boolean fetchAllRegulars;
+final boolean fetchAllRegulars;
 
-private final RegularAndStaticColumns fetched;
-private final RegularAndStaticColumns queried; // can be null if 
fetchAllRegulars, to represent a wildcard query (all
-   // static and regular 
columns are both _fetched_ and _queried_).
+final RegularAndStaticColumns fetched;
+final RegularAndStaticColumns queried; // can be null if fetchAllRegulars, 
to represent a wildcard query (all
+   // static and regular columns are 
both _fetched_ and _queried_).
 private final SortedSetMultimap 
subSelections; // can be null
 
 private ColumnFilter(boolean fetchAllRegulars,
@@ -88,23 +88,17 @@ public class ColumnFilter
 if (fetchAllRegulars)
 {
 RegularAndStaticColumns all = metadata.regularAndStaticColumns();
-if (queried == null)
-{
-this.fetched = this.queried = all;
-}
-else
-{
-this.fetched = all.statics.isEmpty()
-   ? all
-   : new RegularAndStaticColumns(queried.statics, 
all.regulars);
-this.queried = queried;
-}
+
+this.fetched = (all.statics.isEmpty() || queried == null)
+   ? all
+   : new RegularAndStaticColumns(queried.statics, 
all.regulars);
 }
 else
 {
-this.fetched = this.queried = queried;
+this.fetched = queried;
 }
 
+this.queried = queried;
 this.subSelections = subSelections;
 }
 
@@ -170,7 +164,7 @@ public class ColumnFilter
  */
 public RegularAndStaticColumns queriedColumns()
 {
-return queried;
+return queried == null ? fetched : queried;
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359e1e9/test/unit/org/apache/cassandra/db/filter/ColumnFilterTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/filter/ColumnFilterTest.java 
b/test/unit/org/apache/cassandra/db/filter/ColumnFilterTest.java
index fa08950..15bfb9c 100644
--- a/test/unit/org/apache/cassandra/db/filter/ColumnFilterTest.java
+++ b/test/unit/org/apache/cassandra/db/filter/ColumnFilterTest.java
@@ -18,10 +18,15 @@
 
 package 

[jira] [Updated] (CASSANDRA-13677) Make SASI timeouts easier to debug

2017-07-06 Thread Corentin Chary (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Corentin Chary updated CASSANDRA-13677:
---
Description: 
This would now give something like:
{code}
WARN  [ReadStage-15] 2017-06-08 12:47:57,799 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-15,5,main]: {}
java.lang.RuntimeException: 
org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: Command 
'Read(biggraphite_metadata.directories columns=* rowfilter=component_0 = criteo 
limits=LIMIT 5000 range=(min(-9223372036854775808), min(-9223372036854775808)] 
pfilter=names(EMPTY))' took too long (100 > 100ms).
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_131]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
 [main/:na]
at 
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: 
org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: Command 
'Read(biggraphite_metadata.directories columns=* rowfilter=component_0 = criteo 
limits=LIMIT 5000 range=(min(-9223372036854775808), min(-9223372036854775808)] 
pfilter=names(EMPTY))' took too long (100 > 100ms).
at 
org.apache.cassandra.index.sasi.plan.QueryController.checkpoint(QueryController.java:163)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.plan.QueryController.getPartition(QueryController.java:117)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:116)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:310)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
~[main/:na]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:333) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1884)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
 ~[main/:na]
... 5 common frames omitted
{code}

Not having the query in the log makes this very hard to debug. Even worse, because 
the query may be aborted before the slow_query threshold is reached, it won't 
show up as a slow query either.
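The gist of the improvement can be sketched like this (illustrative names, not the actual SASI QueryController code): embed the read command's description in the timeout exception so the offending query is visible in the log:

```java
public class TimeQuotaSketch {
    static class TimeQuotaExceededException extends RuntimeException {
        TimeQuotaExceededException(String msg) { super(msg); }
    }

    // Called periodically while executing a query; once the quota is spent,
    // fail with a message that names the command, mirroring the log excerpt
    // above ("Command '...' took too long (150 > 100ms).").
    static void checkpoint(String commandDescription, long elapsedMs, long timeoutMs) {
        if (elapsedMs >= timeoutMs)
            throw new TimeQuotaExceededException(
                String.format("Command '%s' took too long (%d > %dms).",
                              commandDescription, elapsedMs, timeoutMs));
    }
}
```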

  was:
This would now give something like:
{code}
WARN  [ReadStage-15] 2017-06-08 12:47:57,799 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-15,5,main]: {}
java.lang.RuntimeException: 
org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: Command 
'Read(biggraphite_metadata.directories columns=* rowfilter=component_0 = criteo 
limits=LIMIT 5000 range=(min(-9223372036854775808), min(-9223372036854775808)] 
pfilter=names(EMPTY))' took too long (100 > 100ms).
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_131]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
 [main/:na]
at 
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: 
org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: Command 
'Read(biggraphite_metadata.directories columns=* 

[jira] [Created] (CASSANDRA-13677) Make SASI timeouts easier to debug

2017-07-06 Thread Corentin Chary (JIRA)
Corentin Chary created CASSANDRA-13677:
--

 Summary: Make SASI timeouts easier to debug
 Key: CASSANDRA-13677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13677
 Project: Cassandra
  Issue Type: Improvement
  Components: sasi
Reporter: Corentin Chary
Assignee: Corentin Chary
Priority: Minor
 Fix For: 4.x


This would now give something like:
{code}
WARN  [ReadStage-15] 2017-06-08 12:47:57,799 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-15,5,main]: {}
java.lang.RuntimeException: 
org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: Command 
'Read(biggraphite_metadata.directories columns=* rowfilter=component_0 = criteo 
limits=LIMIT 5000 range=(min(-9223372036854775808), min(-9223372036854775808)] 
pfilter=names(EMPTY))' took too long (100 > 100ms).
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_131]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
 [main/:na]
at 
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: 
org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: Command 
'Read(biggraphite_metadata.directories columns=* rowfilter=component_0 = criteo 
limits=LIMIT 5000 range=(min(-9223372036854775808), min(-9223372036854775808)] 
pfilter=names(EMPTY))' took too long (100 > 100ms).
at 
org.apache.cassandra.index.sasi.plan.QueryController.checkpoint(QueryController.java:163)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.plan.QueryController.getPartition(QueryController.java:117)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:116)
 ~[main/:na]
at 
org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:310)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
~[main/:na]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:333) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1884)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
 ~[main/:na]
... 5 common frames omitted
{code}

Not having the query makes it super hard to debug






[jira] [Updated] (CASSANDRA-13677) Make SASI timeouts easier to debug

2017-07-06 Thread Corentin Chary (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Corentin Chary updated CASSANDRA-13677:
---
Attachment: 0001-SASI-Make-timeouts-easier-to-debug.patch

> Make SASI timeouts easier to debug
> --
>
> Key: CASSANDRA-13677
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13677
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 0001-SASI-Make-timeouts-easier-to-debug.patch
>
>
> This would now give something like:
> {code}
> WARN  [ReadStage-15] 2017-06-08 12:47:57,799 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-15,5,main]: {}
> java.lang.RuntimeException: 
> org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: 
> Command 'Read(biggraphite_metadata.directories columns=* 
> rowfilter=component_0 = criteo limits=LIMIT 5000 
> range=(min(-9223372036854775808), min(-9223372036854775808)] 
> pfilter=names(EMPTY))' took too long (100 > 100ms).
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_131]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  ~[main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [main/:na]
> at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: 
> org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: 
> Command 'Read(biggraphite_metadata.directories columns=* 
> rowfilter=component_0 = criteo limits=LIMIT 5000 
> range=(min(-9223372036854775808), min(-9223372036854775808)] 
> pfilter=names(EMPTY))' took too long (100 > 100ms).
> at 
> org.apache.cassandra.index.sasi.plan.QueryController.checkpoint(QueryController.java:163)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.plan.QueryController.getPartition(QueryController.java:117)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:116)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:310)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:333) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1884)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
>  ~[main/:na]
> ... 5 common frames omitted
> {code}
> Not having the query makes it super hard to debug






[jira] [Updated] (CASSANDRA-13677) Make SASI timeouts easier to debug

2017-07-06 Thread Corentin Chary (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Corentin Chary updated CASSANDRA-13677:
---
Status: Patch Available  (was: Open)

> Make SASI timeouts easier to debug
> --
>
> Key: CASSANDRA-13677
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13677
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 0001-SASI-Make-timeouts-easier-to-debug.patch
>
>
> This would now give something like:
> {code}
> WARN  [ReadStage-15] 2017-06-08 12:47:57,799 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-15,5,main]: {}
> java.lang.RuntimeException: 
> org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: 
> Command 'Read(biggraphite_metadata.directories columns=* 
> rowfilter=component_0 = criteo limits=LIMIT 5000 
> range=(min(-9223372036854775808), min(-9223372036854775808)] 
> pfilter=names(EMPTY))' took too long (100 > 100ms).
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_131]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  ~[main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [main/:na]
> at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: 
> org.apache.cassandra.index.sasi.exceptions.TimeQuotaExceededException: 
> Command 'Read(biggraphite_metadata.directories columns=* 
> rowfilter=component_0 = criteo limits=LIMIT 5000 
> range=(min(-9223372036854775808), min(-9223372036854775808)] 
> pfilter=names(EMPTY))' took too long (100 > 100ms).
> at 
> org.apache.cassandra.index.sasi.plan.QueryController.checkpoint(QueryController.java:163)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.plan.QueryController.getPartition(QueryController.java:117)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:116)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:310)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:333) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1884)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
>  ~[main/:na]
> ... 5 common frames omitted
> {code}
> Not having the query makes it super hard to debug






[jira] [Updated] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-07-06 Thread Andrés de la Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-13614:
--
Fix Version/s: 4.x

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
> Fix For: 4.x
>
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically with 
> JMX and possibly nodetool.
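What "dynamically configurable with JMX" amounts to can be sketched as a standard MBean with a getter/setter pair (the names and ObjectName here are illustrative, not Cassandra's actual StorageService MBean):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ThrottleSketch {
    // Standard MBean interface: JMX derives the attribute
    // "BatchlogReplayThrottleInKB" from the getter/setter names.
    public interface BatchlogThrottleMBean {
        int getBatchlogReplayThrottleInKB();
        void setBatchlogReplayThrottleInKB(int kb);
    }

    public static class BatchlogThrottle implements BatchlogThrottleMBean {
        // Default mirrors batchlog_replay_throttle_in_kb in cassandra.yaml.
        private volatile int throttleKb = 1024;
        public int getBatchlogReplayThrottleInKB() { return throttleKb; }
        public void setBatchlogReplayThrottleInKB(int kb) { throttleKb = kb; }
    }

    // Register the bean so JMX clients (jconsole, a nodetool subcommand)
    // can change the throttle at runtime without a restart.
    public static BatchlogThrottle register() throws Exception {
        BatchlogThrottle bean = new BatchlogThrottle();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(bean, new ObjectName("org.example:type=BatchlogThrottle"));
        return bean;
    }
}
```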






[jira] [Commented] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-07-06 Thread Andrés de la Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076382#comment-16076382
 ] 

Andrés de la Peña commented on CASSANDRA-13614:
---

Dtests committed as 
[8cd52d67587ddb5efc80366ff6c6a044c30b41d3|https://github.com/riptano/cassandra-dtest/commit/8cd52d67587ddb5efc80366ff6c6a044c30b41d3].

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically with 
> JMX and possibly nodetool.






[jira] [Updated] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-07-06 Thread Andrés de la Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-13614:
--
Resolution: Done
Status: Resolved  (was: Ready to Commit)

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically with 
> JMX and possibly nodetool.






[jira] [Commented] (CASSANDRA-9909) Configuration is loaded too often during runtime

2017-07-06 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076327#comment-16076327
 ] 

Sylvain Lebresne commented on CASSANDRA-9909:
-

bq. For proposal #3, you assume the seeds are being written out to a file, 
which is not always the case

Pretty sure proposal #3 suggested modifying the reloading of the config within 
{{SimpleSeedProvider}} (say, replacing the call to {{loadConfig}} with one that 
reloads the config only if it's out of date), so it wouldn't affect other seed 
providers in the least and is imo a perfectly valid option. In fact, I'd argue 
it's probably better than proposal #2.
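Proposal #3 could be sketched roughly as follows. This is a hypothetical illustration only, not the actual {{SimpleSeedProvider}} code; the class name, the mtime guard, and the use of a plain String in place of the parsed Config are all invented for the example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

// Hypothetical sketch of proposal #3: cache the parsed config and re-read it
// only when the file's modification timestamp changes. A String stands in
// for the real parsed Config object.
public class MtimeGuardedConfigLoader
{
    private final Path configPath;
    private FileTime lastModified; // mtime observed at the last load
    private String cachedConfig;   // placeholder for the parsed Config

    public MtimeGuardedConfigLoader(Path configPath)
    {
        this.configPath = configPath;
    }

    // Returns the cached config unless the file changed on disk since the
    // last load, in which case it is re-read.
    public synchronized String loadIfChanged() throws IOException
    {
        FileTime current = Files.getLastModifiedTime(configPath);
        if (cachedConfig == null || !current.equals(lastModified))
        {
            cachedConfig = new String(Files.readAllBytes(configPath));
            lastModified = current;
        }
        return cachedConfig;
    }

    public static void main(String[] args) throws IOException
    {
        Path tmp = Files.createTempFile("cassandra", ".yaml");
        Files.write(tmp, "seeds: \"127.0.0.1\"".getBytes());
        MtimeGuardedConfigLoader loader = new MtimeGuardedConfigLoader(tmp);
        System.out.println(loader.loadIfChanged()); // first call reads from disk
        System.out.println(loader.loadIfChanged()); // second call hits the cache
    }
}
```

Because the guard lives entirely inside the provider, other SeedProvider implementations are untouched.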

> Configuration is loaded too often during runtime
> 
>
> Key: CASSANDRA-9909
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9909
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Priority: Minor
>
> Each call to {{SimpleSeedProvider.getSeeds()}} via 
> {{DatabaseDescriptor.getSeeds()}} loads the configuration file from disk.
> This is unnecessary in the vast majority of calls from {{Gossiper}} and 
> {{StorageService}}.
> Proposal:
> * Instantiate {{ConfigurationLoader}} once during init of DD (not every time 
> in {{loadConfig()}})
> * Only load configuration once per time interval
> * Only load configuration if config file has changed (file modification 
> timestamp) - if applicable (URL resolves to a file)






[jira] [Commented] (CASSANDRA-13671) nodes compute their own gcBefore times for validation compactions

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076267#comment-16076267
 ] 

Marcus Eriksson commented on CASSANDRA-13671:
-

+1

> nodes compute their own gcBefore times for validation compactions
> -
>
> Key: CASSANDRA-13671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13671
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> {{doValidationCompaction}} computes {{gcBefore}} based on the time the method 
> is called. If different nodes start validation on different seconds, 
> tombstones might not be purged consistently, leading to overstreaming.
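A minimal illustration of the fix direction described by this ticket: the coordinator computes {{gcBefore}} once from its own clock and ships that single value to every replica's validation, so all nodes purge against the same cutoff. This is a hypothetical sketch, not the committed patch; the class and method names are invented:

```java
// Hypothetical sketch (not the committed patch): the repair coordinator
// computes gcBefore once and passes that single value to every replica,
// so validation compactions purge tombstones against the same cutoff.
public class ValidationTime
{
    // gcBefore is a cutoff in seconds since the epoch: tombstones older
    // than this may be purged during the validation compaction.
    public static int gcBefore(long nowMillis, int gcGraceSeconds)
    {
        return (int) (nowMillis / 1000) - gcGraceSeconds;
    }

    public static void main(String[] args)
    {
        // Computed once on the coordinator, not per replica:
        long coordinatorNow = System.currentTimeMillis();
        int gcBefore = gcBefore(coordinatorNow, 864000); // default gc_grace_seconds (10 days)
        System.out.println("gcBefore = " + gcBefore);
    }
}
```

If each replica instead called its own clock, validations starting on different seconds could purge different tombstone sets and produce mismatching Merkle trees.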






[jira] [Updated] (CASSANDRA-13671) nodes compute their own gcBefore times for validation compactions

2017-07-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13671:

Status: Ready to Commit  (was: Patch Available)

> nodes compute their own gcBefore times for validation compactions
> -
>
> Key: CASSANDRA-13671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13671
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> {{doValidationCompaction}} computes {{gcBefore}} based on the time the method 
> is called. If different nodes start validation on different seconds, 
> tombstones might not be purged consistently, leading to overstreaming.






[jira] [Commented] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-07-06 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076258#comment-16076258
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13615:
---

Hi [~jjirsa] - can you create a Jenkins job on ppc64le to validate the 
creation of "libsigar-ppc64le-linux.so" using the steps below:

$ git clone https://github.com/hyperic/sigar.git
$ cd sigar
$ git checkout sigar-1.6.4
$ cd bindings/java
$ ant
$ ls -l sigar-bin/lib
total 740
-rw-r--r-- 1 root root   1127 Jun 16 05:03 history.xml
-rw-r--r-- 1 root root 313128 Jun 16 05:03 libsigar-ppc64le-linux.so
-rw-r--r-- 1 root root 435772 Jun 16 05:03 sigar.jar

The included version of SIGAR (sigar-1.6.4.jar) does not support ppc64le, so 
copy the newly built library into place:

$ cp libsigar-ppc64le-linux.so $CASSANDRA_HOME/lib/sigar-bin
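The file listing above shows why an arch-specific library is needed: the native libraries follow the pattern libsigar-<arch>-linux.so. A small hypothetical check (not part of Cassandra or SIGAR) that the library matching this JVM's architecture is in place before starting the node:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper (not part of Cassandra or SIGAR): verify that a native
// SIGAR library matching this JVM's architecture is present under
// lib/sigar-bin before starting the node.
public class SigarLibCheck
{
    // The files built above follow the pattern libsigar-<arch>-linux.so,
    // where <arch> matches the JVM's os.arch (e.g. ppc64le, amd64).
    public static String expectedLibName()
    {
        return "libsigar-" + System.getProperty("os.arch") + "-linux.so";
    }

    public static void main(String[] args)
    {
        Path lib = Paths.get("lib", "sigar-bin", expectedLibName());
        System.out.println(lib + (Files.exists(lib) ? " is present" : " is MISSING"));
    }
}
```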


> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the community has been inactive for a long 
> time (https://github.com/hyperic/sigar), requesting the community to include 
> the ppc64le library directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues/dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit






[jira] [Commented] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-07-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076224#comment-16076224
 ] 

Andrés de la Peña commented on CASSANDRA-13614:
---

Committed to trunk as 
[613a8b43d2b5a425080653898b28bde6cd7eb9ba|https://github.com/apache/cassandra/commit/613a8b43d2b5a425080653898b28bde6cd7eb9ba].

Created [PR 1491|https://github.com/riptano/cassandra-dtest/pull/1491] for 
dtests.

Thanks for the review!

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able to modify it dynamically with 
> JMX and possibly nodetool.






[jira] [Commented] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-06 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076220#comment-16076220
 ] 

Benjamin Lerer commented on CASSANDRA-13072:


I had a look at CASSANDRA-13300 which added support for {{jna-4.3.0}} in trunk 
and it seems, according to [~yukim]'s comment 
[here|https://issues.apache.org/jira/browse/CASSANDRA-13300?focusedCommentId=15896870=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15896870],
 that {{jna-4.2.2}} will also work fine on {{ppc64le}} machines.

I dug a bit and found out that {{jna-4.0.0}} has been bundled with C* since 
{{2.1 beta}}. So, I should not have upgraded {{jna}} in {{3.0}} (I should have 
followed your advice [~jjirsa] :-( ).

Now, the situation is a bit trickier. If I roll back the jar to {{4.0.0}}, I 
will introduce a regression for {{Linux-aarch64}}. Given that, my only safe 
option seems to be changing the JNA version to {{4.2.2}}, as it appears to 
work for everybody.

[~jjirsa] and [~jasobrown], what is your opinion?

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replacing lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ fixes this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there a possibility of upgrading jna to 
> 4.2.0 upstream? If there are any tests to execute, please kindly point me 
> to them.

cassandra git commit: Add 'nodetool getbatchlogreplaythrottle' and 'nodetool setbatchlogreplaythrottle'

2017-07-06 Thread adelapena
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6a7fad601 -> 613a8b43d


Add 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle'

patch by Andres de la Peña; reviewed by Paulo Motta for CASSANDRA-13614


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/613a8b43
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/613a8b43
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/613a8b43

Branch: refs/heads/trunk
Commit: 613a8b43d2b5a425080653898b28bde6cd7eb9ba
Parents: 6a7fad6
Author: Andrés de la Peña 
Authored: Thu Jul 6 10:09:29 2017 +0100
Committer: Andrés de la Peña 
Committed: Thu Jul 6 10:09:29 2017 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java | 26 --
 .../cassandra/config/DatabaseDescriptor.java|  9 +++--
 .../cassandra/service/StorageService.java   | 11 ++
 .../cassandra/service/StorageServiceMBean.java  |  3 ++
 .../org/apache/cassandra/tools/NodeProbe.java   | 10 ++
 .../org/apache/cassandra/tools/NodeTool.java|  2 ++
 .../nodetool/GetBatchlogReplayTrottle.java  | 33 +
 .../nodetool/SetBatchlogReplayThrottle.java | 37 
 9 files changed, 128 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/613a8b43/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 98c9cad..aa98554 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (CASSANDRA-13614)
  * fix race condition in PendingRepairManager (CASSANDRA-13659)
  * Allow noop incremental repair state transitions (CASSANDRA-13658)
  * Run repair with down replicas (CASSANDRA-10446)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/613a8b43/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java 
b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
index 321fca6..9ca7acf 100644
--- a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
@@ -76,6 +76,8 @@ public class BatchlogManager implements BatchlogManagerMBean
 // Single-thread executor service for scheduling and serializing log 
replay.
 private final ScheduledExecutorService batchlogTasks;
 
+private final RateLimiter rateLimiter = 
RateLimiter.create(Double.MAX_VALUE);
+
 public BatchlogManager()
 {
 ScheduledThreadPoolExecutor executor = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
@@ -194,8 +196,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 logger.trace("Replay cancelled as there are no peers in the 
ring.");
 return;
 }
-int throttleInKB = DatabaseDescriptor.getBatchlogReplayThrottleInKB() 
/ endpointsCount;
-RateLimiter rateLimiter = RateLimiter.create(throttleInKB == 0 ? 
Double.MAX_VALUE : throttleInKB * 1024);
+setRate(DatabaseDescriptor.getBatchlogReplayThrottleInKB());
 
 UUID limitUuid = UUIDGen.maxTimeUUID(System.currentTimeMillis() - 
getBatchlogTimeout());
 ColumnFamilyStore store = 
Keyspace.open(SchemaConstants.SYSTEM_KEYSPACE_NAME).getColumnFamilyStore(SystemKeyspace.BATCHES);
@@ -212,6 +213,27 @@ public class BatchlogManager implements 
BatchlogManagerMBean
 logger.trace("Finished replayFailedBatches");
 }
 
+/**
+ * Sets the rate for the current rate limiter. When {@code throttleInKB} 
is 0, this sets the rate to
+ * {@link Double#MAX_VALUE} bytes per second.
+ *
+ * @param throttleInKB throughput to set in KB per second
+ */
+public void setRate(final int throttleInKB)
+{
+int endpointsCount = 
StorageService.instance.getTokenMetadata().getSizeOfAllEndpoints();
+if (endpointsCount > 0)
+{
+int endpointThrottleInKB = throttleInKB / endpointsCount;
+double throughput = endpointThrottleInKB == 0 ? Double.MAX_VALUE : 
endpointThrottleInKB * 1024.0;
+if (rateLimiter.getRate() != throughput)
+{
+logger.debug("Updating batchlog replay throttle to {} KB/s, {} 
KB/s per endpoint", throttleInKB, endpointThrottleInKB);
+rateLimiter.setRate(throughput);
+}
+}
+}
+
 // read less rows (batches) per page if they are very large
 static int calculatePageSize(ColumnFamilyStore store)
 {


[jira] [Commented] (CASSANDRA-13573) sstabledump doesn't print out tombstone information for frozen set collection

2017-07-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076149#comment-16076149
 ] 

ZhaoYang commented on CASSANDRA-13573:
--

There are a couple of issues in sstabledump:

1. frozen collections, as you reported

2. non-frozen UDTs


{quote}
Exception in thread "main" java.lang.ClassCastException: 
org.apache.cassandra.db.marshal.UserType cannot be cast to 
org.apache.cassandra.db.marshal.CollectionType
at 
org.apache.cassandra.tools.JsonTransformer.serializeCell(JsonTransformer.java:413)
at 
org.apache.cassandra.tools.JsonTransformer.serializeColumnData(JsonTransformer.java:396)
at 
org.apache.cassandra.tools.JsonTransformer.serializeRow(JsonTransformer.java:276)
at 
org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:210)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at 
org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:100)
at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:236)
{quote}

> sstabledump doesn't print out tombstone information for frozen set collection
> -
>
> Key: CASSANDRA-13573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13573
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefano Ortolani
>Assignee: ZhaoYang
>
> Schema and data:
> {noformat}
> CREATE TABLE ks.cf (
> hash blob,
> report_id timeuuid,
> subject_ids frozen<set<int>>,
> PRIMARY KEY (hash, report_id)
> ) WITH CLUSTERING ORDER BY (report_id DESC);
> INSERT INTO ks.cf (hash, report_id, subject_ids) VALUES (0x1213, now(), 
> {1,2,4,5});
> {noformat}
> sstabledump output is:
> {noformat}
> sstabledump mc-1-big-Data.db 
> [
>   {
> "partition" : {
>   "key" : [ "1213" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 16,
> "clustering" : [ "ec01eed0-49d9-11e7-b39a-97a96f529c02" ],
> "liveness_info" : { "tstamp" : "2017-06-05T10:29:57.434856Z" },
> "cells" : [
>   { "name" : "subject_ids", "value" : "" }
> ]
>   }
> ]
>   }
> ]
> {noformat}
> While the values are really there:
> {noformat}
> cqlsh:ks> select * from cf ;
>  hash   | report_id| subject_ids
> +--+-
>  0x1213 | 02bafff0-49d9-11e7-b39a-97a96f529c02 |   {1, 2, 4}
> {noformat}






[jira] [Updated] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-06 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13272:
---
Since Version: 2.2.0 beta 1

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.






[jira] [Commented] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-06 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076144#comment-16076144
 ] 

Benjamin Lerer commented on CASSANDRA-13272:


I had a look at the sources and I am not sure I understand the purpose of the 
original code. Everywhere else in {{StorageService}} the full chain of 
exceptions is logged.

If the wrapping exception is created without a message, it automatically uses 
the wrapped exception's {{toString()}} as its detail message. Given that, we 
should probably use:
{code}
@Override
public void onFailure(Throwable e)
{
    String message = "Error during bootstrap: " + e.getMessage();
    logger.error(message, e);
    progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.ERROR, 1, 1, message));
    progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.COMPLETE, 1, 1, "Resume bootstrap complete"));
}
{code}

[~yukim] Do you remember why you used the exception cause instead of the 
exception?
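For reference, {{Throwable(Throwable cause)}} without an explicit message sets the detail message to {{cause.toString()}}, so the wrapped exception's message is carried along automatically. A small standalone demonstration (plain Java, not Cassandra code):

```java
// Demonstrates how Java propagates messages when wrapping exceptions:
// the Throwable(Throwable cause) constructor uses cause.toString() as the
// detail message when no message is supplied.
public class WrapDemo
{
    public static void main(String[] args)
    {
        Throwable cause = new IllegalStateException("stream failed");

        // No message: getMessage() yields cause.toString(), i.e.
        // "java.lang.IllegalStateException: stream failed".
        RuntimeException withoutMessage = new RuntimeException(cause);
        System.out.println(withoutMessage.getMessage());

        // Explicit message: the prefix is kept and the cause is still
        // attached for the full chain in stack traces.
        RuntimeException withMessage =
            new RuntimeException("Error during bootstrap: " + cause.getMessage(), cause);
        System.out.println(withMessage.getMessage());
    }
}
```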

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.






[jira] [Commented] (CASSANDRA-9736) Add alter statement for MV

2017-07-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076117#comment-16076117
 ] 

ZhaoYang commented on CASSANDRA-9736:
-

Semantics for {{ALTER VIEW}}:

# support alter view
## if the view is a wildcard view, e.g. {{select *}}
### the view cannot add columns, since all columns are already selected
### the view can drop columns, e.g. changing from {{*}} to {{a,b}}, but this 
may confuse users later when a new column added to the base table is not 
included in the view
### view primary key columns cannot be renamed; please rename them on the base 
table instead
## if the view is not a wildcard view, e.g. {{select a,b,c}}
### the view can drop columns, except for the last remaining one
### the view can add columns, except for the base table's static columns
## a column restricted in the view (e.g. {{where a=2}}) cannot be dropped, 
because that would require rebuilding the view data; dropping and recreating 
the view is better
## support altering the view's {{table_options}}, e.g. compaction, etc.
## the view's primary key cannot be altered
# support dropping columns from the base table if the view doesn't select or 
restrict them

> Add alter statement for MV
> --
>
> Key: CASSANDRA-9736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9736
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Carl Yeksigian
>Assignee: ZhaoYang
>  Labels: materializedviews
> Fix For: 4.x
>
>
> {{ALTER MV}} would allow us to drop columns in the base table without first 
> dropping the materialized views, since we'd be able to later drop columns in 
> the MV.
> Also, we should be able to add new columns to the MV; a new builder would 
> have to run to copy the values for these additional columns.






[jira] [Commented] (CASSANDRA-13672) incremental repair prepare phase can cause nodetool to hang in some failure scenarios

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076082#comment-16076082
 ] 

Marcus Eriksson commented on CASSANDRA-13672:
-

+1

> incremental repair prepare phase can cause nodetool to hang in some failure 
> scenarios
> -
>
> Key: CASSANDRA-13672
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13672
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> Also doesn't log anything helpful






[jira] [Updated] (CASSANDRA-13672) incremental repair prepare phase can cause nodetool to hang in some failure scenarios

2017-07-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13672:

Status: Ready to Commit  (was: Patch Available)

> incremental repair prepare phase can cause nodetool to hang in some failure 
> scenarios
> -
>
> Key: CASSANDRA-13672
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13672
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> Also doesn't log anything helpful






[jira] [Updated] (CASSANDRA-13673) Incremental repair coordinator sometimes doesn't send commit messages

2017-07-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13673:

Status: Ready to Commit  (was: Patch Available)

> Incremental repair coordinator sometimes doesn't send commit messages
> -
>
> Key: CASSANDRA-13673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13673
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>







[jira] [Commented] (CASSANDRA-13673) Incremental repair coordinator sometimes doesn't send commit messages

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076079#comment-16076079
 ] 

Marcus Eriksson commented on CASSANDRA-13673:
-

+1

> Incremental repair coordinator sometimes doesn't send commit messages
> -
>
> Key: CASSANDRA-13673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13673
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>







[jira] [Commented] (CASSANDRA-13620) Don't skip corrupt sstables on startup

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076038#comment-16076038
 ] 

Marcus Eriksson commented on CASSANDRA-13620:
-

https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/116/
Only running for 3.0 so far; if the results look good I'll trigger runs for 
the other branches.

> Don't skip corrupt sstables on startup
> --
>
> Key: CASSANDRA-13620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13620
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If we get an IOException when opening an sstable on startup, we just 
> [skip|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L563-L567]
>  it and continue starting up.
> We should use the DiskFailurePolicy and never explicitly catch an 
> IOException here.






[jira] [Commented] (CASSANDRA-13664) RangeFetchMapCalculator should not try to optimise 'trivial' ranges

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076030#comment-16076030
 ] 

Marcus Eriksson commented on CASSANDRA-13664:
-

bq. isn't the issue here that the streams aren't weighted
Yes, that would be a nicer solution. It wasn't obvious to me how to do maximum 
bipartite matching with weighted edges, though, so I went with the easy solution 
(I guess having an edge for each token would be one way, but that would be quite 
silly). I also have to say I didn't spend very much time trying to figure it 
out, so if you have an idea, please let me know.
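The "easy solution" described here can be illustrated with a toy sketch (hypothetical names and threshold, not the actual RangeFetchMapCalculator): only the big ranges go through the balancing step, and the trivial ones are dealt out round-robin afterwards, so they cannot distort the balance.

```java
import java.util.*;

public class TrivialRangeSketch {
    // ranges are represented by their size in tokens; returns node -> assigned ranges
    static Map<String, List<Long>> assign(List<Long> ranges, List<String> nodes,
                                          long trivialThreshold) {
        Map<String, List<Long>> plan = new LinkedHashMap<>();
        for (String n : nodes) plan.put(n, new ArrayList<>());

        List<Long> big = new ArrayList<>(), trivial = new ArrayList<>();
        for (long r : ranges) (r <= trivialThreshold ? trivial : big).add(r);

        // balance only the big ranges: largest first, each to the least-loaded node
        big.sort(Comparator.reverseOrder());
        for (long r : big) {
            String least = Collections.min(nodes, Comparator.comparingLong(
                (String n) -> plan.get(n).stream().mapToLong(Long::longValue).sum()));
            plan.get(least).add(r);
        }
        // share the trivial ranges round-robin, ignoring their (tiny) sizes
        int i = 0;
        for (long r : trivial) plan.get(nodes.get(i++ % nodes.size())).add(r);
        return plan;
    }
}
```

The point of the split: with many token+1 ranges in the mix, a count-based optimiser would treat a one-token range the same as a huge one; excluding them keeps the optimisation focused on actual data volume.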

> RangeFetchMapCalculator should not try to optimise 'trivial' ranges
> ---
>
> Key: CASSANDRA-13664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13664
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> RangeFetchMapCalculator (CASSANDRA-4650) tries to make the number of streams 
> out of each node as even as possible.
> In a typical multi-dc ring the nodes in the DCs are set up using token + 1, 
> creating many tiny ranges. If we only try to optimise over the number of 
> streams, it is likely that the amount of data streamed out of each node is 
> unbalanced.
> We should ignore those trivial ranges and only optimise the big ones, then 
> share the tiny ones over the nodes.






[jira] [Commented] (CASSANDRA-13594) Use an ExecutorService for repair commands instead of new Thread(..).start()

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076012#comment-16076012
 ] 

Marcus Eriksson commented on CASSANDRA-13594:
-

https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/119/

> Use an ExecutorService for repair commands instead of new Thread(..).start()
> 
>
> Key: CASSANDRA-13594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently when starting a new repair, we create a new Thread and start it 
> immediately
> It would be nice to be able to 1) limit the number of threads and 2) reject 
> starting new repair commands if we are already running too many.
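A minimal sketch of what the ticket asks for (the limit of 4 is an assumption for illustration, not a value from the ticket): a bounded executor with a hand-off queue, so submissions beyond the limit are rejected rather than queued or given fresh unbounded threads.

```java
import java.util.concurrent.*;

public class RepairExecutorSketch {
    // assumption for illustration: at most 4 concurrent repairs, no backlog
    static final ThreadPoolExecutor REPAIRS = new ThreadPoolExecutor(
        0, 4, 60, TimeUnit.SECONDS,
        new SynchronousQueue<>(),              // hand-off: never queue pending repairs
        new ThreadPoolExecutor.AbortPolicy()); // reject instead of blocking

    static boolean submitRepair(Runnable repair) {
        try {
            REPAIRS.execute(repair);
            return true;
        } catch (RejectedExecutionException e) {
            return false; // surface "too many repairs running" to the caller
        }
    }
}
```

The SynchronousQueue is what makes requirement 2 work: with no queue capacity, a fifth concurrent submission hits the rejection handler immediately instead of waiting behind running repairs.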






[jira] [Commented] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-07-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076010#comment-16076010
 ] 

Marcus Eriksson commented on CASSANDRA-13583:
-

bq. Do I have it right?
yes

running dtests here: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/118/

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Comment Edited] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-07-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075925#comment-16075925
 ] 

ZhaoYang edited comment on CASSANDRA-13526 at 7/6/17 6:22 AM:
--

| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13526] | 
[dtest-source|https://github.com/riptano/cassandra-dtest/commits/CASSANDRA-13526]
 |
| [unit|https://circleci.com/gh/jasonstack/cassandra/106] | dtest: 
{{cql_tests.py:SlowQueryTester.local_query_test}}{{cql_tests.py:SlowQueryTester.remote_query_test}}[known|https://issues.apache.org/jira/browse/CASSANDRA-13592]
{{bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test}}[known|https://issues.apache.org/jira/browse/CASSANDRA-13576]
 |

When the node has joined the token ring but has no local ranges, cleanup will 
remove all local sstables of the base table.


was (Author: jasonstack):
| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13526] | 
[dtest-source|https://github.com/riptano/cassandra-dtest/commits/CASSANDRA-13526]
 |

when no local range && node has joined token ring,  clean up will remove all 
base local sstables.  

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]
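What the fix amounts to can be sketched like this (hypothetical names, not the actual CompactionManager code): an empty set of owned ranges on a node that has joined the ring means every local sstable of that keyspace is stale and should be removed, not silently kept.

```java
import java.util.*;

public class CleanupSketch {
    static String cleanup(Set<String> localRanges, boolean joinedRing,
                          List<String> sstables) {
        if (localRanges.isEmpty()) {
            if (!joinedRing)
                return "ABORTED";      // not yet a ring member: keep the data
            sstables.clear();          // no replicas here: all local data is stale
            return "REMOVED_ALL";
        }
        return "CLEANED";              // normal per-range cleanup path
    }
}
```

The joined-ring check matters: a node that hasn't joined yet legitimately owns no ranges, and wiping its data would be wrong.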






[jira] [Created] (CASSANDRA-13676) Some serializers depend on Stream-specific methods

2017-07-06 Thread Hao Zhong (JIRA)
Hao Zhong created CASSANDRA-13676:
-

 Summary: Some serializers depend on Stream-specific methods
 Key: CASSANDRA-13676
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13676
 Project: Cassandra
  Issue Type: Bug
Reporter: Hao Zhong


When fixing CASSANDRA-2382, Jonathan Ellis complained that some serializers did 
(do?) depend on Stream-specific methods. The buggy code is as follows:
{code}
public static class EstimatedHistogramSerializer implements 
ICompactSerializer
{
public void serialize(EstimatedHistogram eh, DataOutputStream dos) 
throws IOException
{
long[] offsets = eh.getBucketOffsets();
long[] buckets = eh.getBuckets(false);
dos.writeInt(buckets.length);
for (int i = 0; i < buckets.length; i++)
{
dos.writeLong(offsets[i == 0 ? 0 : i - 1]);
dos.writeLong(buckets[i]);
}
}

public EstimatedHistogram deserialize(DataInputStream dis) throws 
IOException
{
int size = dis.readInt();
long[] offsets = new long[size - 1];
long[] buckets = new long[size];

for (int i = 0; i < size; i++) {
offsets[i == 0 ? 0 : i - 1] = dis.readLong();
buckets[i] = dis.readLong();
}
return new EstimatedHistogram(offsets, buckets);
}
}
{code}
The fixed code is:
{code}
public static class EstimatedHistogramSerializer implements 
ICompactSerializer2
{
public void serialize(EstimatedHistogram eh, DataOutput dos) throws 
IOException
{
long[] offsets = eh.getBucketOffsets();
long[] buckets = eh.getBuckets(false);
dos.writeInt(buckets.length);
for (int i = 0; i < buckets.length; i++)
{
dos.writeLong(offsets[i == 0 ? 0 : i - 1]);
dos.writeLong(buckets[i]);
}
}

public EstimatedHistogram deserialize(DataInput dis) throws IOException
{
int size = dis.readInt();
long[] offsets = new long[size - 1];
long[] buckets = new long[size];

for (int i = 0; i < size; i++) {
offsets[i == 0 ? 0 : i - 1] = dis.readLong();
buckets[i] = dis.readLong();
}
return new EstimatedHistogram(offsets, buckets);
}
}
{code}
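For context, the reason the fix helps is that a serializer written against the {{DataInput}}/{{DataOutput}} interfaces, rather than the concrete stream classes, works with any implementation. A small illustrative example (not Cassandra code):

```java
import java.io.*;

public class InterfaceVsStreamExample {
    // written against the DataOutput interface, so any implementation works
    static void writePair(DataOutput out) throws IOException {
        out.writeInt(1);
        out.writeLong(42L);
    }

    static long readBack(DataInput in) throws IOException {
        int i = in.readInt();   // the int written first
        long l = in.readLong(); // then the long
        return i + l;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writePair(new DataOutputStream(bytes)); // one DataOutput implementation among many
        System.out.println(readBack(new DataInputStream(
            new ByteArrayInputStream(bytes.toByteArray()))));
    }
}
```

The same {{writePair}} could target a memory-mapped or checksummed writer without changes, which is exactly what a {{DataOutputStream}} parameter forbids.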
I notice that some serializers still depend on Stream-specific methods. For 
example, the IndexSummary_deserialize method has the following code:
{code}
 public IndexSummary deserialize(DataInputStream in, IPartitioner partitioner, 
int expectedMinIndexInterval, int maxIndexInterval) throws IOException
{
int minIndexInterval = in.readInt();
if (minIndexInterval != expectedMinIndexInterval)
{
throw new IOException(String.format("Cannot read index summary 
because min_index_interval changed from %d to %d.",
minIndexInterval, 
expectedMinIndexInterval));
}

int offsetCount = in.readInt();
long offheapSize = in.readLong();
int samplingLevel = in.readInt();
int fullSamplingSummarySize = in.readInt();

int effectiveIndexInterval = (int) Math.ceil((BASE_SAMPLING_LEVEL / 
(double) samplingLevel) * minIndexInterval);
if (effectiveIndexInterval > maxIndexInterval)
{
throw new IOException(String.format("Rebuilding index summary 
because the effective index interval (%d) is higher than" +
" the current max index 
interval (%d)", effectiveIndexInterval, maxIndexInterval));
}

Memory offsets = Memory.allocate(offsetCount * 4);
Memory entries = Memory.allocate(offheapSize - offsets.size());
try
{
FBUtilities.copy(in, new MemoryOutputStream(offsets), 
offsets.size());
FBUtilities.copy(in, new MemoryOutputStream(entries), 
entries.size());
}
catch (IOException ioe)
{
offsets.free();
entries.free();
throw ioe;
}
// our on-disk representation treats the offsets and the summary 
data as one contiguous structure,
// in which the offsets are based from the start of the structure. 
i.e., if the offsets occupy
// X bytes, the value of the first offset will be X. In memory we 
split the two regions up, so that
// the summary values are indexed from zero, so we apply a 
correction to the offsets when de/serializing.
// In this case subtracting X from each of the offsets.
for (int i = 0 ; i < offsets.size() ; 

[jira] [Updated] (CASSANDRA-13676) Some serializers depend on Stream-specific methods

2017-07-06 Thread Hao Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hao Zhong updated CASSANDRA-13676:
--
Description: 
When fixing CASSANDRA-2382, Jonathan Ellis complained that some serializers did 
(do?) depend on Stream-specific methods. The buggy code is as follows:
{code}
public static class EstimatedHistogramSerializer implements 
ICompactSerializer
{
public void serialize(EstimatedHistogram eh, DataOutputStream dos) 
throws IOException
{
long[] offsets = eh.getBucketOffsets();
long[] buckets = eh.getBuckets(false);
dos.writeInt(buckets.length);
for (int i = 0; i < buckets.length; i++)
{
dos.writeLong(offsets[i == 0 ? 0 : i - 1]);
dos.writeLong(buckets[i]);
}
}

public EstimatedHistogram deserialize(DataInputStream dis) throws 
IOException
{
int size = dis.readInt();
long[] offsets = new long[size - 1];
long[] buckets = new long[size];

for (int i = 0; i < size; i++) {
offsets[i == 0 ? 0 : i - 1] = dis.readLong();
buckets[i] = dis.readLong();
}
return new EstimatedHistogram(offsets, buckets);
}
}
{code}
The fixed code is:
{code}
public static class EstimatedHistogramSerializer implements 
ICompactSerializer2
{
public void serialize(EstimatedHistogram eh, DataOutput dos) throws 
IOException
{
long[] offsets = eh.getBucketOffsets();
long[] buckets = eh.getBuckets(false);
dos.writeInt(buckets.length);
for (int i = 0; i < buckets.length; i++)
{
dos.writeLong(offsets[i == 0 ? 0 : i - 1]);
dos.writeLong(buckets[i]);
}
}

public EstimatedHistogram deserialize(DataInput dis) throws IOException
{
int size = dis.readInt();
long[] offsets = new long[size - 1];
long[] buckets = new long[size];

for (int i = 0; i < size; i++) {
offsets[i == 0 ? 0 : i - 1] = dis.readLong();
buckets[i] = dis.readLong();
}
return new EstimatedHistogram(offsets, buckets);
}
}
{code}
I notice that some serializers still depend on Stream-specific methods. For 
example, the IndexSummary_deserialize method has the following code:
{code}
 public IndexSummary deserialize(DataInputStream in, IPartitioner partitioner, 
int expectedMinIndexInterval, int maxIndexInterval) throws IOException
{
int minIndexInterval = in.readInt();
if (minIndexInterval != expectedMinIndexInterval)
{
throw new IOException(String.format("Cannot read index summary 
because min_index_interval changed from %d to %d.",
minIndexInterval, 
expectedMinIndexInterval));
}

int offsetCount = in.readInt();
long offheapSize = in.readLong();
int samplingLevel = in.readInt();
int fullSamplingSummarySize = in.readInt();

int effectiveIndexInterval = (int) Math.ceil((BASE_SAMPLING_LEVEL / 
(double) samplingLevel) * minIndexInterval);
if (effectiveIndexInterval > maxIndexInterval)
{
throw new IOException(String.format("Rebuilding index summary 
because the effective index interval (%d) is higher than" +
" the current max index 
interval (%d)", effectiveIndexInterval, maxIndexInterval));
}

Memory offsets = Memory.allocate(offsetCount * 4);
Memory entries = Memory.allocate(offheapSize - offsets.size());
try
{
FBUtilities.copy(in, new MemoryOutputStream(offsets), 
offsets.size());
FBUtilities.copy(in, new MemoryOutputStream(entries), 
entries.size());
}
catch (IOException ioe)
{
offsets.free();
entries.free();
throw ioe;
}
// our on-disk representation treats the offsets and the summary 
data as one contiguous structure,
// in which the offsets are based from the start of the structure. 
i.e., if the offsets occupy
// X bytes, the value of the first offset will be X. In memory we 
split the two regions up, so that
// the summary values are indexed from zero, so we apply a 
correction to the offsets when de/serializing.
// In this case subtracting X from each of the offsets.
for (int i = 0 ; i < offsets.size() ; i += 4)
offsets.setInt(i, (int) (offsets.getInt(i) - offsets.size()));
return new 

[jira] [Updated] (CASSANDRA-13657) Materialized Views: Index MV on TTL'ed column produces orphanized view entry if another column keeps entry live

2017-07-06 Thread Krishna Dattu Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishna Dattu Koneru updated CASSANDRA-13657:
-
Labels: materializedviews ttl  (was: )
Status: Patch Available  (was: In Progress)

Patch for trunk. Will do patches for other branches if this looks okay.

> Materialized Views: Index MV on TTL'ed column produces orphanized view entry 
> if another column keeps entry live
> ---
>
> Key: CASSANDRA-13657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13657
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Fridtjof Sander
>Assignee: Krishna Dattu Koneru
>  Labels: materializedviews, ttl
>
> {noformat}
> CREATE TABLE t (k int, a int, b int, PRIMARY KEY (k));
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (a, k);
> INSERT INTO t (k) VALUES (1);
> UPDATE t USING TTL 5 SET a = 10 WHERE k = 1;
> UPDATE t SET b = 100 WHERE k = 1;
> SELECT * from t; SELECT * from mv;
>  k | a  | b
> ---++-
>  1 | 10 | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- 5 seconds later
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+-
>  1 | null | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- that view entry's liveness-info is (probably) dead, but the entry is kept 
> alive by b=100
> DELETE b FROM t WHERE k=1;
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+--
>  1 | null | null
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> DELETE FROM t WHERE k=1;
> cqlsh:test> SELECT * from t; SELECT * from mv;
>  k | a | b
> ---+---+---
> (0 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- deleting the base-entry doesn't help, because the view-key can not be 
> constructed anymore (a=10 already expired)
> {noformat}
> The problem here is that although the view-entry's liveness-info (probably) 
> expired correctly, a regular column (`b`) keeps the view-entry live. It should 
> have disappeared since its indexed column (`a`) expired in the corresponding 
> base-row. This is pretty severe, since that view-entry is now orphanized.






[jira] [Commented] (CASSANDRA-13657) Materialized Views: Index MV on TTL'ed column produces orphanized view entry if another column keeps entry live

2017-07-06 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075985#comment-16075985
 ] 

Krishna Dattu Koneru commented on CASSANDRA-13657:
--

This happens because of how row expiry works: a row exists as long as any of 
its cells (even non-PK cells) are live.
For example, take the table below (an MV is also a table, just maintained by 
Cassandra automatically):

 {code}
 CREATE TABLE t (k int, a int, b int, PRIMARY KEY (k,a));
 insert into t (k,a) VALUES (1,1) using ttl 10;
 update t using ttl 60 set b=1 where k=1 and a=1;

--- wait for 10 seconds.

select k,a,b,ttl(b) from t;

 k | a | b | ttl(b)
---+---+---+
 1 | 1 | 1 | 45

(1 rows)
 {code}

The row does not expire as {{b}} is still alive. 


This causes a problem for materialized views: a column from the base table 
expires, but the view row still exists. The view row should expire because the 
base row no longer matches the mandatory {{IS NOT NULL}} filter on PK columns.

I made a patch to make sure that non-primary-key columns don't outlive the view 
PK columns.


||Patch||Circleci||
|[trunk|https://github.com/apache/cassandra/compare/trunk...krishna-koneru:CASSANDRA-13657-trunk]|[test|https://circleci.com/gh/krishna-koneru/cassandra/24]|

Comments appreciated. 

(I am not sure if there is a way to fix already orphanized view entries.)
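The idea described above can be sketched as follows (hypothetical names, not the actual view-update code): when building a view update, every regular cell's expiration is capped at the expiration of the view's primary-key liveness, so no regular cell can outlive the key it is indexed under.

```java
public class ViewTtlCapSketch {
    // cap a regular cell's expiration time at the view PK's expiration time
    static long cappedExpiration(long cellExpiresAt, long viewPkExpiresAt) {
        return Math.min(cellExpiresAt, viewPkExpiresAt);
    }

    static boolean rowIsLive(long now, long... cellExpirations) {
        for (long e : cellExpirations)
            if (e > now) return true; // a row lives while any of its cells lives
        return false;
    }
}
```

In the {{k,a,b}} example above (PK liveness expiring at t=10, {{b}} at t=60), the uncapped row is still live at t=20, while the capped one has expired along with its key.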

> Materialized Views: Index MV on TTL'ed column produces orphanized view entry 
> if another column keeps entry live
> ---
>
> Key: CASSANDRA-13657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13657
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Fridtjof Sander
>Assignee: Krishna Dattu Koneru
>
> {noformat}
> CREATE TABLE t (k int, a int, b int, PRIMARY KEY (k));
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (a, k);
> INSERT INTO t (k) VALUES (1);
> UPDATE t USING TTL 5 SET a = 10 WHERE k = 1;
> UPDATE t SET b = 100 WHERE k = 1;
> SELECT * from t; SELECT * from mv;
>  k | a  | b
> ---++-
>  1 | 10 | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- 5 seconds later
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+-
>  1 | null | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- that view entry's liveness-info is (probably) dead, but the entry is kept 
> alive by b=100
> DELETE b FROM t WHERE k=1;
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+--
>  1 | null | null
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> DELETE FROM t WHERE k=1;
> cqlsh:test> SELECT * from t; SELECT * from mv;
>  k | a | b
> ---+---+---
> (0 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- deleting the base-entry doesn't help, because the view-key can not be 
> constructed anymore (a=10 already expired)
> {noformat}
> The problem here is that although the view-entry's liveness-info (probably) 
> expired correctly, a regular column (`b`) keeps the view-entry live. It should 
> have disappeared since its indexed column (`a`) expired in the corresponding 
> base-row. This is pretty severe, since that view-entry is now orphanized.


