[jira] [Updated] (CASSANDRA-11628) Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679

2016-04-21 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-11628:
-
Status: Patch Available  (was: Awaiting Feedback)

> Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679
> ---
>
> Key: CASSANDRA-11628
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11628
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Wei Deng
>Assignee: Wei Deng
>
> It appears that the commit from CASSANDRA-10679 accidentally cancelled out 
> the effect that was originally intended by CASSANDRA-3983. In this case, we 
> would like to address the following situation:
> When you already have a C* package installed (which deploys a file at 
> /usr/share/cassandra/cassandra.in.sh) but also attempt to run from a binary 
> download from http://cassandra.apache.org/download/, many tools like 
> cassandra-stress, sstablescrub, et al. will find the packaged 
> /usr/share/cassandra/cassandra.in.sh before searching the dir in your binary 
> download or source build. We should reverse the order of that search so it 
> checks locally first. Otherwise you will encounter an error like the 
> following:
> {noformat}
> root@node0:~/apache-cassandra-3.6-SNAPSHOT# tools/bin/cassandra-stress -h
> Error: Could not find or load main class org.apache.cassandra.stress.Stress
> {noformat}
> {noformat}
> root@node0:~/apache-cassandra-3.6-SNAPSHOT# bin/sstableverify -h
> Error: Could not find or load main class 
> org.apache.cassandra.tools.StandaloneVerifier
> {noformat}
> The goal for CASSANDRA-10679 is still a good one: "For the most part all of 
> our shell scripts do the same thing, load the cassandra.in.sh and then call 
> something out of a jar. They should all look the same." But in this case, we 
> should keep them all looking the same while making them check the local dir 
> first.
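The intended local-first lookup can be sketched as follows (illustration only, in Python; the real tools are POSIX shell scripts, and the candidate list here is an assumption, not the actual patch): return the first readable cassandra.in.sh, consulting local directories before the packaged path.

```python
import os

# Sketch of local-first lookup (assumed candidate list, not the actual patch):
# check the directories of the binary download / source build first, and fall
# back to the packaged /usr/share/cassandra location only when nothing local
# exists.
def find_include(local_dirs, packaged='/usr/share/cassandra/cassandra.in.sh'):
    candidates = [os.path.join(d, 'cassandra.in.sh') for d in local_dirs]
    candidates.append(packaged)  # packaged path is now checked last
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None
```

With this ordering, running tools/bin/cassandra-stress from an unpacked tarball picks up the tarball's own cassandra.in.sh even when the deb/rpm package is installed.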



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11628) Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679

2016-04-21 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-11628:
-
Reproduced In: 3.6
Since Version: 3.2
   Status: Awaiting Feedback  (was: In Progress)

> Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679
> ---
>
> Key: CASSANDRA-11628
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11628
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Wei Deng
>Assignee: Wei Deng
>





[jira] [Commented] (CASSANDRA-11479) BatchlogManager unit tests failing on truncate race condition

2016-04-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253216#comment-15253216
 ] 

Yuki Morishita commented on CASSANDRA-11479:


||branch||testall||dtest||
|[11479-2.2|https://github.com/yukim/cassandra/tree/11479-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-11479-2.2-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-11479-2.2-dtest/lastCompletedBuild/testReport/]|

I created a patch for 2.2 to see if this works.
Basically, I added one more condition to {{isCompacting}} to check whether the 
table is in {{compactingCF}}.
A table is added to {{compactingCF}} in {{submitBackground}} and removed at the 
end of {{BackgroundCompactionCandidate#run}}.
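That lifecycle can be modeled like this (a minimal sketch in Python rather than Java, with assumed names, not the committed patch): a table counts as compacting while it sits in {{compactingCF}}, even before any sstables are marked compacting.

```python
# Minimal model of the proposed check (names assumed, not the committed patch).
class CompactionTracker:
    def __init__(self):
        self.compacting_cf = set()  # tables with a queued/running background task

    def submit_background(self, table):
        # submitBackground adds the table before the compaction task starts
        self.compacting_cf.add(table)

    def finish_background(self, table):
        # removed at the end of BackgroundCompactionCandidate#run
        self.compacting_cf.discard(table)

    def is_compacting(self, table, has_compacting_sstables):
        # extra condition: a table is still "compacting" while it is queued
        return has_compacting_sstables or table in self.compacting_cf
```

This closes the race window where truncate sees no compacting sstables even though a background compaction has been submitted and is about to start.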

> BatchlogManager unit tests failing on truncate race condition
> -
>
> Key: CASSANDRA-11479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11479
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Joel Knighton
>Assignee: Yuki Morishita
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 
> TEST-org.apache.cassandra.batchlog.BatchlogManagerTest.log
>
>
> Example on CI 
> [here|http://cassci.datastax.com/job/trunk_testall/818/testReport/junit/org.apache.cassandra.batchlog/BatchlogManagerTest/testLegacyReplay_compression/].
>  This seems to have only started happening relatively recently (within the 
> last month or two).
> As far as I can tell, this is only showing up in BatchlogManagerTest purely 
> because it is an aggressive user of truncate. The assertion is hit in the 
> setUp method, so it can happen before any of the test methods. The assertion 
> occurs because a compaction is happening when truncate wants to discard 
> SSTables; trace level logs suggest that this compaction is submitted after 
> the pause on the CompactionStrategyManager.
> This should be reproducible by running BatchlogManagerTest in a loop - it 
> takes up to half an hour in my experience. A trace-level log from such a run 
> is attached - grep for my added log message "SSTABLES COMPACTING WHEN 
> DISCARDING" to find when the assert is hit.





[jira] [Commented] (CASSANDRA-11630) Make cython optional in pylib/setup.py

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253192#comment-15253192
 ] 

Stefania commented on CASSANDRA-11630:
--

This simple [patch|https://github.com/stef1927/cassandra/commits/11630-2.1] 
should do the trick if we need this; waiting to hear from [~mshuler].

> Make cython optional in pylib/setup.py
> --
>
> Key: CASSANDRA-11630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11630
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> When building deb packages, we currently run [this 
> line|https://github.com/apache/cassandra/blob/trunk/debian/rules#L33-L34]:
> {code}
> cd pylib && python setup.py install --no-compile --install-layout deb \
>   --root $(CURDIR)/debian/cassandra
> {code}
> Since CASSANDRA-11053 was introduced, this builds the Cython extensions 
> for _copyutil.py_.
> We should change _setup.py_ so that the Cython extensions are not built when 
> {{--no-compile}} is specified, in a similar way to what is done for the 
> Python driver.
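One way to make the extensions optional could look like the following (a sketch under assumptions; the actual patch lives in the branch linked in the comments and may differ): return no extension modules when {{--no-compile}} is passed or Cython is unavailable.

```python
import sys

# Hypothetical sketch for pylib/setup.py (not the committed patch): build the
# Cython extensions only when Cython is installed and --no-compile is absent.
def get_extensions(argv=None):
    argv = sys.argv if argv is None else argv
    if '--no-compile' in argv:
        return []  # deb build path: stay pure Python, arch-independent
    try:
        from Cython.Build import cythonize
    except ImportError:
        return []  # Cython not installed: the dependency stays optional
    return cythonize('cqlshlib/copyutil.py')
```

The returned list would then be passed as ext_modules to setup(), so the deb rules invocation quoted above no longer pulls in Cython.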





[jira] [Commented] (CASSANDRA-10853) deb package migration to dh_python2

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253177#comment-15253177
 ] 

Stefania commented on CASSANDRA-10853:
--

I've opened CASSANDRA-11630 to make the dependency on Cython optional and 
unnecessary when the {{--no-compile}} flag is passed to _setup.py_. 

The intention in CASSANDRA-11053 was to keep the package arch independent and 
let people build the Cython extensions manually for performance reasons (see 
this 
[blog|http://www.datastax.com/dev/blog/six-parameters-affecting-cqlsh-copy-from-performance]).
The performance improvement from compiling _copyutil.py_ is minimal (5%); what 
really helps is compiling the driver, which we don't do at the moment. However, 
this may well change in the future if we optimize _copyutil.py_ further. So let 
me know what you prefer: tackle the problem now and keep the extensions in the 
package, or make them optional.

> deb package migration to dh_python2
> ---
>
> Key: CASSANDRA-10853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10853
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
> Fix For: 3.0.x, 3.x
>
> Attachments: 10853_minimal_wip.txt
>
>
> I'm working on a deb job in jenkins, and I had forgotten to open a bug for 
> this. There is no urgent need, since {{python-support}} is in Jessie, but 
> this package is currently in transition to be removed.
> http://deb.li/dhs2p
> During deb build:
> {noformat}
> dh_pysupport: This program is deprecated, you should use dh_python2 instead. 
> Migration guide: http://deb.li/dhs2p
> {noformat}





[jira] [Updated] (CASSANDRA-11574) clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11574:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.0.6
   3.6
   2.2.6
   Status: Resolved  (was: Ready to Commit)

Note: the 2.1.15 release tag is not yet available, so leaving this as 2.1.x.

> clqsh: COPY FROM throws TypeError with Cython extensions enabled
> 
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.2.6, 3.6, 2.1.x, 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4.
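The quoted error is characteristic of calling a compiled function with keyword arguments: Cython-compiled functions, like CPython builtins, accept only positional arguments unless compiled otherwise. The builtin len() reproduces the same failure mode:

```python
# CPython builtins reject keyword arguments, just as the Cython-compiled
# get_num_processes() did when called as get_num_processes(cap=16).
def takes_no_keywords(f, **kwargs):
    try:
        f(**kwargs)
        return False
    except TypeError:
        return True

# len() works positionally but refuses a keyword argument.
assert len([1, 2, 3]) == 3
assert takes_no_keywords(len, obj=[1, 2, 3])
```

The committed fix accordingly switches the call site to pass the cap positionally, which works both interpreted and compiled.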





[jira] [Commented] (CASSANDRA-11574) clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253157#comment-15253157
 ] 

Stefania commented on CASSANDRA-11574:
--

Thanks everyone for your input and for the review Tyler.

I've created [CSTAR-505|https://datastax.jira.com/browse/CSTAR-505] (private) in 
order to test cqlsh COPY with the Cython extensions built.

I've created CASSANDRA-11630 in order to modify _pylib/setup.py_ to exclude 
Cython extensions during packaging. I don't think we need the extensions by 
default. This was never the intention of CASSANDRA-11053. The intention was for 
people to build them manually when needed for performance reasons, as explained 
in [this 
blog|http://www.datastax.com/dev/blog/six-parameters-affecting-cqlsh-copy-from-performance].

I've committed this patch to 2.1 as 666bee6125af04602a61d4781435598085a503d5 
and merged upwards.

> clqsh: COPY FROM throws TypeError with Cython extensions enabled
> 
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>





[04/10] cassandra git commit: clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread stefania
clqsh: COPY FROM throws TypeError with Cython extensions enabled

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11574


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/666bee61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/666bee61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/666bee61

Branch: refs/heads/trunk
Commit: 666bee6125af04602a61d4781435598085a503d5
Parents: c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 17:33:53 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:06 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73780de..2a93e9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 
 2.1.14

http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index b6e0cff..16bdf02 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -315,7 +315,7 @@ class CopyTask(object):
 copy_options['decimalsep'] = opts.pop('decimalsep', '.')
 copy_options['thousandssep'] = opts.pop('thousandssep', '')
 copy_options['boolstyle'] = [s.strip() for s in opts.pop('boolstyle', 
'True, False').split(',')]
-copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(cap=16)))
+copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(16)))
 copy_options['begintoken'] = opts.pop('begintoken', '')
 copy_options['endtoken'] = opts.pop('endtoken', '')
 copy_options['maxrows'] = int(opts.pop('maxrows', '-1'))



[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-21 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a18402ca
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a18402ca
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a18402ca

Branch: refs/heads/trunk
Commit: a18402ca089b3a94dd7d1b22c9b97bd85657e0cc
Parents: de1a96c f050133
Author: Stefania Alborghetti 
Authored: Fri Apr 22 09:44:37 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:44:37 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a18402ca/CHANGES.txt
--
diff --cc CHANGES.txt
index 0704c57,fbccb6c..0c86ff7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -75,14 -19,10 +75,15 @@@ Merged from 2.2
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  Merged from 2.1:
+  * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
   * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
  
 -3.0.5
 +3.5
 + * StaticTokenTreeBuilder should respect posibility of duplicate tokens 
(CASSANDRA-11525)
 + * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
 + * Avoid index segment stitching in RAM which lead to OOM on big SSTable 
files (CASSANDRA-11383)
 + * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)
 +Merged from 3.0:
   * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
   * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
   * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a18402ca/pylib/cqlshlib/copyutil.py
--



[02/10] cassandra git commit: clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread stefania
clqsh: COPY FROM throws TypeError with Cython extensions enabled

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11574


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/666bee61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/666bee61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/666bee61

Branch: refs/heads/cassandra-2.2
Commit: 666bee6125af04602a61d4781435598085a503d5
Parents: c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 17:33:53 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:06 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73780de..2a93e9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 
 2.1.14

http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index b6e0cff..16bdf02 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -315,7 +315,7 @@ class CopyTask(object):
 copy_options['decimalsep'] = opts.pop('decimalsep', '.')
 copy_options['thousandssep'] = opts.pop('thousandssep', '')
 copy_options['boolstyle'] = [s.strip() for s in opts.pop('boolstyle', 
'True, False').split(',')]
-copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(cap=16)))
+copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(16)))
 copy_options['begintoken'] = opts.pop('begintoken', '')
 copy_options['endtoken'] = opts.pop('endtoken', '')
 copy_options['maxrows'] = int(opts.pop('maxrows', '-1'))



[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-21 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f050133f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f050133f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f050133f

Branch: refs/heads/trunk
Commit: f050133f20d06bcc2947e436c0cf55e99c59481c
Parents: a4e1182 7713c82
Author: Stefania Alborghetti 
Authored: Fri Apr 22 09:44:19 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:44:19 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f050133f/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ec6aef,bd31b4b..fbccb6c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -18,26 -15,6 +18,27 @@@ Merged from 2.2
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +Merged from 2.1:
++ * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
 + * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 +
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
   * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f050133f/pylib/cqlshlib/copyutil.py
--



[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-21 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7713c821
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7713c821
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7713c821

Branch: refs/heads/trunk
Commit: 7713c821cab2bef0c6e2260f1e4c59ed5cde5237
Parents: c42f5f6 666bee6
Author: Stefania Alborghetti 
Authored: Fri Apr 22 09:42:57 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:57 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7713c821/CHANGES.txt
--
diff --cc CHANGES.txt
index d16f6f6,2a93e9a..bd31b4b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,16 +1,66 @@@
 -2.1.15
 +2.2.7
 + * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +  report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 +Merged from 2.1:
+  * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
   * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 -
 -2.1.14
   * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix 

[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-21 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7713c821
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7713c821
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7713c821

Branch: refs/heads/cassandra-3.0
Commit: 7713c821cab2bef0c6e2260f1e4c59ed5cde5237
Parents: c42f5f6 666bee6
Author: Stefania Alborghetti 
Authored: Fri Apr 22 09:42:57 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:57 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7713c821/CHANGES.txt
--
diff --cc CHANGES.txt
index d16f6f6,2a93e9a..bd31b4b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,16 +1,66 @@@
 -2.1.15
 +2.2.7
 + * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +  report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 +Merged from 2.1:
+  * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
   * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 -
 -2.1.14
   * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) 

[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-21 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f050133f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f050133f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f050133f

Branch: refs/heads/cassandra-3.0
Commit: f050133f20d06bcc2947e436c0cf55e99c59481c
Parents: a4e1182 7713c82
Author: Stefania Alborghetti 
Authored: Fri Apr 22 09:44:19 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:44:19 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f050133f/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ec6aef,bd31b4b..fbccb6c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -18,26 -15,6 +18,27 @@@ Merged from 2.2
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +Merged from 2.1:
++ * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
 + * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 +
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
   * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f050133f/pylib/cqlshlib/copyutil.py
--



[03/10] cassandra git commit: clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread stefania
clqsh: COPY FROM throws TypeError with Cython extensions enabled

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11574


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/666bee61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/666bee61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/666bee61

Branch: refs/heads/cassandra-3.0
Commit: 666bee6125af04602a61d4781435598085a503d5
Parents: c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 17:33:53 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:06 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73780de..2a93e9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 
 2.1.14

http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index b6e0cff..16bdf02 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -315,7 +315,7 @@ class CopyTask(object):
 copy_options['decimalsep'] = opts.pop('decimalsep', '.')
 copy_options['thousandssep'] = opts.pop('thousandssep', '')
 copy_options['boolstyle'] = [s.strip() for s in opts.pop('boolstyle', 
'True, False').split(',')]
-copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(cap=16)))
+copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(16)))
 copy_options['begintoken'] = opts.pop('begintoken', '')
 copy_options['endtoken'] = opts.pop('endtoken', '')
 copy_options['maxrows'] = int(opts.pop('maxrows', '-1'))
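
The one-line fix above works because Cython-compiled functions, like many CPython builtins, accept only positional arguments. A pure-Python way to see the same failure mode, using a builtin as a stand-in for the compiled `get_num_processes()`:

```python
# len() is implemented in C, so a keyword call fails the same way the
# compiled get_num_processes(cap=16) did after Cython compilation.
try:
    len(obj=[1, 2, 3])  # keyword call into a C-level function
except TypeError as err:
    message = str(err)  # e.g. "len() takes no keyword arguments"

print(len([1, 2, 3]))  # positional call works fine: prints 3
```

This is why the patch drops the `cap=` keyword and passes `16` positionally: the call then succeeds whether or not the Cython extension is loaded.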



[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-21 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7713c821
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7713c821
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7713c821

Branch: refs/heads/cassandra-2.2
Commit: 7713c821cab2bef0c6e2260f1e4c59ed5cde5237
Parents: c42f5f6 666bee6
Author: Stefania Alborghetti 
Authored: Fri Apr 22 09:42:57 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:57 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7713c821/CHANGES.txt
--
diff --cc CHANGES.txt
index d16f6f6,2a93e9a..bd31b4b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,16 +1,66 @@@
 -2.1.15
 +2.2.7
 + * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +  report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 +Merged from 2.1:
+  * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
   * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 -
 -2.1.14
   * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) 

[01/10] cassandra git commit: clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 c8914c072 -> 666bee612
  refs/heads/cassandra-2.2 c42f5f685 -> 7713c821c
  refs/heads/cassandra-3.0 a4e118281 -> f050133f2
  refs/heads/trunk de1a96c8d -> a18402ca0


clqsh: COPY FROM throws TypeError with Cython extensions enabled

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11574


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/666bee61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/666bee61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/666bee61

Branch: refs/heads/cassandra-2.1
Commit: 666bee6125af04602a61d4781435598085a503d5
Parents: c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 17:33:53 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Apr 22 09:42:06 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 73780de..2a93e9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * clqsh: COPY FROM throws TypeError with Cython extensions enabled 
(CASSANDRA-11574)
  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 
 2.1.14

http://git-wip-us.apache.org/repos/asf/cassandra/blob/666bee61/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index b6e0cff..16bdf02 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -315,7 +315,7 @@ class CopyTask(object):
 copy_options['decimalsep'] = opts.pop('decimalsep', '.')
 copy_options['thousandssep'] = opts.pop('thousandssep', '')
 copy_options['boolstyle'] = [s.strip() for s in opts.pop('boolstyle', 
'True, False').split(',')]
-copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(cap=16)))
+copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(16)))
 copy_options['begintoken'] = opts.pop('begintoken', '')
 copy_options['endtoken'] = opts.pop('endtoken', '')
 copy_options['maxrows'] = int(opts.pop('maxrows', '-1'))



[jira] [Updated] (CASSANDRA-11574) clqsh: COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11574:
-
Summary: clqsh: COPY FROM throws TypeError with Cython extensions enabled  
(was: COPY FROM throws TypeError with Cython extensions enabled)

> clqsh: COPY FROM throws TypeError with Cython extensions enabled
> 
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in the previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11574) COPY FROM throws TypeError with Cython extensions enabled

2016-04-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11574:
-
Summary: COPY FROM throws TypeError with Cython extensions enabled  (was: 
COPY FROM command in cqlsh throws error)

> COPY FROM throws TypeError with Cython extensions enabled
> -
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in the previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11630) Make cython optional in pylib/setup.py

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253143#comment-15253143
 ] 

Stefania commented on CASSANDRA-11630:
--

I should add that the intention of CASSANDRA-11053 was for people to build the 
Cython extensions manually, for performance reasons only, as explained in [this 
blog|http://www.datastax.com/dev/blog/six-parameters-affecting-cqlsh-copy-from-performance].

> Make cython optional in pylib/setup.py
> --
>
> Key: CASSANDRA-11630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11630
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> When building deb packages, we currently run [this 
> line|https://github.com/apache/cassandra/blob/trunk/debian/rules#L33-L34]:
> {code}
> cd pylib && python setup.py install --no-compile --install-layout deb \
>   --root $(CURDIR)/debian/cassandra
> {code}
> Since CASSANDRA-11053 was introduced, this will build the cython extensions 
> for _copyutil.py_.
> We should change _setup.py_ so that when we specify {{--no-compile}} then the 
> cython extensions are not built, in a similar way to what is done for the 
> Python driver.





[jira] [Created] (CASSANDRA-11630) Make cython optional in pylib/setup.py

2016-04-21 Thread Stefania (JIRA)
Stefania created CASSANDRA-11630:


 Summary: Make cython optional in pylib/setup.py
 Key: CASSANDRA-11630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11630
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Stefania
Assignee: Stefania
 Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x


When building deb packages, we currently run [this 
line|https://github.com/apache/cassandra/blob/trunk/debian/rules#L33-L34]:

{code}
cd pylib && python setup.py install --no-compile --install-layout deb \
--root $(CURDIR)/debian/cassandra
{code}

Since CASSANDRA-11053 was introduced, this will build the cython extensions for 
_copyutil.py_.

We should change _setup.py_ so that when we specify {{--no-compile}} then the 
cython extensions are not built, in a similar way to what is done for the 
Python driver.
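
One way such a guard could look (a sketch only; the `--no-cython` flag name and the detection logic are assumptions, not the eventual patch, which may key off `--no-compile` instead):

```python
# Sketch: decide whether to build the optional Cython speedups in
# pylib/setup.py. ext_modules would be populated only when this
# returns True; otherwise the pure-Python copyutil.py is used as-is.
def want_cython_extensions(argv):
    if '--no-cython' in argv:  # hypothetical opt-out flag
        return False
    try:
        import Cython  # noqa: F401 -- only probing availability
    except ImportError:
        return False
    return True

print(want_cython_extensions(['setup.py', '--no-cython']))  # False
```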





[jira] [Commented] (CASSANDRA-11547) Add background thread to check for clock drift

2016-04-21 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253057#comment-15253057
 ] 

Richard Low commented on CASSANDRA-11547:
-

Given how critical clocks are to Cassandra I think it is definitely Cassandra's 
business to report on this. It's not actually doing anything, just warning. 
You'd need to have a 5 minute GC pause for it to fire spuriously with the 
default.

> Add background thread to check for clock drift
> --
>
> Key: CASSANDRA-11547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11547
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: clocks, time
>
> The system clock has the potential to drift while a system is running. As a 
> simple way to check if this occurs, we can run a background thread that wakes 
> up every n seconds, reads the system clock, and checks to see if, indeed, n 
> seconds have passed. 
> * If the clock's current time is less than the last recorded time (captured n 
> seconds in the past), we know the clock has jumped backward.
> * If n seconds have not elapsed, we know the system clock is running slow or 
> has moved backward (by a value less than n)
> * If (n + a small offset) seconds have elapsed, we can assume we are within 
> an acceptable window of clock movement. Reasons for including an offset are 
> the clock checking thread might not have been scheduled on time, or garbage 
> collection, and so on.
> * If the clock is greater than (n + a small offset) seconds, we can assume 
> the clock jumped forward.
> In the unhappy cases, we can write a message to the log and increment some 
> metric that the user's monitoring systems can trigger/alert on.
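
The four cases described can be sketched as a pure classification function (names and thresholds are illustrative, not Cassandra's implementation):

```python
def classify_tick(prev, now, interval, tolerance):
    """Classify one wakeup of a drift-checking thread that sleeps
    `interval` seconds between reads of the system clock."""
    elapsed = now - prev
    if elapsed < 0:
        return 'jumped backward'     # current time before last reading
    if elapsed < interval:
        return 'slow or moved back'  # fewer than n seconds elapsed
    if elapsed <= interval + tolerance:
        return 'ok'                  # within the acceptable window
    return 'jumped forward'          # well past n + offset seconds

print(classify_tick(100.0, 99.0, interval=5.0, tolerance=0.5))   # jumped backward
print(classify_tick(100.0, 105.2, interval=5.0, tolerance=0.5))  # ok
```

In the unhappy branches the background thread would log and bump a metric, as the description suggests.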





[jira] [Comment Edited] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251530#comment-15251530
 ] 

Stefania edited comment on CASSANDRA-11574 at 4/22/16 12:06 AM:


You're welcome, thanks for helping us debug this.

-.pyc- .c and .so are actually Cython extensions and they must be deleted 
manually or else changing the .py has no effect. I had no idea they would be 
present in package deployments. CASSANDRA-11053, which was delivered in 3.0.5, 
modified _pylib/setup.py_ so that people can create these files manually for 
added performance by typing {{python setup.py build_ext --inplace}} in the 
pylib folder. They must be generated somewhere when the package is created. 
[~mshuler]: do you know how the packaging process uses _pylib/setup.py_?

The initial solution, {{copy_options\['numprocesses'\] = 
int(opts.pop('numprocesses', self.get_num_processes(16)))}} is actually 
correct, provided the -.pyc- .c and .so are regenerated by running {{python 
setup.py build_ext --inplace}} or, as you already pointed out, if they are 
removed (in which case the original code also works).

I will prepare a patch so that the Cython extensions also work.


was (Author: stefania):
You're welcome, thanks for helping us debug this.

.pyc and .so are actually Cython extensions and they must be deleted manually 
or else changing the .py has no effect. I had no idea they would be present in 
package deployments. CASSANDRA-11053, which was delivered in 3.0.5, modified 
_pylib/setup.py_ so that people can create these files manually for added 
performance by typing {{python setup.py build_ext --inplace}} in the pylib 
folder. They must be generated somewhere when the package is created. 
[~mshuler]: do you know how the packaging process uses _pylib/setup.py_?

The initial solution, {{copy_options\['numprocesses'\] = 
int(opts.pop('numprocesses', self.get_num_processes(16)))}} is actually 
correct, provided the .pyc and .so are regenerated by running {{python setup.py 
build_ext --inplace}} or, as you already pointed out, if they are removed (in 
which case the original code also works).

I will prepare a patch so that the Cython extensions also work.

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in the previous versions such as 3.0.4





[jira] [Resolved] (CASSANDRA-11162) upgrade tests failing with UDF configuration error

2016-04-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-11162.

Resolution: Not A Problem

> upgrade tests failing with UDF configuration error
> --
>
> Key: CASSANDRA-11162
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11162
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> Some of the upgrade dtests have a configuration error that causes them to 
> fail when creating a UDF. The error message is
> {code}
> User-defined-functions are disabled in cassandra.yaml - set 
> enable_user_defined_functions=true to enable if you are aware of the security 
> risks
> {code}
> Though it varies slightly between C* versions.
> Fixing this is probably a matter of setting this value in {{cassandra.yaml}} 
> on certain upgrade paths.
> The tests failing can be found here:
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD/parallel_upgrade_with_internode_ssl_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD/parallel_upgrade_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_RandomPartitioner_Skip_2_2_EndsAt_Trunk_HEAD/parallel_upgrade_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_Skip_2_2_EndsAt_Trunk_HEAD/parallel_upgrade_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_RandomPartitioner_Skip_2_2_EndsAt_Trunk_HEAD/parallel_upgrade_with_internode_ssl_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_Skip_2_2_EndsAt_Trunk_HEAD/parallel_upgrade_with_internode_ssl_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/parallel_upgrade_with_internode_ssl_test/
> http://cassci.datastax.com/job/upgrade_tests-all/2/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/parallel_upgrade_test/
> The tests' names are
> {code}
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD.parallel_upgrade_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_RandomPartitioner_Skip_2_2_EndsAt_Trunk_HEAD.parallel_upgrade_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_Skip_2_2_EndsAt_Trunk_HEAD.parallel_upgrade_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_RandomPartitioner_Skip_2_2_EndsAt_Trunk_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_Skip_2_2_EndsAt_Trunk_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD.parallel_upgrade_test
> {code}
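
The proposed fix amounts to merging one flag into the generated cassandra.yaml on the affected upgrade paths; a minimal sketch of that merge, treating the parsed YAML as a plain dict (the key name is taken from the error message above):

```python
def enable_udf(config):
    """Return a copy of a parsed cassandra.yaml mapping with
    user-defined functions switched on for the upgrade path."""
    patched = dict(config)
    patched['enable_user_defined_functions'] = True
    return patched

print(enable_udf({'cluster_name': 'test'}))
# {'cluster_name': 'test', 'enable_user_defined_functions': True}
```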





[jira] [Commented] (CASSANDRA-11162) upgrade tests failing with UDF configuration error

2016-04-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253039#comment-15253039
 ] 

Russ Hatch commented on CASSANDRA-11162:


Though I don't remember doing anything specific to fix these, the tests listed 
above have all been passing for more than a month, so I think we're good to 
close this.

> upgrade tests failing with UDF configuration error
> --
>
> Key: CASSANDRA-11162
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11162
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
>
> Some of the upgrade dtests have a configuration error that causes them to 
> fail when creating a UDF. The error message is
> {code}
> User-defined-functions are disabled in cassandra.yaml - set 
> enable_user_defined_functions=true to enable if you are aware of the security 
> risks
> {code}
> Though it varies slightly between C* versions.
> Fixing this is probably a matter of setting this value in {{cassandra.yaml}} 
> on certain upgrade paths.





[jira] [Assigned] (CASSANDRA-11162) upgrade tests failing with UDF configuration error

2016-04-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11162:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> upgrade tests failing with UDF configuration error
> --
>
> Key: CASSANDRA-11162
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11162
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> Some of the upgrade dtests have a configuration error that causes them to 
> fail when creating a UDF. The error message is
> {code}
> User-defined-functions are disabled in cassandra.yaml - set 
> enable_user_defined_functions=true to enable if you are aware of the security 
> risks
> {code}
> Though it varies slightly between C* versions.
> Fixing this is probably a matter of setting this value in {{cassandra.yaml}} 
> on certain upgrade paths.
> upgrade_tests.upgrade_through_versions_test.py:ProtoV3Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD.parallel_upgrade_test
> {code}
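A minimal sketch of the likely fix shape described above, assuming (hedged) that UDFs exist from Cassandra 2.2 onward and that `enable_user_defined_functions` defaults to false; the helper name and version tuples are illustrative, not part of the dtest codebase:

```python
# Sketch: compute cassandra.yaml overrides per version on an upgrade path.
# Assumption (hedged): UDFs exist from C* 2.2 on and default to disabled;
# older versions would reject the unknown yaml option, so omit it there.

def udf_config_overrides(version):
    """Return yaml overrides needed before UDF-using tests run."""
    if version >= (2, 2):
        return {'enable_user_defined_functions': True}
    return {}

# Example upgrade path 2.1.x -> 3.0.x (exact versions illustrative):
path = [(2, 1, 13), (3, 0, 5)]
overrides = [udf_config_overrides(v) for v in path]
```

Applied per node before each upgrade step, this keeps old versions untouched while enabling UDFs where the tests need them.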





[jira] [Resolved] (CASSANDRA-11496) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-04-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-11496.

Resolution: Cannot Reproduce

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy
> -
>
> Key: CASSANDRA-11496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11496
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_large_dtest/2/testReport/consistency_test/TestAvailability/test_network_topology_strategy
> Failed on CassCI build cassandra-2.1_large_dtest #2
> Configured for m3.2xlarge instance - OOM starting node9:
> {noformat}
> Error Message
> Error starting node9.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SEXEwJ
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 316, in 
> test_network_topology_strategy
> self._start_cluster()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 95, in 
> _start_cluster
> cluster.populate(nodes).start(wait_for_binary_proto=True, 
> wait_other_notice=True)
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node9.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SEXEwJ\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Output
> [node9 ERROR] java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.start(OutboundTcpConnectionPool.java:174)
>   at 
> org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:548)
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:557)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:706)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:675)
>   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:637)
>   at org.apache.cassandra.gms.Gossiper.doGossipToSeed(Gossiper.java:678)
>   at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:66)
>   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:178)
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
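The "unable to create new native thread" OOM above usually indicates the per-user thread/process limit (`ulimit -u`), not Java heap, is exhausted. A back-of-the-envelope sketch, with all constants illustrative rather than measured, of why nine nodes on one box can cross a common 1024 default:

```python
# Rough sketch: threads needed when N local nodes each keep outbound TCP
# connections to every peer. All constants here are illustrative guesses,
# not measured Cassandra numbers.

def estimated_threads(nodes, base_threads_per_node=120, conns_per_peer=2):
    """Estimate total native threads for N co-located nodes."""
    per_node = base_threads_per_node + conns_per_peer * (nodes - 1)
    return nodes * per_node

demand = estimated_threads(9)   # 9 * (120 + 2 * 8) = 1224
ulimit_nproc = 1024             # a common default per-user limit
over_limit = demand > ulimit_nproc
```

The point is only proportionality: per-node thread count grows with cluster size, so co-locating many nodes multiplies the demand past a fixed OS limit.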





[jira] [Commented] (CASSANDRA-11496) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-04-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252881#comment-15252881
 ] 

Russ Hatch commented on CASSANDRA-11496:


bulk run looks ok, 100 iterations with no recurrence. closing this.

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy
> -
>
> Key: CASSANDRA-11496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11496
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_large_dtest/2/testReport/consistency_test/TestAvailability/test_network_topology_strategy
> Failed on CassCI build cassandra-2.1_large_dtest #2
> Configured for m3.2xlarge instance - OOM starting node9:
> {noformat}
> Error Message
> Error starting node9.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SEXEwJ
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 316, in 
> test_network_topology_strategy
> self._start_cluster()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 95, in 
> _start_cluster
> cluster.populate(nodes).start(wait_for_binary_proto=True, 
> wait_other_notice=True)
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node9.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SEXEwJ\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Output
> [node9 ERROR] java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.start(OutboundTcpConnectionPool.java:174)
>   at 
> org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:548)
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:557)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:706)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:675)
>   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:637)
>   at org.apache.cassandra.gms.Gossiper.doGossipToSeed(Gossiper.java:678)
>   at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:66)
>   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:178)
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Updated] (CASSANDRA-11629) java.lang.UnsupportedOperationException when selecting rows with counters

2016-04-21 Thread Arnd Hannemann (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arnd Hannemann updated CASSANDRA-11629:
---
Labels: 3.0.5  (was: )

> java.lang.UnsupportedOperationException when selecting rows with counters
> -
>
> Key: CASSANDRA-11629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11629
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04 LTS
> Cassandra 3.0.5 Community Edition
>Reporter: Arnd Hannemann
>  Labels: 3.0.5
>
> When selecting a non-empty set of rows with counters, an exception occurs:
> {code}
> WARN  [SharedPool-Worker-2] 2016-04-21 23:47:47,542 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.5.jar:3.0.5]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.5.jar:3.0.5]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:172)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:202)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:169) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:619)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:258)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:246)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:236)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:127)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1792)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2445)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> ... 5 common frames omitted

[jira] [Created] (CASSANDRA-11629) java.lang.UnsupportedOperationException when selecting rows with counters

2016-04-21 Thread Arnd Hannemann (JIRA)
Arnd Hannemann created CASSANDRA-11629:
--

 Summary: java.lang.UnsupportedOperationException when selecting 
rows with counters
 Key: CASSANDRA-11629
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11629
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 16.04 LTS
Cassandra 3.0.5 Community Edition
Reporter: Arnd Hannemann


When selecting a non-empty set of rows with counters, an exception occurs:

{code}
WARN  [SharedPool-Worker-2] 2016-04-21 23:47:47,542 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.RuntimeException: java.lang.UnsupportedOperationException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.0.5.jar:3.0.5]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.5.jar:3.0.5]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.lang.UnsupportedOperationException: null
at 
org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:172)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:202)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:169) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:619)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:258)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:246)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:236)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:127)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:123)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1792)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2445)
 ~[apache-cassandra-3.0.5.jar:3.0.5]
... 5 common frames omitted

{code}


Steps to reproduce:
{code}
arnd@kallisto:~$ cqlsh
Warning: Timezone defined and 'pytz' module for timezone conversion not 
installed. Timestamps will be displayed in UTC timezone.

Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.

cqlsh>  CREATE KEYSPACE ks_counters WITH replication = {'class': 
'SimpleStrategy', 

[jira] [Updated] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11574:

Status: Ready to Commit  (was: Patch Available)

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252798#comment-15252798
 ] 

Tyler Hobbs commented on CASSANDRA-11574:
-

[~Stefania] +1 to the patch, and +1 on running the copy tests with and without 
Cython extensions enabled
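For background on the Cython angle (a general Python behavior, not the specifics of this patch): C- or Cython-compiled callables often accept only positional arguments, which matches the "takes no keyword arguments" error quoted above. A minimal, hypothetical fallback sketch, demonstrated with a built-in C function:

```python
# C-implemented callables (like many Cython-compiled ones) frequently
# reject keyword arguments; this hypothetical wrapper retries positionally.

def call_positionally(func, *args, **kwargs):
    """Call func; if it rejects keywords, retry with positional args."""
    try:
        return func(*args, **kwargs)
    except TypeError:
        # e.g. CPython raises "divmod() takes no keyword arguments"
        return func(*args, *kwargs.values())

# divmod is C-implemented and positional-only, so the keyword call
# fails and the wrapper falls back to divmod(7, 3):
result = call_positionally(divmod, 7, y=3)
```

The actual fix in the patch may differ; this only illustrates why the same code path can work under pure Python but fail once compiled.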

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Updated] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11574:

Reviewer: Tyler Hobbs

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Resolved] (CASSANDRA-11627) Streaming and other ops should filter out all LocalStrategy keyspaces, not just system keyspaces

2016-04-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-11627.
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.0.6
   3.6

Thanks for the quick turnaround, [~bdeggleston]!

The CI runs look good overall.  The unit tests that timed out don't appear 
related, and pass locally for me.  The failing dtests are either known 
failures, or appear related to recent cqlsh changes.

Committed as {{a4e1182816909761c98355b1079b2f9de8efc4bd}} to 3.0 and merged to 
trunk.

> Streaming and other ops should filter out all LocalStrategy keyspaces, not 
> just system keyspaces
> 
>
> Key: CASSANDRA-11627
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11627
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.6, 3.0.6
>
>
> Streaming operations currently filter out system keyspaces (at least, all 
> system keyspaces that use LocalStrategy), but they technically need to ignore 
> _all_ LocalStrategy keyspaces, not just system ones.  There are also a few 
> non-streaming operations that need to do the same thing: cleanup, key range 
> sampling, and nodetool status.
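The fix described above amounts to changing the filter predicate from "is this a system keyspace?" to "does this keyspace use LocalStrategy?". A minimal sketch, where the keyspace-to-strategy mapping is illustrative and not Cassandra's actual API:

```python
# Sketch of the predicate change described above: filter keyspaces by
# replication strategy instead of by a hard-coded system-keyspace list.
# The keyspace -> strategy mapping here is illustrative only.

SYSTEM_KEYSPACES = {'system', 'system_schema'}

keyspaces = {
    'system': 'LocalStrategy',         # system keyspace, local-only
    'my_local_ks': 'LocalStrategy',    # user-created LocalStrategy keyspace
    'app': 'NetworkTopologyStrategy',  # normal replicated keyspace
}

# Old, name-based filter: wrongly keeps 'my_local_ks' for streaming.
old_name_based = [n for n in keyspaces if n not in SYSTEM_KEYSPACES]

# Fixed, strategy-based filter: excludes every LocalStrategy keyspace.
def non_local_strategy_keyspaces(ks_map):
    return [n for n, strategy in ks_map.items()
            if strategy != 'LocalStrategy']

streamable = non_local_strategy_keyspaces(keyspaces)
```

The strategy-based predicate naturally covers cleanup, key range sampling, and nodetool status as well, since they need the same notion of "data that lives only on this node".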





[jira] [Comment Edited] (CASSANDRA-11496) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-04-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252698#comment-15252698
 ] 

Russ Hatch edited comment on CASSANDRA-11496 at 4/21/16 9:19 PM:
-

running a bulk job over here to see if this repros: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/80/


was (Author: rhatch):
running a bulk job over here to see if this repros: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/79/

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy
> -
>
> Key: CASSANDRA-11496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11496
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_large_dtest/2/testReport/consistency_test/TestAvailability/test_network_topology_strategy
> Failed on CassCI build cassandra-2.1_large_dtest #2
> Configured for m3.2xlarge instance - OOM starting node9:
> {noformat}
> Error Message
> Error starting node9.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SEXEwJ
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 316, in 
> test_network_topology_strategy
> self._start_cluster()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 95, in 
> _start_cluster
> cluster.populate(nodes).start(wait_for_binary_proto=True, 
> wait_other_notice=True)
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node9.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SEXEwJ\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Output
> [node9 ERROR] java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.start(OutboundTcpConnectionPool.java:174)
>   at 
> org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:548)
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:557)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:706)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:675)
>   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:637)
>   at org.apache.cassandra.gms.Gossiper.doGossipToSeed(Gossiper.java:678)
>   at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:66)
>   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:178)
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-21 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/StorageService.java
src/java/org/apache/cassandra/service/StorageServiceMBean.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de1a96c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de1a96c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de1a96c8

Branch: refs/heads/trunk
Commit: de1a96c8d401f706558e5841dfce5da85b408b26
Parents: 848352f a4e1182
Author: Tyler Hobbs 
Authored: Thu Apr 21 16:18:06 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 16:18:06 2016 -0500

--
 CHANGES.txt |  2 ++
 .../org/apache/cassandra/config/Schema.java | 13 ++
 src/java/org/apache/cassandra/db/Keyspace.java  |  5 
 .../cassandra/db/SizeEstimatesRecorder.java |  2 +-
 .../org/apache/cassandra/dht/BootStrapper.java  |  3 +--
 .../service/PendingRangeCalculatorService.java  |  6 +++--
 .../cassandra/service/StorageService.java   | 27 +++-
 .../cassandra/service/StorageServiceMBean.java  |  2 ++
 .../org/apache/cassandra/tools/NodeProbe.java   |  5 
 .../org/apache/cassandra/tools/NodeTool.java| 21 ---
 .../cassandra/tools/nodetool/Cleanup.java   |  2 +-
 .../apache/cassandra/tools/nodetool/Repair.java |  2 +-
 .../apache/cassandra/dht/BootStrapperTest.java  |  2 +-
 .../cassandra/locator/SimpleStrategyTest.java   |  4 +--
 .../service/LeaveAndBootstrapTest.java  |  4 +--
 .../org/apache/cassandra/service/MoveTest.java  |  4 +--
 16 files changed, 75 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de1a96c8/CHANGES.txt
--
diff --cc CHANGES.txt
index a0c7df6,0ec6aef..0704c57
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,61 -1,6 +1,63 @@@
 -3.0.6
 +3.6
 + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Add support to rebuild from specific range (CASSANDRA-10409)
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in nodetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not 

[1/2] cassandra git commit: Filter out all LocalStrat keyspaces for streaming

2016-04-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 848352fae -> de1a96c8d


Filter out all LocalStrat keyspaces for streaming

Patch by Tyler Hobbs; reviewed by Blake Eggleston for CASSANDRA-11627


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a4e11828
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a4e11828
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a4e11828

Branch: refs/heads/trunk
Commit: a4e1182816909761c98355b1079b2f9de8efc4bd
Parents: d8d920d
Author: Tyler Hobbs 
Authored: Thu Apr 21 16:15:50 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 16:15:50 2016 -0500

--
 CHANGES.txt |  2 ++
 .../org/apache/cassandra/config/Schema.java | 13 ++
 src/java/org/apache/cassandra/db/Keyspace.java  |  5 
 .../cassandra/db/SizeEstimatesRecorder.java |  2 +-
 .../org/apache/cassandra/dht/BootStrapper.java  |  3 +--
 .../service/PendingRangeCalculatorService.java  |  6 +++--
 .../cassandra/service/StorageService.java   | 27 +++-
 .../cassandra/service/StorageServiceMBean.java  |  2 ++
 .../org/apache/cassandra/tools/NodeProbe.java   |  5 
 .../org/apache/cassandra/tools/NodeTool.java| 21 ---
 .../cassandra/tools/nodetool/Cleanup.java   |  2 +-
 .../apache/cassandra/tools/nodetool/Repair.java |  2 +-
 .../apache/cassandra/dht/BootStrapperTest.java  |  2 +-
 .../cassandra/locator/SimpleStrategyTest.java   |  4 +--
 .../service/LeaveAndBootstrapTest.java  |  4 +--
 .../org/apache/cassandra/service/MoveTest.java  |  4 +--
 16 files changed, 75 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4e11828/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index eb2405c..0ec6aef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.6
+ * Ignore all LocalStrategy keyspaces for streaming and other related
+   operations (CASSANDRA-11627)
  * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
  * Only open one sstable scanner per sstable (CASSANDRA-11412)
  * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4e11828/src/java/org/apache/cassandra/config/Schema.java
--
diff --git a/src/java/org/apache/cassandra/config/Schema.java 
b/src/java/org/apache/cassandra/config/Schema.java
index 3fd9f11..03d8e8b 100644
--- a/src/java/org/apache/cassandra/config/Schema.java
+++ b/src/java/org/apache/cassandra/config/Schema.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.config;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.util.*;
+import java.util.stream.Collectors;
 
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableSet;
@@ -38,6 +39,7 @@ import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.UserType;
 import org.apache.cassandra.index.Index;
 import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.locator.LocalStrategy;
 import org.apache.cassandra.repair.SystemDistributedKeyspace;
 import org.apache.cassandra.schema.*;
 import org.apache.cassandra.service.MigrationManager;
@@ -348,6 +350,17 @@ public class Schema
 }
 
     /**
+     * @return a collection of keyspaces that do not use LocalStrategy for replication
+     */
+    public List<String> getNonLocalStrategyKeyspaces()
+    {
+        return keyspaces.values().stream()
+                        .filter(keyspace -> keyspace.params.replication.klass != LocalStrategy.class)
+                        .map(keyspace -> keyspace.name)
+                        .collect(Collectors.toList());
+    }
+
+    /**
      * @return collection of the user defined keyspaces
      */
     public List<String> getUserKeyspaces()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4e11828/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java 
b/src/java/org/apache/cassandra/db/Keyspace.java
index 2b62f0e..5783b41 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -621,6 +621,11 @@ public class Keyspace
         return Iterables.transform(Schema.instance.getNonSystemKeyspaces(), keyspaceTransformer);
     }
 
+    public static Iterable<Keyspace> nonLocalStrategy()
+    {
+        return Iterables.transform(Schema.instance.getNonLocalStrategyKeyspaces(), keyspaceTransformer);
+    }
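
Conceptually, the new getNonLocalStrategyKeyspaces() is a stream filter over keyspace metadata. The following is a self-contained sketch of that pattern; the KeyspaceMetadata and strategy types here are simplified stand-ins, not Cassandra's real classes:

```java
import java.util.*;
import java.util.stream.Collectors;

// Sketch of the filtering done by Schema#getNonLocalStrategyKeyspaces():
// keep only keyspaces whose replication strategy is not LocalStrategy.
// All types below are stand-ins for illustration.
public class KeyspaceFilterSketch
{
    interface ReplicationStrategy {}
    static class LocalStrategy implements ReplicationStrategy {}
    static class NetworkTopologyStrategy implements ReplicationStrategy {}

    static class KeyspaceMetadata
    {
        final String name;
        final Class<? extends ReplicationStrategy> klass;

        KeyspaceMetadata(String name, Class<? extends ReplicationStrategy> klass)
        {
            this.name = name;
            this.klass = klass;
        }
    }

    static List<String> nonLocalStrategyKeyspaces(Collection<KeyspaceMetadata> keyspaces)
    {
        return keyspaces.stream()
                        .filter(ks -> ks.klass != LocalStrategy.class)
                        .map(ks -> ks.name)
                        .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        List<KeyspaceMetadata> all = Arrays.asList(
            new KeyspaceMetadata("system", LocalStrategy.class),
            new KeyspaceMetadata("users", NetworkTopologyStrategy.class));
        System.out.println(nonLocalStrategyKeyspaces(all)); // prints [users]
    }
}
```

Because LocalStrategy keyspaces hold node-local data only, excluding them by strategy class (rather than by a hard-coded list of system keyspace names) covers user-created LocalStrategy keyspaces as well, which is the point of the ticket.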
+
 

cassandra git commit: Filter out all LocalStrat keyspaces for streaming

2016-04-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 d8d920dce -> a4e118281


Filter out all LocalStrat keyspaces for streaming

Patch by Tyler Hobbs; reviewed by Blake Eggleston for CASSANDRA-11627


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a4e11828
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a4e11828
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a4e11828

Branch: refs/heads/cassandra-3.0
Commit: a4e1182816909761c98355b1079b2f9de8efc4bd
Parents: d8d920d
Author: Tyler Hobbs 
Authored: Thu Apr 21 16:15:50 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 16:15:50 2016 -0500

--
 CHANGES.txt |  2 ++
 .../org/apache/cassandra/config/Schema.java | 13 ++
 src/java/org/apache/cassandra/db/Keyspace.java  |  5 
 .../cassandra/db/SizeEstimatesRecorder.java |  2 +-
 .../org/apache/cassandra/dht/BootStrapper.java  |  3 +--
 .../service/PendingRangeCalculatorService.java  |  6 +++--
 .../cassandra/service/StorageService.java   | 27 +++-
 .../cassandra/service/StorageServiceMBean.java  |  2 ++
 .../org/apache/cassandra/tools/NodeProbe.java   |  5 
 .../org/apache/cassandra/tools/NodeTool.java| 21 ---
 .../cassandra/tools/nodetool/Cleanup.java   |  2 +-
 .../apache/cassandra/tools/nodetool/Repair.java |  2 +-
 .../apache/cassandra/dht/BootStrapperTest.java  |  2 +-
 .../cassandra/locator/SimpleStrategyTest.java   |  4 +--
 .../service/LeaveAndBootstrapTest.java  |  4 +--
 .../org/apache/cassandra/service/MoveTest.java  |  4 +--
 16 files changed, 75 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4e11828/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index eb2405c..0ec6aef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.6
+ * Ignore all LocalStrategy keyspaces for streaming and other related
+   operations (CASSANDRA-11627)
  * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
  * Only open one sstable scanner per sstable (CASSANDRA-11412)
  * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4e11828/src/java/org/apache/cassandra/config/Schema.java
--
diff --git a/src/java/org/apache/cassandra/config/Schema.java 
b/src/java/org/apache/cassandra/config/Schema.java
index 3fd9f11..03d8e8b 100644
--- a/src/java/org/apache/cassandra/config/Schema.java
+++ b/src/java/org/apache/cassandra/config/Schema.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.config;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.util.*;
+import java.util.stream.Collectors;
 
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableSet;
@@ -38,6 +39,7 @@ import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.UserType;
 import org.apache.cassandra.index.Index;
 import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.locator.LocalStrategy;
 import org.apache.cassandra.repair.SystemDistributedKeyspace;
 import org.apache.cassandra.schema.*;
 import org.apache.cassandra.service.MigrationManager;
@@ -348,6 +350,17 @@ public class Schema
 }
 
     /**
+     * @return a collection of keyspaces that do not use LocalStrategy for replication
+     */
+    public List<String> getNonLocalStrategyKeyspaces()
+    {
+        return keyspaces.values().stream()
+                        .filter(keyspace -> keyspace.params.replication.klass != LocalStrategy.class)
+                        .map(keyspace -> keyspace.name)
+                        .collect(Collectors.toList());
+    }
+
+    /**
      * @return collection of the user defined keyspaces
      */
     public List<String> getUserKeyspaces()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a4e11828/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java 
b/src/java/org/apache/cassandra/db/Keyspace.java
index 2b62f0e..5783b41 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -621,6 +621,11 @@ public class Keyspace
         return Iterables.transform(Schema.instance.getNonSystemKeyspaces(), keyspaceTransformer);
     }
 
+    public static Iterable<Keyspace> nonLocalStrategy()
+    {
+        return Iterables.transform(Schema.instance.getNonLocalStrategyKeyspaces(), keyspaceTransformer);
+    }

[jira] [Commented] (CASSANDRA-11496) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-04-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252698#comment-15252698
 ] 

Russ Hatch commented on CASSANDRA-11496:


running a bulk job over here to see if this repros: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/79/

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy
> -
>
> Key: CASSANDRA-11496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11496
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_large_dtest/2/testReport/consistency_test/TestAvailability/test_network_topology_strategy
> Failed on CassCI build cassandra-2.1_large_dtest #2
> Configured for m3.2xlarge instance - OOM starting node9:
> {noformat}
> Error Message
> Error starting node9.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SEXEwJ
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 316, in 
> test_network_topology_strategy
> self._start_cluster()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 95, in 
> _start_cluster
> cluster.populate(nodes).start(wait_for_binary_proto=True, 
> wait_other_notice=True)
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node9.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SEXEwJ\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Output
> [node9 ERROR] java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.start(OutboundTcpConnectionPool.java:174)
>   at 
> org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:548)
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:557)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:706)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:675)
>   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:637)
>   at org.apache.cassandra.gms.Gossiper.doGossipToSeed(Gossiper.java:678)
>   at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:66)
>   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:178)
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11628) Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679

2016-04-21 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng reassigned CASSANDRA-11628:


Assignee: Wei Deng

> Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679
> ---
>
> Key: CASSANDRA-11628
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11628
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Wei Deng
>Assignee: Wei Deng
>
> It appears that the commit from CASSANDRA-10679 accidentally cancelled out 
> the effect that was originally intended by CASSANDRA-3983. In this case, we 
> would like to address the following situation:
> When you already have a C* package installed (which deploys a file as 
> /usr/share/cassandra/cassandra.in.sh) but also attempt to run from a binary 
> download from http://cassandra.apache.org/download/, many tools like 
> cassandra-stress, sstablescrub, et al. will search the packaged dir 
> (/usr/share/cassandra/cassandra.in.sh) for 'cassandra.in.sh' before searching 
> the dir in your binary download or source build. We should reverse the order 
> of that search so it checks locally first. Otherwise you will encounter an 
> error like the following:
> {noformat}
> root@node0:~/apache-cassandra-3.6-SNAPSHOT# tools/bin/cassandra-stress -h
> Error: Could not find or load main class org.apache.cassandra.stress.Stress
> {noformat}
> {noformat}
> root@node0:~/apache-cassandra-3.6-SNAPSHOT# bin/sstableverify -h
> Error: Could not find or load main class 
> org.apache.cassandra.tools.StandaloneVerifier
> {noformat}
> The goal for CASSANDRA-10679 is still a good one: "For the most part all of 
> our shell scripts do the same thing, load the cassandra.in.sh and then call 
> something out of a jar. They should all look the same." But in this case, we 
> should correct them all to look the same while making them check the local 
> dir first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11628) Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679

2016-04-21 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-11628:


 Summary: Fix the regression to CASSANDRA-3983 that got introduced 
by CASSANDRA-10679
 Key: CASSANDRA-11628
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11628
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Wei Deng


It appears that the commit from CASSANDRA-10679 accidentally cancelled out the 
effect that was originally intended by CASSANDRA-3983. In this case, we would 
like to address the following situation:

When you already have a C* package installed (which deploys a file as 
/usr/share/cassandra/cassandra.in.sh) but also attempt to run from a binary 
download from http://cassandra.apache.org/download/, many tools like 
cassandra-stress, sstablescrub, et al. will search the packaged dir 
(/usr/share/cassandra/cassandra.in.sh) for 'cassandra.in.sh' before searching 
the dir in your binary download or source build. We should reverse the order of 
that search so it checks locally first. Otherwise you will encounter an error 
like the following:

{noformat}
root@node0:~/apache-cassandra-3.6-SNAPSHOT# tools/bin/cassandra-stress -h
Error: Could not find or load main class org.apache.cassandra.stress.Stress
{noformat}

{noformat}
root@node0:~/apache-cassandra-3.6-SNAPSHOT# bin/sstableverify -h
Error: Could not find or load main class 
org.apache.cassandra.tools.StandaloneVerifier
{noformat}

The goal for CASSANDRA-10679 is still a good one: "For the most part all of our 
shell scripts do the same thing, load the cassandra.in.sh and then call 
something out of a jar. They should all look the same." But in this case, we 
should correct them all to look the same while making them check the local dir 
first.
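
A minimal sketch of the reversed lookup the ticket proposes: prefer a cassandra.in.sh next to the script (i.e. inside the binary download or source build) and only fall back to the system-wide packaged copy. The candidate path list below is illustrative, not the exact list in the shipped scripts:

```shell
#!/bin/sh
# Sketch of the proposed search order for cassandra.in.sh: local copies
# first, the packaged /usr/share/cassandra copy last. Paths other than
# the packaged one are illustrative assumptions.
for include in "$(dirname "$0")/cassandra.in.sh" \
               "$HOME/.cassandra.in.sh" \
               /usr/share/cassandra/cassandra.in.sh; do
    if [ -r "$include" ]; then
        . "$include"
        break
    fi
done
```

With this ordering, a tarball's tools/bin/cassandra-stress picks up the CLASSPATH from its own tree even when a package has installed /usr/share/cassandra/cassandra.in.sh, avoiding the "Could not find or load main class" errors above.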



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Ninja: use logger.trace() when we only intend to log at trace

2016-04-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk bc8a56d24 -> 848352fae


Ninja: use logger.trace() when we only intend to log at trace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c42f5f68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c42f5f68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c42f5f68

Branch: refs/heads/trunk
Commit: c42f5f68532a9e42e3f228b9dba5fdc9de5c37ca
Parents: 3244774
Author: Tyler Hobbs 
Authored: Thu Apr 21 15:31:40 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 15:31:40 2016 -0500

--
 src/java/org/apache/cassandra/gms/FailureDetector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c42f5f68/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index a0754b1..b9b7944 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -269,7 +269,7 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 
 if (logger.isTraceEnabled() && heartbeatWindow != null)
-logger.info("Average for {} is {}", ep, heartbeatWindow.mean());
+logger.trace("Average for {} is {}", ep, heartbeatWindow.mean());
 }
 
 public void interpret(InetAddress ep)
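
The one-line ninja fix above makes the log call's level agree with its isTraceEnabled() guard; otherwise the message escapes at INFO whenever tracing happens to be enabled. A standalone illustration of the pattern, using a minimal stand-in Logger rather than slf4j:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the guard/level agreement fixed in FailureDetector: a call
// wrapped in isTraceEnabled() should itself log at trace. The Logger
// below is a stand-in for illustration, not slf4j.
public class TraceGuardSketch
{
    static class Logger
    {
        final boolean traceEnabled;
        final List<String> infoMessages = new ArrayList<>();
        final List<String> traceMessages = new ArrayList<>();

        Logger(boolean traceEnabled) { this.traceEnabled = traceEnabled; }

        boolean isTraceEnabled() { return traceEnabled; }
        void info(String msg)  { infoMessages.add(msg); }
        void trace(String msg) { traceMessages.add(msg); }
    }

    static void recordMean(Logger logger, String ep, double mean)
    {
        // Guard and call level agree, as in the corrected code.
        if (logger.isTraceEnabled())
            logger.trace("Average for " + ep + " is " + mean);
    }

    public static void main(String[] args)
    {
        Logger logger = new Logger(true);
        recordMean(logger, "/10.0.0.1", 0.42);
        System.out.println(logger.traceMessages.size()); // prints 1
        System.out.println(logger.infoMessages.size());  // prints 0
    }
}
```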



[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-21 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/848352fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/848352fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/848352fa

Branch: refs/heads/trunk
Commit: 848352fae35df84d1d5bdf6c01f9fb925847a044
Parents: bc8a56d d8d920d
Author: Tyler Hobbs 
Authored: Thu Apr 21 15:33:33 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 15:33:33 2016 -0500

--
 src/java/org/apache/cassandra/gms/FailureDetector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/848352fa/src/java/org/apache/cassandra/gms/FailureDetector.java
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-21 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8d920dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8d920dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8d920dc

Branch: refs/heads/trunk
Commit: d8d920dce9044a064691ee30ef7de3bd4cad
Parents: caae987 c42f5f6
Author: Tyler Hobbs 
Authored: Thu Apr 21 15:32:51 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 15:32:51 2016 -0500

--
 src/java/org/apache/cassandra/gms/FailureDetector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[1/2] cassandra git commit: Ninja: use logger.trace() when we only intend to log at trace

2016-04-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 caae9870d -> d8d920dce


Ninja: use logger.trace() when we only intend to log at trace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c42f5f68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c42f5f68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c42f5f68

Branch: refs/heads/cassandra-3.0
Commit: c42f5f68532a9e42e3f228b9dba5fdc9de5c37ca
Parents: 3244774
Author: Tyler Hobbs 
Authored: Thu Apr 21 15:31:40 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 15:31:40 2016 -0500

--
 src/java/org/apache/cassandra/gms/FailureDetector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c42f5f68/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index a0754b1..b9b7944 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -269,7 +269,7 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 
 if (logger.isTraceEnabled() && heartbeatWindow != null)
-logger.info("Average for {} is {}", ep, heartbeatWindow.mean());
+logger.trace("Average for {} is {}", ep, heartbeatWindow.mean());
 }
 
 public void interpret(InetAddress ep)



[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-21 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8d920dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8d920dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8d920dc

Branch: refs/heads/cassandra-3.0
Commit: d8d920dce9044a064691ee30ef7de3bd4cad
Parents: caae987 c42f5f6
Author: Tyler Hobbs 
Authored: Thu Apr 21 15:32:51 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 15:32:51 2016 -0500

--
 src/java/org/apache/cassandra/gms/FailureDetector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




cassandra git commit: Ninja: use logger.trace() when we only intend to log at trace

2016-04-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 324477457 -> c42f5f685


Ninja: use logger.trace() when we only intend to log at trace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c42f5f68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c42f5f68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c42f5f68

Branch: refs/heads/cassandra-2.2
Commit: c42f5f68532a9e42e3f228b9dba5fdc9de5c37ca
Parents: 3244774
Author: Tyler Hobbs 
Authored: Thu Apr 21 15:31:40 2016 -0500
Committer: Tyler Hobbs 
Committed: Thu Apr 21 15:31:40 2016 -0500

--
 src/java/org/apache/cassandra/gms/FailureDetector.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c42f5f68/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index a0754b1..b9b7944 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -269,7 +269,7 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 
 if (logger.isTraceEnabled() && heartbeatWindow != null)
-logger.info("Average for {} is {}", ep, heartbeatWindow.mean());
+logger.trace("Average for {} is {}", ep, heartbeatWindow.mean());
 }
 
 public void interpret(InetAddress ep)



[jira] [Commented] (CASSANDRA-11627) Streaming and other ops should filter out all LocalStrategy keyspaces, not just system keyspaces

2016-04-21 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252582#comment-15252582
 ] 

Blake Eggleston commented on CASSANDRA-11627:
-

+1, pending successful CI runs

> Streaming and other ops should filter out all LocalStrategy keyspaces, not 
> just system keyspaces
> 
>
> Key: CASSANDRA-11627
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11627
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Streaming operations currently ignore system keyspaces (at least, all 
> system keyspaces that use LocalStrategy), but they technically need to ignore 
> _all_ LocalStrategy keyspaces, not just system ones.  There are also a few 
> non-streaming operations that need to do the same thing: cleanup, key range 
> sampling, and nodetool status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11627) Streaming and other ops should filter out all LocalStrategy keyspaces, not just system keyspaces

2016-04-21 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-11627:

Reviewer: Blake Eggleston

> Streaming and other ops should filter out all LocalStrategy keyspaces, not 
> just system keyspaces
> 
>
> Key: CASSANDRA-11627
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11627
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Streaming operations currently ignore system keyspaces (at least, all 
> system keyspaces that use LocalStrategy), but they technically need to ignore 
> _all_ LocalStrategy keyspaces, not just system ones.  There are also a few 
> non-streaming operations that need to do the same thing: cleanup, key range 
> sampling, and nodetool status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11627) Streaming and other ops should filter out all LocalStrategy keyspaces, not just system keyspaces

2016-04-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252485#comment-15252485
 ] 

Tyler Hobbs edited comment on CASSANDRA-11627 at 4/21/16 7:14 PM:
--

The patch is pretty straightforward.  There were still a few uses of 
{{getNonSystemKeyspaces()}} that made sense with the current behavior, so I 
simply added a new {{getNonLocalStrategyKeyspaces()}} method and switched most 
users over to that.

Patches and pending CI tests (the trunk patch only had minor conflicts when 
merging):
||branch||testall||dtest||
|[CASSANDRA-11627-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-11627-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11627-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11627-3.0-dtest]|
|[CASSANDRA-11627-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-11627-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11627-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11627-trunk-dtest]|


was (Author: thobbs):
The patch is pretty straightforward.  There were still a few uses of 
{{getNonSystemKeyspaces()}} that made sense with the current behavior, so I 
simply added a new {{getNonLocalStrategyKeyspaces()}} method and switched most 
users over to that.

Patches and pending CI tests (the trunk patch only had minor conflicts when 
merging):
||branch||testall||dtest||
|[CASSANDRA-11672-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-11672-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-3.0-dtest]|
|[CASSANDRA-11672-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-11672-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-trunk-dtest]|

> Streaming and other ops should filter out all LocalStrategy keyspaces, not 
> just system keyspaces
> 
>
> Key: CASSANDRA-11627
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11627
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Streaming operations currently ignore system keyspaces (at least, all 
> system keyspaces that use LocalStrategy), but they technically need to ignore 
> _all_ LocalStrategy keyspaces, not just system ones.  There are also a few 
> non-streaming operations that need to do the same thing: cleanup, key range 
> sampling, and nodetool status.
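For illustration, the change in filtering criterion can be sketched in plain Java. This is a stand-alone sketch, not Cassandra's real API: the {{Strategy}} enum and both method names here are illustrative stand-ins.

```java
import java.util.*;
import java.util.stream.Collectors;

// Stand-alone sketch: Strategy and the two filter methods are illustrative
// stand-ins, not Cassandra's actual types.
public class KeyspaceFilter
{
    public enum Strategy { LOCAL, SIMPLE, NETWORK_TOPOLOGY }

    // Old criterion: name-based, so a user-created LocalStrategy keyspace
    // slips through and gets picked up for streaming.
    public static List<String> nonSystemKeyspaces(Map<String, Strategy> keyspaces)
    {
        return keyspaces.keySet().stream()
                        .filter(name -> !name.startsWith("system"))
                        .collect(Collectors.toList());
    }

    // New criterion: strategy-based, excluding every LocalStrategy keyspace
    // regardless of its name.
    public static List<String> nonLocalStrategyKeyspaces(Map<String, Strategy> keyspaces)
    {
        return keyspaces.entrySet().stream()
                        .filter(e -> e.getValue() != Strategy.LOCAL)
                        .map(Map.Entry::getKey)
                        .collect(Collectors.toList());
    }
}
```

The strategy-based filter is the one streaming, cleanup, key range sampling, and nodetool status all want, since LocalStrategy data is never replicated.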



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11627) Streaming and other ops should filter out all LocalStrategy keyspaces, not just system keyspaces

2016-04-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252485#comment-15252485
 ] 

Tyler Hobbs commented on CASSANDRA-11627:
-

The patch is pretty straightforward.  There were still a few uses of 
{{getNonSystemKeyspaces()}} that made sense with the current behavior, so I 
simply added a new {{getNonLocalStrategyKeyspaces()}} method and switched most 
users over to that.

Patches and pending CI tests (the trunk patch only had minor conflicts when 
merging):
||branch||testall||dtest||
|[CASSANDRA-11672-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-11672-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-3.0-dtest]|
|[CASSANDRA-11672-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-11672-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11672-trunk-dtest]|

> Streaming and other ops should filter out all LocalStrategy keyspaces, not 
> just system keyspaces
> 
>
> Key: CASSANDRA-11627
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11627
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Streaming operations currently ignore system keyspaces (at least, all 
> system keyspaces that use LocalStrategy), but they technically need to ignore 
> _all_ LocalStrategy keyspaces, not just system ones.  There are also a few 
> non-streaming operations that need to do the same thing: cleanup, key range 
> sampling, and nodetool status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11496) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy

2016-04-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11496:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy
> -
>
> Key: CASSANDRA-11496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11496
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_large_dtest/2/testReport/consistency_test/TestAvailability/test_network_topology_strategy
> Failed on CassCI build cassandra-2.1_large_dtest #2
> Configured for m3.2xlarge instance - OOM starting node9:
> {noformat}
> Error Message
> Error starting node9.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SEXEwJ
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 316, in 
> test_network_topology_strategy
> self._start_cluster()
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 95, in 
> _start_cluster
> cluster.populate(nodes).start(wait_for_binary_proto=True, 
> wait_other_notice=True)
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node9.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SEXEwJ\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Output
> [node9 ERROR] java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.start(OutboundTcpConnectionPool.java:174)
>   at 
> org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:548)
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:557)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:706)
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:675)
>   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:637)
>   at org.apache.cassandra.gms.Gossiper.doGossipToSeed(Gossiper.java:678)
>   at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:66)
>   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:178)
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11627) Streaming and other ops should filter out all LocalStrategy keyspaces, not just system keyspaces

2016-04-21 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-11627:
---

 Summary: Streaming and other ops should filter out all 
LocalStrategy keyspaces, not just system keyspaces
 Key: CASSANDRA-11627
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11627
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 3.0.x, 3.x


Streaming operations currently ignore system keyspaces (at least, all 
system keyspaces that use LocalStrategy), but they technically need to ignore 
_all_ LocalStrategy keyspaces, not just system ones.  There are also a few 
non-streaming operations that need to do the same thing: cleanup, key range 
sampling, and nodetool status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252426#comment-15252426
 ] 

Alex Petrov commented on CASSANDRA-11621:
-

Yup, it was opened, I just forgot to pin it here: 
https://github.com/riptano/cassandra-dtest/pull/945

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
> Fix For: 2.2.7
>
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> 

[jira] [Commented] (CASSANDRA-11199) rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to start

2016-04-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252423#comment-15252423
 ] 

Russ Hatch commented on CASSANDRA-11199:


looks like it's been about 6 weeks since the last failure, so is this ok to 
close [~mambocab]?

> rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to 
> start
> ---
>
> Key: CASSANDRA-11199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11199
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
>
> Here's an example of this failure:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/junit/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/
> And here are the two particular test I've seen flap:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
> I haven't reproduced this locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11199) rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to start

2016-04-21 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11199:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> rolling_upgrade_with_internode_ssl_test flaps, timing out waiting for node to 
> start
> ---
>
> Key: CASSANDRA-11199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11199
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> Here's an example of this failure:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/junit/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/
> And here are the two particular test I've seen flap:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_with_internode_ssl_test/history/
> I haven't reproduced this locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11579) remove DatabaseDescriptor dependency from SequentialWriter

2016-04-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252357#comment-15252357
 ] 

Yuki Morishita commented on CASSANDRA-11579:


Updated the branch and re-ran the tests.

* Renamed methods/class in {{SequentialWriterOption}}
* Moved the option to a constant in 
{{ChecksummedSequentialWriter}}/{{OnDiskIndexBuilder}}
* Changed {{SequentialWriter}} to hold a reference to 
{{SequentialWriterOption}} instead of copying its values.

I kept {{Optional}} for {{digestFile}} since the value can be null.

{{SSTablesIteratedTest}} failed in testall, but the test runs fine locally.

bq. Regarding your comment for trickleFsync in the description, I don't see the 
actual change. Can you give me a heads-up?

Right now, when {{trickleFsync}} is on, all writes that go through 
{{SequentialWriter}}, including small ones like CRC files, trickle as well. But 
CRC writes are small, so they don't need {{trickleFsync}}, and I left out 
setting {{trickleFsync}} in the new {{SequentialWriterOption}} in those 
situations.
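The general shape of such an option holder can be sketched as an immutable value object built once and shared, with {{trickleFsync}} off by default so small writers like CRC files never opt in implicitly, and an {{Optional}} for the possibly-absent digest file. This is a hypothetical sketch: the class, field, and method names below are illustrative, not the actual {{SequentialWriterOption}} API.

```java
import java.util.Optional;

// Hypothetical sketch of an immutable writer-option holder in the spirit of
// SequentialWriterOption; names are illustrative, not Cassandra's real API.
public class WriterOption
{
    private final int bufferSize;
    private final boolean trickleFsync;
    private final Optional<String> digestFile; // legitimately absent for some writers

    private WriterOption(int bufferSize, boolean trickleFsync, Optional<String> digestFile)
    {
        this.bufferSize = bufferSize;
        this.trickleFsync = trickleFsync;
        this.digestFile = digestFile;
    }

    public static Builder newBuilder() { return new Builder(); }

    public int bufferSize()              { return bufferSize; }
    public boolean trickleFsync()        { return trickleFsync; }
    public Optional<String> digestFile() { return digestFile; }

    public static final class Builder
    {
        private int bufferSize = 64 * 1024;
        // Off by default: small writers (e.g. CRC files) should not trickle.
        private boolean trickleFsync = false;
        private Optional<String> digestFile = Optional.empty();

        public Builder bufferSize(int size)          { this.bufferSize = size; return this; }
        public Builder trickleFsync(boolean enabled) { this.trickleFsync = enabled; return this; }
        public Builder digestFile(String path)       { this.digestFile = Optional.ofNullable(path); return this; }

        public WriterOption build()
        {
            return new WriterOption(bufferSize, trickleFsync, digestFile);
        }
    }
}
```

Holding configuration in a value object like this, rather than reading a global descriptor, is what makes the writer reusable outside the server.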


> remove DatabaseDescriptor dependency from SequentialWriter
> --
>
> Key: CASSANDRA-11579
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11579
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
>
> {{SequentialWriter}} and its subclasses are widely used in Cassandra, mainly 
> from SSTable code. Removing the dependency on {{DatabaseDescriptor}} improves 
> the reusability of this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Carlos Rolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252348#comment-15252348
 ] 

Carlos Rolo commented on CASSANDRA-11574:
-

Same issue in 3.5, available patch fixes it.

+1 to this!

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11621:

   Resolution: Fixed
Fix Version/s: (was: 2.2.x)
   2.2.7
   Status: Resolved  (was: Patch Available)

Thanks, the dtest failures look unrelated and are passing locally for me. 
Committed to 2.2 in {{3244774572c56400ed96da4d57912779878c16e5}} and merged to 
3.0 & trunk with {{-s ours}}.
Could you open a dtest PR for the new test, please?


> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
> Fix For: 2.2.7
>
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  

[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252316#comment-15252316
 ] 

Michael Shuler commented on CASSANDRA-11574:


The deb rules build the .so with a patch I am working on for CASSANDRA-10853, 
since cython is a new dependency. I assume Jake happens to have cython 
installed; otherwise the deb build exits.
https://github.com/apache/cassandra/blob/trunk/debian/rules#L33-L34

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-21 Thread samt
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/caae9870
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/caae9870
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/caae9870

Branch: refs/heads/trunk
Commit: caae9870dd19c38251d14a3bcb7b5b2b9839ff79
Parents: bb68078 3244774
Author: Sam Tunnicliffe 
Authored: Thu Apr 21 18:42:43 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Thu Apr 21 18:42:43 2016 +0100

--

--




[1/6] cassandra git commit: Avoid possible stack overflow in ModificationStatement::getFunctions

2016-04-21 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 b80ff541e -> 324477457
  refs/heads/cassandra-3.0 bb68078e7 -> caae9870d
  refs/heads/trunk ef5bbedd6 -> bc8a56d24


Avoid possible stack overflow in ModificationStatement::getFunctions

Patch by Alex Petrov; reviewed by Sam Tunnicliffe for CASSANDRA-11621


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32447745
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32447745
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32447745

Branch: refs/heads/cassandra-2.2
Commit: 3244774572c56400ed96da4d57912779878c16e5
Parents: b80ff54
Author: Alex Petrov 
Authored: Thu Apr 21 11:46:09 2016 +0200
Committer: Sam Tunnicliffe 
Committed: Thu Apr 21 18:41:43 2016 +0100

--
 CHANGES.txt|  1 +
 .../cql3/statements/ModificationStatement.java | 13 ++---
 2 files changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32447745/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e51e6d2..d16f6f6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621)
 * cqlsh: COPY FROM should use regular inserts for single statement batches and
   report errors correctly if workers processes crash on initialization (CASSANDRA-11474)
 * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32447745/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index fbdfc0c..059d113 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -99,24 +99,23 @@ public abstract class ModificationStatement implements CQLStatement
 
     public Iterable<Function> getFunctions()
     {
-        Iterable<Function> functions = attrs.getFunctions();
-
+        List<Iterable<Function>> iterables = new LinkedList<>();
         for (Restriction restriction : processedKeys.values())
-            functions = Iterables.concat(functions, restriction.getFunctions());
+            iterables.add(restriction.getFunctions());
 
         if (columnOperations != null)
             for (Operation operation : columnOperations)
-                functions = Iterables.concat(functions, operation.getFunctions());
+                iterables.add(operation.getFunctions());
 
         if (columnConditions != null)
             for (ColumnCondition condition : columnConditions)
-                functions = Iterables.concat(functions, condition.getFunctions());
+                iterables.add(condition.getFunctions());
 
         if (staticConditions != null)
             for (ColumnCondition condition : staticConditions)
-                functions = Iterables.concat(functions, condition.getFunctions());
+                iterables.add(condition.getFunctions());
 
-        return functions;
+        return Iterables.concat(iterables);
     }
 
     public abstract boolean requireFullClusteringKey();
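The reason this patch works: each {{Iterables.concat(a, b)}} call returns a lazy wrapper whose iterator delegates to its inputs, so concatenating inside a loop builds a chain whose {{hasNext()}} recurses once per wrapped level — with ~4000 columns, deep enough to overflow the stack. Collecting the pieces into a list and concatenating once keeps the delegation depth constant. A self-contained sketch of both shapes; the {{concat}} here is a stdlib stand-in for Guava's, written to show the same delegation behavior:

```java
import java.util.*;

// Stand-alone sketch of the CASSANDRA-11621 fix. This concat() is a stdlib
// stand-in for Guava's Iterables.concat: it returns a lazy wrapper whose
// iterator delegates to the iterators of its inputs.
public class ConcatDepth
{
    static <T> Iterable<T> concat(Iterable<? extends Iterable<T>> parts)
    {
        return () -> new Iterator<T>()
        {
            private final Iterator<? extends Iterable<T>> outer = parts.iterator();
            private Iterator<T> inner = Collections.emptyIterator();

            public boolean hasNext()
            {
                // Delegates into inner iterators; nesting wrappers inside
                // wrappers therefore makes this call recurse per level.
                while (!inner.hasNext() && outer.hasNext())
                    inner = outer.next().iterator();
                return inner.hasNext();
            }

            public T next()
            {
                if (!hasNext())
                    throw new NoSuchElementException();
                return inner.next();
            }
        };
    }

    // Pre-patch shape: one nested wrapper per element, so iterating an
    // n-element result needs an n-deep hasNext() chain (a stack overflow
    // for large n, e.g. ~4000 columns).
    static Iterable<Integer> nested(int n)
    {
        Iterable<Integer> result = Collections.emptyList();
        for (int i = 0; i < n; i++)
            result = concat(Arrays.<Iterable<Integer>>asList(result, Arrays.asList(i)));
        return result;
    }

    // Post-patch shape: collect all the pieces first, concat exactly once;
    // iteration depth stays constant no matter how many pieces there are.
    static Iterable<Integer> flat(int n)
    {
        List<Iterable<Integer>> parts = new LinkedList<>();
        for (int i = 0; i < n; i++)
            parts.add(Arrays.asList(i));
        return concat(parts);
    }
}
```

Both shapes yield the same elements; only the iteration depth differs, which is why the change is behavior-preserving for correct queries while fixing the overflow.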



[2/6] cassandra git commit: Avoid possible stack overflow in ModificationStatement::getFunctions

2016-04-21 Thread samt
Avoid possible stack overflow in ModificationStatement::getFunctions

Patch by Alex Petrov; reviewed by Sam Tunnicliffe for CASSANDRA-11621


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32447745
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32447745
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32447745

Branch: refs/heads/cassandra-3.0
Commit: 3244774572c56400ed96da4d57912779878c16e5
Parents: b80ff54
Author: Alex Petrov 
Authored: Thu Apr 21 11:46:09 2016 +0200
Committer: Sam Tunnicliffe 
Committed: Thu Apr 21 18:41:43 2016 +0100

--
 CHANGES.txt|  1 +
 .../cql3/statements/ModificationStatement.java | 13 ++---
 2 files changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32447745/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e51e6d2..d16f6f6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621)
 * cqlsh: COPY FROM should use regular inserts for single statement batches and
   report errors correctly if workers processes crash on initialization (CASSANDRA-11474)
 * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32447745/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index fbdfc0c..059d113 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -99,24 +99,23 @@ public abstract class ModificationStatement implements CQLStatement
 
     public Iterable<Function> getFunctions()
     {
-        Iterable<Function> functions = attrs.getFunctions();
-
+        List<Iterable<Function>> iterables = new LinkedList<>();
         for (Restriction restriction : processedKeys.values())
-            functions = Iterables.concat(functions, restriction.getFunctions());
+            iterables.add(restriction.getFunctions());
 
         if (columnOperations != null)
             for (Operation operation : columnOperations)
-                functions = Iterables.concat(functions, operation.getFunctions());
+                iterables.add(operation.getFunctions());
 
         if (columnConditions != null)
             for (ColumnCondition condition : columnConditions)
-                functions = Iterables.concat(functions, condition.getFunctions());
+                iterables.add(condition.getFunctions());
 
         if (staticConditions != null)
             for (ColumnCondition condition : staticConditions)
-                functions = Iterables.concat(functions, condition.getFunctions());
+                iterables.add(condition.getFunctions());
 
-        return functions;
+        return Iterables.concat(iterables);
    }
 
     public abstract boolean requireFullClusteringKey();



[3/6] cassandra git commit: Avoid possible stack overflow in ModificationStatement::getFunctions

2016-04-21 Thread samt
Avoid possible stack overflow in ModificationStatement::getFunctions

Patch by Alex Petrov; reviewed by Sam Tunnicliffe for CASSANDRA-11621


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32447745
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32447745
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32447745

Branch: refs/heads/trunk
Commit: 3244774572c56400ed96da4d57912779878c16e5
Parents: b80ff54
Author: Alex Petrov 
Authored: Thu Apr 21 11:46:09 2016 +0200
Committer: Sam Tunnicliffe 
Committed: Thu Apr 21 18:41:43 2016 +0100

--
 CHANGES.txt|  1 +
 .../cql3/statements/ModificationStatement.java | 13 ++---
 2 files changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32447745/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e51e6d2..d16f6f6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621)
  * cqlsh: COPY FROM should use regular inserts for single statement batches and
    report errors correctly if worker processes crash on initialization (CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32447745/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index fbdfc0c..059d113 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -99,24 +99,23 @@ public abstract class ModificationStatement implements CQLStatement
 
     public Iterable<Function> getFunctions()
     {
-        Iterable<Function> functions = attrs.getFunctions();
-
+        List<Iterable<Function>> iterables = new LinkedList<>();
         for (Restriction restriction : processedKeys.values())
-            functions = Iterables.concat(functions, restriction.getFunctions());
+            iterables.add(restriction.getFunctions());
 
         if (columnOperations != null)
             for (Operation operation : columnOperations)
-                functions = Iterables.concat(functions, operation.getFunctions());
+                iterables.add(operation.getFunctions());
 
         if (columnConditions != null)
             for (ColumnCondition condition : columnConditions)
-                functions = Iterables.concat(functions, condition.getFunctions());
+                iterables.add(condition.getFunctions());
 
         if (staticConditions != null)
             for (ColumnCondition condition : staticConditions)
-                functions = Iterables.concat(functions, condition.getFunctions());
+                iterables.add(condition.getFunctions());
 
-        return functions;
+        return Iterables.concat(iterables);
     }
 
 public abstract boolean requireFullClusteringKey();



[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-21 Thread samt
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc8a56d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc8a56d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc8a56d2

Branch: refs/heads/trunk
Commit: bc8a56d24da4c5e851952dfcd422d34c6130ce0d
Parents: ef5bbed caae987
Author: Sam Tunnicliffe 
Authored: Thu Apr 21 18:44:39 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Thu Apr 21 18:44:39 2016 +0100

--

--




[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-21 Thread samt
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/caae9870
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/caae9870
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/caae9870

Branch: refs/heads/cassandra-3.0
Commit: caae9870dd19c38251d14a3bcb7b5b2b9839ff79
Parents: bb68078 3244774
Author: Sam Tunnicliffe 
Authored: Thu Apr 21 18:42:43 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Thu Apr 21 18:42:43 2016 +0100

--

--




[jira] [Updated] (CASSANDRA-11479) BatchlogManager unit tests failing on truncate race condition

2016-04-21 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11479:
---
Fix Version/s: 3.x
   3.0.x
   2.2.x
  Component/s: Compaction

> BatchlogManager unit tests failing on truncate race condition
> -
>
> Key: CASSANDRA-11479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11479
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Joel Knighton
>Assignee: Yuki Morishita
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 
> TEST-org.apache.cassandra.batchlog.BatchlogManagerTest.log
>
>
> Example on CI 
> [here|http://cassci.datastax.com/job/trunk_testall/818/testReport/junit/org.apache.cassandra.batchlog/BatchlogManagerTest/testLegacyReplay_compression/].
>  This seems to have only started happening relatively recently (within the 
> last month or two).
> As far as I can tell, this is only showing up on BatchlogManagerTests purely 
> because it is an aggressive user of truncate. The assertion is hit in the 
> setUp method, so it can happen before any of the test methods. The assertion 
> occurs because a compaction is happening when truncate wants to discard 
> SSTables; trace level logs suggest that this compaction is submitted after 
> the pause on the CompactionStrategyManager.
> This should be reproducible by running BatchlogManagerTest in a loop - it 
> takes up to half an hour in my experience. A trace-level log from such a run 
> is attached - grep for my added log message "SSTABLES COMPACTING WHEN 
> DISCARDING" to find when the assert is hit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11352) Include units of metrics in the cassandra-stress tool

2016-04-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11352:

Reviewer: Tyler Hobbs

> Include units of metrics in the cassandra-stress tool 
> --
>
> Key: CASSANDRA-11352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11352
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Rajath Subramanyam
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> cassandra-11352-trunk-giampaolo-trapasso@radicalbit-io.patch
>
>
> cassandra-stress in the Results section can have units for the metrics as an 
> improvement to make the tool more usable. 
> Results:
> op rate   : 14668 [READ:7334, WRITE:7334]
> partition rate: 14668 [READ:7334, WRITE:7334]
> row rate  : 14668 [READ:7334, WRITE:7334]
> latency mean  : 0.7 [READ:0.7, WRITE:0.7]
> latency median: 0.6 [READ:0.6, WRITE:0.6]
> latency 95th percentile   : 0.8 [READ:0.8, WRITE:0.8]
> latency 99th percentile   : 1.2 [READ:1.2, WRITE:1.2]
> latency 99.9th percentile : 8.8 [READ:8.9, WRITE:9.0]
> latency max   : 448.7 [READ:162.3, WRITE:448.7]
> Total partitions  : 105612753 [READ:52805915, WRITE:52806838]
> Total errors  : 0 [READ:0, WRITE:0]
> total gc count: 0
> total gc mb   : 0
> total gc time (s) : 0
> avg gc time(ms)   : NaN
> stdev gc time(ms) : 0
> Total operation time  : 02:00:00
> END



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11621:

Reviewer: Sam Tunnicliffe

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
> Fix For: 2.2.x
>
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.<init>(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.<init>(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> 

[jira] [Created] (CASSANDRA-11626) cqlsh fails and exits on non-ascii chars

2016-04-21 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-11626:


 Summary: cqlsh fails and exits on non-ascii chars
 Key: CASSANDRA-11626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11626
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Priority: Minor


Just seen on cqlsh on current trunk:

To repro, copy {{ä}} (German umlaut) into cqlsh and press return.
cqlsh errors out and immediately exits.

{code}
$ bin/cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol v3]
Use HELP for help.
cqlsh> ä
Invalid syntax at line 1, char 1
Traceback (most recent call last):
  File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2636, in 
main(*read_options(sys.argv[1:], os.environ))
  File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2625, in main
shell.cmdloop()
  File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1114, in cmdloop
if self.onecmd(self.statement.getvalue()):
  File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1139, in onecmd
self.printerr('  %s' % statementline)
  File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2314, in printerr
self.writeresult(text, color, newline=newline, out=sys.stderr)
  File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2303, in 
writeresult
out.write(self.applycolor(str(text), color) + ('\n' if newline else ''))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: 
ordinal not in range(128)
$ 
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11625) CFS.CANONICAL_SSTABLES adds compacting sstables without checking if they are still live

2016-04-21 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252025#comment-15252025
 ] 

Marcus Eriksson commented on CASSANDRA-11625:
-

cc [~pauloricardomg]

> CFS.CANONICAL_SSTABLES adds compacting sstables without checking if they are 
> still live
> ---
>
> Key: CASSANDRA-11625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11625
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x
>
>
> In 2.1 and 2.2 we blindly add all compacting sstables to the 
> ColumnFamilyStore.CANONICAL_SSTABLES
> This could cause issues as we unmark compacting after removing sstables from 
> the tracker and compaction strategies. For example, when creating scanners 
> for validation with LCS we might get overlap within a level as both the old 
> sstables and the new ones could be in CANONICAL_SSTABLES
> What we need to do is to get the *version* of the sstable from the compacting 
> set as it holds the original sstable without moved starts etc (that is what 
> we do in 3.0+)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11625) CFS.CANONICAL_SSTABLES adds compacting sstables without checking if they are still live

2016-04-21 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-11625:
---

 Summary: CFS.CANONICAL_SSTABLES adds compacting sstables without 
checking if they are still live
 Key: CASSANDRA-11625
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11625
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
 Fix For: 2.1.x, 2.2.x


In 2.1 and 2.2 we blindly add all compacting sstables to the 
ColumnFamilyStore.CANONICAL_SSTABLES

This could cause issues as we unmark compacting after removing sstables from 
the tracker and compaction strategies. For example, when creating scanners for 
validation with LCS we might get overlap within a level as both the old 
sstables and the new ones could be in CANONICAL_SSTABLES

What we need to do is to get the *version* of the sstable from the compacting 
set as it holds the original sstable without moved starts etc (that is what we 
do in 3.0+)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11621:

Status: Patch Available  (was: Open)

From the Guava Iterators documentation:

bq. the current implementation is not suitable for nested concatenated iterators, i.e. the following should be avoided when in a loop: {{iterator = Iterators.concat(iterator, suffix)}}, since iteration over the resulting iterator has a cubic complexity to the depth of the nesting.

The patch proposes concatenating all the iterables together at once. {{3.x}} and {{trunk}} already use the same scheme as proposed here, although since all the fields in {{ModificationStatement}} are final there, the null checks are unnecessary. There's also [CASSANDRA-11593|https://issues.apache.org/jira/browse/CASSANDRA-11593] which improves the situation with iterables even further.
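For illustration only (names and code here are not from the patch), a stdlib-only sketch of the flattened-concatenation pattern the patch adopts: collect the iterables first, then wrap them exactly once, so iteration depth stays constant no matter how many sources are concatenated, instead of re-wrapping with concat(result, next) inside a loop:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.NoSuchElementException;

public class FlatConcat {
    // Single-level concatenation over a pre-collected list of sources.
    // Each hasNext()/next() touches at most one inner iterator, unlike a
    // loop of concat(result, next), where every call recurses through each
    // layer of wrapping (the cause of the StackOverflowError above).
    static <T> Iterable<T> concatAll(List<Iterable<T>> sources) {
        return () -> new Iterator<T>() {
            private final Iterator<Iterable<T>> outer = sources.iterator();
            private Iterator<T> inner = Collections.emptyIterator();

            public boolean hasNext() {
                // Advance to the next non-empty source, if any.
                while (!inner.hasNext() && outer.hasNext())
                    inner = outer.next().iterator();
                return inner.hasNext();
            }

            public T next() {
                if (!hasNext())
                    throw new NoSuchElementException();
                return inner.next();
            }
        };
    }

    public static void main(String[] args) {
        // ~4000 one-element sources, mimicking one getFunctions() result
        // per column in the reported ~4000-column table.
        List<Iterable<Integer>> sources = new LinkedList<>();
        for (int i = 0; i < 4000; i++)
            sources.add(Collections.singletonList(i));

        int count = 0;
        for (int ignored : concatAll(sources))
            count++;
        System.out.println(count); // prints 4000
    }
}
```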

Code:

|[trunk|https://github.com/ifesdjeen/cassandra/tree/11621-2.2]|[dtest|https://github.com/ifesdjeen/cassandra-dtest/tree/11621-2.2]|

dtests (for 2.2, 3.0 and trunk, to make sure new dtests are passing for the 
existing code):

|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11621-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11621-3.0-dtest/]
 
|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11621-trunk-dtest/]
 |

Unit tests (only 2.2 code was modified)

|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11621-2.2-testall/]|


> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
> Fix For: 2.2.x
>
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.<init>(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.<init>(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> 

[jira] [Commented] (CASSANDRA-11258) Repair scheduling - Resource locking API

2016-04-21 Thread Marcus Olsson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252011#comment-15252011
 ] 

Marcus Olsson commented on CASSANDRA-11258:
---

bq. From the code it seems that when an LWT insert timeouts, the CasLockFactory 
assumes the lock was not acquired, but maybe the operation succeeded and there 
was a timeout, so we will not be able to re-acquire the lock before it expires. 
So we should perform a read at SERIAL level in this situation to make sure any 
previous in-progress operations are committed and we get the most recent value.
Good catch, I'll add that. 

bq. Is the sufficientNodesForLocking check necessary?
It is mostly to avoid trying to do CAS operations that we know will fail, 
however that check would be done later down in StorageProxy, so it might be 
redundant.

bq. I noticed that we are doing non-LWT reads at ONE, but we should use QUORUM 
instead and that check will be automatically done when reading or writing.
I'll change that.

bq. I think we should adjust our nomenclature and mindset from distributed 
locks to expiring leases, since this is what we are doing rather than 
distributed locking. If you agree, can you rename classes to reflect that?
I agree, leases seems to be a more reasonable term for it.

{quote}
When renewing the lease we should also insert the current lease holder priority 
into the resource_lock_priority table, otherwise other nodes might try to 
acquire the lease while it's being hold (the operation will fail, but the load 
on the system will be higher due to LWT).

We should also probably let lease holders renew leases explicitly rather than 
auto-renewing leases at the lease service, so for example the job scheduler can 
abort the job if it cannot renew the lease. For that matter, we should probably 
extend the DistributedLease interface with methods to renew the lease and/or 
check if it's still valid (perhaps we should have a look at the JINI lease spec 
for inspiration, although it looks a bit verbose).
{quote}
I've taken a look at the JINI lease spec and I think there are some parts of it 
that we wouldn't need, for instance {{setSerialFormat()}} and {{canBatch()}}. 
But the interface could perhaps look like this instead:
{code}
interface Lease {
    long getExpiration();
    void renew(long duration) throws LeaseException;
    void cancel() throws LeaseException;
    boolean valid();
}

interface LeaseGrantor { // Or LeaseFactory
    Lease newLease(long duration, String resource, int priority, Map metadata) throws LeaseException;
}
{code}
I think the {{LeaseMap}} (mentioned in the JINI lease spec) or a similar 
interface will be useful for locking multiple data centers. Maybe it's enough 
to create some kind of {{LeaseCollection}} that bundles the leases together and 
performs renew()/cancel() on all underlying leases?
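For illustration, a minimal in-memory sketch of that {{LeaseCollection}} idea. All names are hypothetical and this is not the proposed CAS-backed implementation; it only shows the fan-out of renew()/cancel() across bundled leases:

```java
import java.util.ArrayList;
import java.util.List;

public class LeaseDemo {
    public static void main(String[] args) throws LeaseException {
        LeaseCollection dcLeases = new LeaseCollection(
                List.of(new InMemoryLease(60_000), new InMemoryLease(30_000)));
        System.out.println(dcLeases.valid()); // prints true
        dcLeases.renew(60_000);               // renews every member lease
        dcLeases.cancel();                    // cancels every member lease
        System.out.println(dcLeases.valid()); // prints false
    }
}

class LeaseException extends Exception {
    LeaseException(String message) { super(message); }
}

// The interface proposed earlier in this thread.
interface Lease {
    long getExpiration();
    void renew(long duration) throws LeaseException;
    void cancel() throws LeaseException;
    boolean valid();
}

// Illustrative stand-in for a real (CAS/LWT-backed) lease.
class InMemoryLease implements Lease {
    private long expiration;
    private boolean cancelled;

    InMemoryLease(long durationMillis) {
        this.expiration = System.currentTimeMillis() + durationMillis;
    }

    public long getExpiration() { return expiration; }

    public void renew(long durationMillis) throws LeaseException {
        if (!valid())
            throw new LeaseException("lease expired or cancelled");
        expiration = System.currentTimeMillis() + durationMillis;
    }

    public void cancel() { cancelled = true; }

    public boolean valid() {
        return !cancelled && System.currentTimeMillis() < expiration;
    }
}

// Bundles leases (e.g. one per data center) and applies every operation to
// all of them; the bundle is valid only while all members are valid.
class LeaseCollection implements Lease {
    private final List<Lease> leases;

    LeaseCollection(List<Lease> leases) { this.leases = new ArrayList<>(leases); }

    public long getExpiration() {
        // The bundle expires when its earliest member does.
        return leases.stream().mapToLong(Lease::getExpiration).min().orElse(0L);
    }

    public void renew(long duration) throws LeaseException {
        for (Lease l : leases) l.renew(duration);
    }

    public void cancel() throws LeaseException {
        for (Lease l : leases) l.cancel();
    }

    public boolean valid() { return leases.stream().allMatch(Lease::valid); }
}
```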

--
I'll also change the keyspace name to {{system_leases}} and the tables to 
{{resource_lease}} and {{resource_lease_priority}}.

> Repair scheduling - Resource locking API
> 
>
> Key: CASSANDRA-11258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11258
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
>
> Create a resource locking API & implementation that is able to lock a 
> resource in a specified data center. It should handle priorities to avoid 
> node starvation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11206:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

Thanks for the review!

Fixed one last issue that would have broken CASSANDRA-11183. After that fix, CI 
finally looks good.

Committed as ef5bbedd687d75923e9a20fde9d2f78b4535241d to trunk.

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: docs-impacting
> Fix For: 3.6
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.
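To illustrate the lookup described above, a simplified sketch with hypothetical types (real IndexInfo entries hold serialized clustering prefixes, not longs): binary-search the sampled entries for the last block starting at or before the target row, then scan only that ~64KB range.

```java
import java.util.List;

public class IndexSampleLookup {
    // Hypothetical, simplified IndexInfo sample: the first row key covered by
    // a ~64KB block of the partition, plus that block's offset.
    record Sample(long firstRowKey, long offset) {}

    // Find the last sampled block whose firstRowKey <= target; only that
    // block needs scanning if the target row exists. Assumes `samples` is
    // non-empty and sorted by firstRowKey.
    static long blockOffsetFor(List<Sample> samples, long target) {
        int lo = 0, hi = samples.size() - 1, best = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (samples.get(mid).firstRowKey() <= target) {
                best = mid;      // candidate block; look for a later one
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return samples.get(best).offset();
    }

    public static void main(String[] args) {
        List<Sample> samples = List.of(
                new Sample(0, 0), new Sample(100, 65536), new Sample(200, 131072));
        // Row 150 falls in the block starting at row 100 / offset 65536.
        System.out.println(blockOffsetFor(samples, 150)); // prints 65536
    }
}
```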



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Support large partitions on the 3.0 sstable format

2016-04-21 Thread snazy
http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef5bbedd/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--
diff --git a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
index 296d142..5ee46da 100644
--- a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
@@ -18,22 +18,23 @@
 package org.apache.cassandra.db.columniterator;
 
 import java.io.IOException;
+import java.util.Comparator;
 import java.util.Iterator;
-import java.util.List;
 import java.util.NoSuchElementException;
 
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.filter.ColumnFilter;
 import org.apache.cassandra.db.rows.*;
+import org.apache.cassandra.io.sstable.IndexInfo;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.CorruptSSTableException;
-import org.apache.cassandra.io.sstable.IndexHelper;
 import org.apache.cassandra.io.util.FileDataInput;
 import org.apache.cassandra.io.util.DataPosition;
+import org.apache.cassandra.io.util.SegmentedFile;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
-abstract class AbstractSSTableIterator implements UnfilteredRowIterator
+public abstract class AbstractSSTableIterator implements UnfilteredRowIterator
 {
 protected final SSTableReader sstable;
 protected final DecoratedKey key;
@@ -46,6 +47,8 @@ abstract class AbstractSSTableIterator implements UnfilteredRowIterator
 
 private final boolean isForThrift;
 
+protected final SegmentedFile ifile;
+
 private boolean isClosed;
 
 protected final Slices slices;
@@ -59,9 +62,11 @@ abstract class AbstractSSTableIterator implements UnfilteredRowIterator
   RowIndexEntry indexEntry,
   Slices slices,
   ColumnFilter columnFilter,
-  boolean isForThrift)
+  boolean isForThrift,
+  SegmentedFile ifile)
 {
 this.sstable = sstable;
+this.ifile = ifile;
 this.key = key;
 this.columns = columnFilter;
 this.slices = slices;
@@ -434,13 +439,13 @@ abstract class AbstractSSTableIterator implements UnfilteredRowIterator
 }
 
 // Used by indexed readers to store where they are of the index.
-protected static class IndexState
+public static class IndexState implements AutoCloseable
 {
 private final Reader reader;
 private final ClusteringComparator comparator;
 
 private final RowIndexEntry indexEntry;
-private final List indexes;
+private final RowIndexEntry.IndexInfoRetriever indexInfoRetriever;
 private final boolean reversed;
 
 private int currentIndexIdx;
@@ -448,43 +453,43 @@ abstract class AbstractSSTableIterator implements UnfilteredRowIterator
         // Marks the beginning of the block corresponding to currentIndexIdx.
         private DataPosition mark;
 
-        public IndexState(Reader reader, ClusteringComparator comparator, RowIndexEntry indexEntry, boolean reversed)
+        public IndexState(Reader reader, ClusteringComparator comparator, RowIndexEntry indexEntry, boolean reversed, SegmentedFile indexFile)
         {
             this.reader = reader;
             this.comparator = comparator;
             this.indexEntry = indexEntry;
-            this.indexes = indexEntry.columnsIndex();
+            this.indexInfoRetriever = indexEntry.openWithIndex(indexFile);
             this.reversed = reversed;
-            this.currentIndexIdx = reversed ? indexEntry.columnsIndex().size() : -1;
+            this.currentIndexIdx = reversed ? indexEntry.columnsIndexCount() : -1;
         }
 
         public boolean isDone()
         {
-            return reversed ? currentIndexIdx < 0 : currentIndexIdx >= indexes.size();
+            return reversed ? currentIndexIdx < 0 : currentIndexIdx >= indexEntry.columnsIndexCount();
         }
 
         // Sets the reader to the beginning of blockIdx.
         public void setToBlock(int blockIdx) throws IOException
         {
-            if (blockIdx >= 0 && blockIdx < indexes.size())
+            if (blockIdx >= 0 && blockIdx < indexEntry.columnsIndexCount())
             {
                 reader.seekToPosition(columnOffset(blockIdx));
                 reader.deserializer.clearState();
             }
 
             currentIndexIdx = blockIdx;
-            reader.openMarker = blockIdx > 0 ? indexes.get(blockIdx - 1).endOpenMarker : null;
+            reader.openMarker = blockIdx > 0 ? index(blockIdx - 

[1/3] cassandra git commit: Support large partitions on the 3.0 sstable format

2016-04-21 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk ae063e806 -> ef5bbedd6


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef5bbedd/test/unit/org/apache/cassandra/db/RowIndexEntryTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/RowIndexEntryTest.java 
b/test/unit/org/apache/cassandra/db/RowIndexEntryTest.java
index 0c7ee59..ebacf34 100644
--- a/test/unit/org/apache/cassandra/db/RowIndexEntryTest.java
+++ b/test/unit/org/apache/cassandra/db/RowIndexEntryTest.java
@@ -20,23 +20,52 @@ package org.apache.cassandra.db;
 import java.io.File;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
 import java.util.Collections;
+import java.util.Iterator;
 import java.util.List;
 
+import com.google.common.primitives.Ints;
+
 import org.apache.cassandra.Util;
+import org.apache.cassandra.cache.IMeasurableMemory;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.db.columniterator.AbstractSSTableIterator;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.LongType;
+import org.apache.cassandra.db.rows.AbstractUnfilteredRowIterator;
+import org.apache.cassandra.db.rows.BTreeRow;
+import org.apache.cassandra.db.rows.BufferCell;
+import org.apache.cassandra.db.rows.ColumnData;
 import org.apache.cassandra.db.rows.EncodingStats;
 import org.apache.cassandra.db.partitions.*;
-import org.apache.cassandra.io.sstable.IndexHelper;
+import org.apache.cassandra.db.rows.RangeTombstoneMarker;
+import org.apache.cassandra.db.rows.Row;
+import org.apache.cassandra.db.rows.Unfiltered;
+import org.apache.cassandra.db.rows.UnfilteredRowIterator;
+import org.apache.cassandra.db.rows.UnfilteredSerializer;
+import org.apache.cassandra.dht.Murmur3Partitioner;
+import org.apache.cassandra.dht.Token;
+import org.apache.cassandra.io.compress.BufferType;
+import org.apache.cassandra.io.sstable.IndexInfo;
+import org.apache.cassandra.io.sstable.format.SSTableFlushObserver;
+import org.apache.cassandra.io.sstable.format.Version;
 import org.apache.cassandra.io.sstable.format.big.BigFormat;
 import org.apache.cassandra.io.util.DataInputBuffer;
+import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputBuffer;
+import org.apache.cassandra.io.util.DataOutputPlus;
+import org.apache.cassandra.io.util.SegmentedFile;
 import org.apache.cassandra.io.util.SequentialWriter;
+import org.apache.cassandra.serializers.LongSerializer;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.ObjectSizes;
+import org.apache.cassandra.utils.btree.BTree;
 
 import org.junit.Assert;
 import org.junit.Test;
@@ -46,68 +75,330 @@ import static junit.framework.Assert.assertTrue;
 
 public class RowIndexEntryTest extends CQLTester
 {
-private static final List<AbstractType<?>> clusterTypes = Collections.singletonList(LongType.instance);
+private static final List<AbstractType<?>> clusterTypes = Collections.singletonList(LongType.instance);
 private static final ClusteringComparator comp = new ClusteringComparator(clusterTypes);
-private static ClusteringPrefix cn(long l)
+
+private static final byte[] dummy_100k = new byte[10];
+
+private static Clustering cn(long l)
 {
 return Util.clustering(comp, l);
 }
 
 @Test
-public void testArtificialIndexOf() throws IOException
+public void testC11206AgainstPreviousArray() throws Exception
+{
+DatabaseDescriptor.setColumnIndexCacheSize(9);
+testC11206AgainstPrevious();
+}
+
+@Test
+public void testC11206AgainstPreviousShallow() throws Exception
+{
+DatabaseDescriptor.setColumnIndexCacheSize(0);
+testC11206AgainstPrevious();
+}
+
+private static void testC11206AgainstPrevious() throws Exception
+{
+// partition without IndexInfo
+try (DoubleSerializer doubleSerializer = new DoubleSerializer())
+{
+doubleSerializer.build(null, partitionKey(42L),
+   Collections.singletonList(cn(42)),
+   0L);
+assertEquals(doubleSerializer.rieOldSerialized, doubleSerializer.rieNewSerialized);
+}
+
+// partition with multiple IndexInfo
+try (DoubleSerializer doubleSerializer = new DoubleSerializer())
+{
+doubleSerializer.build(null, partitionKey(42L),
+   Arrays.asList(cn(42), cn(43), cn(44)),
+   0L);
+assertEquals(doubleSerializer.rieOldSerialized, doubleSerializer.rieNewSerialized);
+}
+
+// partition 

[3/3] cassandra git commit: Support large partitions on the 3.0 sstable format

2016-04-21 Thread snazy
Support large partitions on the 3.0 sstable format

patch by Robert Stupp; reviewed by T Jake Luciani for CASSANDRA-11206


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef5bbedd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef5bbedd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef5bbedd

Branch: refs/heads/trunk
Commit: ef5bbedd687d75923e9a20fde9d2f78b4535241d
Parents: ae063e8
Author: Robert Stupp 
Authored: Thu Apr 21 16:48:26 2016 +0200
Committer: Robert Stupp 
Committed: Thu Apr 21 16:48:26 2016 +0200

--
 CHANGES.txt |1 +
 NEWS.txt|3 +
 conf/cassandra.yaml |8 +
 .../apache/cassandra/cache/AutoSavingCache.java |4 +-
 .../org/apache/cassandra/config/Config.java |1 +
 .../cassandra/config/DatabaseDescriptor.java|   11 +
 .../org/apache/cassandra/db/Clustering.java |6 +
 .../cassandra/db/ClusteringComparator.java  |2 +-
 .../apache/cassandra/db/ClusteringPrefix.java   |   29 +
 .../org/apache/cassandra/db/ColumnIndex.java|  296 +++---
 .../org/apache/cassandra/db/RangeTombstone.java |   10 +
 .../org/apache/cassandra/db/RowIndexEntry.java  | 1006 +++---
 .../cassandra/db/SerializationHeader.java   |   20 -
 .../org/apache/cassandra/db/Serializers.java|   95 +-
 .../columniterator/AbstractSSTableIterator.java |  111 +-
 .../db/columniterator/SSTableIterator.java  |   15 +-
 .../columniterator/SSTableReversedIterator.java |   19 +-
 .../cassandra/db/compaction/Scrubber.java   |4 +-
 .../cassandra/db/compaction/Verifier.java   |4 +-
 .../UnfilteredRowIteratorWithLowerBound.java|   29 +-
 .../cassandra/index/sasi/SASIIndexBuilder.java  |2 +-
 .../org/apache/cassandra/io/ISerializer.java|5 +
 .../cassandra/io/sstable/IndexHelper.java   |  192 
 .../apache/cassandra/io/sstable/IndexInfo.java  |  178 
 .../io/sstable/format/SSTableFormat.java|6 -
 .../io/sstable/format/SSTableReader.java|   10 +-
 .../io/sstable/format/big/BigTableReader.java   |8 +-
 .../io/sstable/format/big/BigTableScanner.java  |6 +-
 .../io/sstable/format/big/BigTableWriter.java   |   32 +-
 .../apache/cassandra/service/CacheService.java  |   12 +-
 .../cassandra/cache/AutoSavingCacheTest.java|   15 +
 .../apache/cassandra/cql3/KeyCacheCqlTest.java  |   76 +-
 .../apache/cassandra/cql3/PagingQueryTest.java  |  111 ++
 .../cql3/TombstonesWithIndexedSSTableTest.java  |   16 +-
 .../org/apache/cassandra/db/KeyCacheTest.java   |  151 ++-
 .../org/apache/cassandra/db/KeyspaceTest.java   |2 +-
 .../apache/cassandra/db/RowIndexEntryTest.java  |  768 +++--
 .../cassandra/io/sstable/IndexHelperTest.java   |   78 --
 .../io/sstable/LargePartitionsTest.java |  219 
 39 files changed, 2775 insertions(+), 786 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef5bbedd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1b94b2d..a0c7df6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
  * JSON datetime formatting needs timezone (CASSANDRA-11137)
  * Add support to rebuild from specific range (CASSANDRA-10409)
  * Optimize the overlapping lookup by calculating all the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef5bbedd/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e073592..a177d37 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -18,6 +18,9 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
+   - Key cache will only hold indexed entries up to the size configured by
+ column_index_cache_size_in_kb in cassandra.yaml in memory. Larger indexed entries
+ will never go into memory. See CASSANDRA-11206 for more details.
- For tables having a default_time_to_live specifying a TTL of 0 will 
remove the TTL
  from the inserted or updated values.
- Startup is now aborted if corrupted transaction log files are found. The 
details

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef5bbedd/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index f9be453..582859c 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -659,6 +659,14 @@ auto_snapshot: true
 #  rows (as part of the key cache), so a larger granularity means
 #  you can cache more hot rows
 column_index_size_in_kb: 64
+# Per sstable indexed key cache 

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-21 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251904#comment-15251904
 ] 

 Brian Hess commented on CASSANDRA-8844:


I think that restricting a user to not be able to ALTER any table inside a 
keyspace that has CDC enabled is a bit too much.  Additionally, I see use cases 
where the keyspace exists already and is in use and then a user layers in CDC - 
namely, that CDC is enabled after the keyspace exists.  So, I'm -1 on saying 
that CDC needs to be there at keyspace creation time.  One big reason is that 
if I want to add CDC later, there isn't a good way in Cassandra to create a new 
keyspace (with CDC enabled) and then copy all the data from the first one to 
the second one (no INSERT-SELECT in Cassandra).  So, I'd be stuck.

As for the durability question, I think we should throw an error if someone 
wants to set the keyspace to have DURABLE_WRITES=false and CDC enabled - or to 
alter the keyspace to put it in that position.  I do not like the idea of 
automagically changing the DURABLE_WRITES=true for the user.  I don't like that 
surprise behavior.
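
The error-on-conflict behavior proposed here could be enforced with a schema-validation check at keyspace create/alter time; the following is a minimal sketch under invented names (class, method, and message are illustrative, not Cassandra's actual validation API):

```java
// Hypothetical check mirroring the proposal above: reject the combination of
// CDC and DURABLE_WRITES=false instead of silently flipping the flag.
public final class CdcDurabilityCheck {
    public static void validate(boolean durableWrites, boolean cdcEnabled) {
        if (cdcEnabled && !durableWrites)
            throw new IllegalArgumentException(
                "CDC-enabled keyspaces require durable_writes=true");
    }
}
```

Both CREATE and ALTER paths would run the same check, so a keyspace can never be altered into the inconsistent state.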

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the 

[jira] [Commented] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster

2016-04-21 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251883#comment-15251883
 ] 

T Jake Luciani commented on CASSANDRA-11615:


The latter.  If a node that stress is connected to disconnects during the run 
(due to GC, latency, etc.), it will reconnect to a new node and have to re-prepare.

Going back to the original problem though.  If prepare is never completing on 
clusters > 50 nodes perhaps there is a deeper bug the driver team needs to look 
into?
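
One way the contention shown in the thread dump below could be relaxed is per-key memoization with ConcurrentHashMap.computeIfAbsent, which blocks only threads racing on the same query rather than serializing every prepare behind one monitor. This is a sketch, not the stress tool's actual fix; the Object statement type and the doPrepare callback stand in for the driver's real types:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Sketch: replace "synchronized (stmts) { check; prepare; put; }" with
// per-key memoization. computeIfAbsent runs the mapping function at most
// once per key; threads preparing *different* queries proceed concurrently.
public final class StmtCache {
    private final ConcurrentMap<String, Object> stmts = new ConcurrentHashMap<>();

    public Object prepare(String query, Function<String, Object> doPrepare) {
        return stmts.computeIfAbsent(query, doPrepare);
    }
}
```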

> cassandra-stress blocks when connecting to a big cluster
> 
>
> Key: CASSANDRA-11615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11615
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 3.0.x
>
> Attachments: 11615-3.0.patch
>
>
> I had a *100* node cluster and was running 
> {code}
> cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' 
> 'limit=1000/s'
> {code}
> Based on the thread dump it looks like it's been blocked at 
> https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96
> {code}
> "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting 
> for monitor entry [0x7f36cc788000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting 
> for monitor entry [0x7f36cc889000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> {code}
> I was trying the same with a smaller cluster (50 nodes) and it was 
> working fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11574:

Labels: cqlsh  (was: )

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in the previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-21 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251870#comment-15251870
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


bq. This is starting to sound way too hairy and scary for my comfort
I agree.

bq. Can't we go with a simpler v1? ...work on the reader/extractor part to add 
some trivial code to skip over anything not needed?
This puts a heavier CPU burden on the consumer to deserialize and discard 
unwanted data. After the points / concerns you've raised (and acknowledging 
that the unknown-unknowns in this case could be far worse), I think the right 
call is to have them accept this much smaller relative burden than expose 
Cassandra to correctness risks and the complexity this current implementation 
introduces.

bq. That way we can't mess up the commit log core
Exactly the reason I wanted a second pair of eyes on the review, specifically 
yours given your experience in this portion of the code-base.

bq. strip out unwanted data during the "archiving" step
Hm. One of our design options we discussed was doing something similar to this, 
however it was going from the Mutation in memory to selective serialization to 
a 2nd log, and the lack of atomicity across multiple log writes made it a 
no-go. If we instead performed this filtering as part of the CDC-move process, 
we'd get the atomicity of the initial write and could instead have our "CDC 
correctness" point be at flush. This also shouldn't further negatively impact 
the "realtime" CDC consumption use-case as they should be able to tail and 
parse the live CommitLogSegment, utilizing their filtering logic from v1 to 
exclude unwanted mutations.

The only other concern I have with the approach of "write all to single 
CommitLogSegment stream, filter on archival / move" is that we pull the CPU and 
heap pressure burden of that filtering into the C* JVM proper. At this point, 
compared to what we're facing w/the dual CLSM approach, I think that's the 
lesser of two evils. As a final counter-point to that - there's no reason the 
C* daemon would need to be the one to perform that filtering, as an external 
process could simply scrape through cdc_overflow and compact the data into a 
3rd directory for final consumption (a.k.a. the unix philosophy).
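
The "filter on archival / move" step described above amounts to a copy that drops mutations for non-CDC tables while relocating a flushed segment. A toy illustration — the record shapes and names here are invented, not Cassandra's commit log types:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of filtering at segment-move time: every mutation is written to the
// single commit log stream (preserving batch atomicity), and only mutations
// belonging to CDC-enabled tables are copied into the CDC directory.
public final class CdcArchiveFilter {
    record Mutation(String table, byte[] payload) {}

    public static List<Mutation> filterForCdc(List<Mutation> segment,
                                              Set<String> cdcTables) {
        return segment.stream()
                      .filter(m -> cdcTables.contains(m.table()))
                      .collect(Collectors.toList());
    }
}
```

As noted, this filtering need not run in the C* JVM at all: an external process scraping the overflow directory could apply the same predicate.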

This design change would open us back up to the option to enable CDC on a 
per-CF basis again instead of per-Keyspace, as writing all mutations to a 
single CommitLog stream would remove the batch atomicity needs that led to 
pushing to a per-keyspace basis in the first place.

I'm going to take a day to think on this and discuss with a few people as these 
changes would clearly push us past 3.6. Thanks for the extensive feedback 
[~blambov].

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per 

[jira] [Commented] (CASSANDRA-11194) materialized views - support explode() on collections

2016-04-21 Thread Keith Wansbrough (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251727#comment-15251727
 ] 

Keith Wansbrough commented on CASSANDRA-11194:
--

We would also find it very useful to be able to create a materialized view on a 
set. The {{explode}} syntax looks good for this:
{code}
CREATE TABLE customers (
  id text PRIMARY KEY,
  data text,
  phones frozen<set<text>>
);
  
CREATE MATERIALIZED VIEW customers_by_phone AS
  SELECT explode(phones), id
  FROM customers
  WHERE phones IS NOT NULL;
{code}

We have a database of customers with an ID as primary key. Each customer has 
zero or more phone numbers. We would like to be able to create a materialized 
view so we can look up by phone number.

Our current schema uses a frozen set for this, but either frozen or unfrozen 
would be fine.

> materialized views - support explode() on collections
> -
>
> Key: CASSANDRA-11194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11194
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jon Haddad
>
> I'm working on a database design to model a product catalog.  Products can 
> belong to categories.  Categories can belong to multiple sub categories 
> (think about Amazon's complex taxonomies).
> My category table would look like this, giving me individual categories & 
> their parents:
> {code}
> CREATE TABLE category (
> category_id uuid primary key,
> name text,
> parents set<uuid>
> );
> {code}
> To get a list of all the children of a particular category, I need a table 
> that looks like the following:
> {code}
> CREATE TABLE categories_by_parent (
> parent_id uuid,
> category_id uuid,
> name text,
> primary key (parent_id, category_id)
> );
> {code}
> The important thing to note here is that a single category can have multiple 
> parents.
> I'd like to propose support for collections in materialized views via an 
> explode() function that would create 1 row per item in the collection.  For 
> instance, I'll insert the following 3 rows (2 parents, 1 child) into the 
> category table:
> {code}
> insert into category (category_id, name, parents) values 
> (009fe0e1-5b09-4efc-a92d-c03720324a4f, 'Parent', null);
> insert into category (category_id, name, parents) values 
> (1f2914de-0adf-4afc-b7ad-ddd8dc876ab1, 'Parent2', null);
> insert into category (category_id, name, parents) values 
> (1f93bc07-9874-42a5-a7d1-b741dc9c509c, 'Child', 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 
> });
> cqlsh:test> select * from category;
>  category_id  | name| parents
> --+-+--
>  009fe0e1-5b09-4efc-a92d-c03720324a4f |  Parent | 
> null
>  1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 | Parent2 | 
> null
>  1f93bc07-9874-42a5-a7d1-b741dc9c509c |   Child | 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1}
> (3 rows)
> {code}
> Given the following CQL to select the child category, utilizing an explode 
> function, I would expect to get back 2 rows, 1 for each parent:
> {code}
> select category_id, name, explode(parents) as parent_id from category where 
> category_id = 1f93bc07-9874-42a5-a7d1-b741dc9c509c;
> category_id  | name  | parent_id
> --+---+--
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 009fe0e1-5b09-4efc-a92d-c03720324a4f
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1
> (2 rows)
> {code}
> This functionality would ideally apply to materialized views, since the 
> ability to control partitioning here would allow us to efficiently query our 
> MV for all categories belonging to a parent in a complex taxonomy.
> {code}
> CREATE MATERIALIZED VIEW categories_by_parent as
> SELECT explode(parents) as parent_id,
> category_id, name FROM category WHERE parents IS NOT NULL
> {code}
> The explode() function is available in Spark Dataframes and my proposed 
> function has the same behavior: 
> http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.explode
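
The fan-out the proposal expects — one output row per element of the collection column — can be illustrated outside CQL. A toy sketch with invented row shapes, assuming the same (category_id, name, parents) columns as the example above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustration of the proposed explode() semantics: a row whose "parents"
// set has N elements produces N output rows, one per parent. A null/empty
// collection produces no rows, matching the "WHERE parents IS NOT NULL"
// restriction in the proposed view definition.
public final class ExplodeDemo {
    record CategoryRow(String categoryId, String name, Set<String> parents) {}
    record ExplodedRow(String categoryId, String name, String parentId) {}

    static List<ExplodedRow> explodeParents(CategoryRow row) {
        List<ExplodedRow> out = new ArrayList<>();
        for (String parent : row.parents())
            out.add(new ExplodedRow(row.categoryId(), row.name(), parent));
        return out;
    }
}
```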





[jira] [Commented] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Andrew Jefferson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251682#comment-15251682
 ] 

Andrew Jefferson commented on CASSANDRA-11621:
--

thanks!

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
> Fix For: 2.2.x
>
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  
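
The failure mode in this trace can be reproduced with plain JDK iterators, independent of Guava: each wrapping layer's hasNext() delegates one stack frame deeper, so an iterator wrapped once per column (thousands of layers) exhausts the stack on the first hasNext() call. A minimal sketch:

```java
import java.util.Iterator;
import java.util.List;

// Demonstrates why deeply nested iterator wrappers (one per concatenated
// column iterator) blow the stack: hasNext() recurses through every layer.
public final class DeepIteratorDemo {
    static Iterator<Integer> wrap(Iterator<Integer> inner) {
        return new Iterator<>() {
            public boolean hasNext() { return inner.hasNext(); } // one frame per wrapper
            public Integer next() { return inner.next(); }
        };
    }

    public static void main(String[] args) {
        Iterator<Integer> it = List.of(1).iterator();
        for (int i = 0; i < 100_000; i++)
            it = wrap(it);          // building the chain is iterative and cheap
        try {
            it.hasNext();           // traversing it recurses ~100k frames deep
        } catch (StackOverflowError expected) {
            System.out.println("StackOverflowError reproduced");
        }
    }
}
```

The usual fix is to flatten: keep the sub-iterators in a queue or list and advance an index, rather than nesting delegating wrappers.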

[jira] [Resolved] (CASSANDRA-11024) Unexpected exception during request; java.lang.StackOverflowError: null

2016-04-21 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov resolved CASSANDRA-11024.
-
Resolution: Duplicate

Closing as duplicate of 
[11621|https://issues.apache.org/jira/browse/CASSANDRA-11621]

> Unexpected exception during request; java.lang.StackOverflowError: null
> ---
>
> Key: CASSANDRA-11024
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11024
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 7, Java x64 1.8.0_65
>Reporter: Kai Wang
>Priority: Minor
>
> This happened when I run a "SELECT *" query on a very wide table. The table 
> has over 1000 columns and a lot of nulls. If I run "SELECT * ... LIMIT 10" or 
> "SELECT a,b,c FROM ...", then it's fine. The data is being actively inserted 
> when I run the query. Will try later when compaction (LCS) catches up.
> {noformat}
> ERROR [SharedPool-Worker-5] 2016-01-15 20:49:08,212 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e11d570, 
> /192.168.0.3:50332 => /192.168.0.11:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.<init>(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.<init>(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at 

[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11621:

Fix Version/s: 2.2.x

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
> Fix For: 2.2.x
>
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.<init>(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.<init>(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> 
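[Editorial note] The long run of identical {{Iterators$5.hasNext}} frames above is the signature of Guava's concatenated iterators: each wrapper delegates hasNext()/next() one layer down, so a row with ~4000 columns builds a delegation chain deep enough to exhaust the JVM stack. A minimal Python sketch of the same failure mode (illustrative only, not Cassandra or Guava code):

```python
import sys

def nested_chain(depth):
    """Wrap an iterator in `depth` pass-through generators, mimicking how a
    chain of concatenated iterators delegates each advance one layer down:
    advancing the outermost iterator recurses through every layer."""
    it = iter([1])
    for _ in range(depth):
        it = (x for x in it)  # each layer forwards to the one below
    return it

assert next(nested_chain(100)) == 1  # shallow chains are fine

# A chain roughly as deep as a ~4000-column row exhausts the (here artificially
# lowered) recursion limit -- the Python analogue of the StackOverflowError above:
sys.setrecursionlimit(1000)
try:
    next(nested_chain(5000))
    overflowed = False
except RecursionError:
    overflowed = True
assert overflowed
```

The usual fix is to flatten such chains iteratively instead of stacking one wrapper per element, which is the direction the linked patch takes.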

[jira] [Updated] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11574:
-
Status: Patch Available  (was: In Progress)

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251642#comment-15251642
 ] 

Stefania commented on CASSANDRA-11574:
--

Here is the patch for all branches affected:

||2.1||2.2||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11574-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11574-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11574-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11574]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11574-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11574-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11574-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11574-dtest/]|

The 2.1 patch up-merges without conflicts. It's so trivial that I've
run CI only on 3.0. We need to ask the test engineering team to set up a cqlsh 
job that runs the copy tests with the Cython extensions enabled. cc 
[~philipthompson].

[~thobbs] or [~pauloricardomg] would you mind reviewing?

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251629#comment-15251629
 ] 

Alex Petrov commented on CASSANDRA-11621:
-

Updated stack to make it complete.

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
>
> I am using CQL to insert into a table that has ~4000 columns
> {code}
>   TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> {code}
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> {code}
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.<init>(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.<init>(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> ...
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 

[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-21 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11621:

Description: 
I am using CQL to insert into a table that has ~4000 columns

{code}
  TABLE_DEFINITION = "
  id uuid,
  "dimension_n" for n in _.range(N_DIMENSIONS)
  ETAG timeuuid,
  PRIMARY KEY (id)
"
{code}

I am using the node.js library from Datastax to execute CQL. This creates a 
prepared statement and then uses it to perform an insert. It works fine on C* 
2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.

I know enough Java to think that recursing an iterator is bad form and should 
be easy to fix.

{code}
ERROR 14:59:01 Unexpected exception during request; channel = [id: 0xaac42a5d, 
/10.0.7.182:58736 => /10.0.0.87:9042]
java.lang.StackOverflowError: null
at 
com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.AbstractIndexedListIterator.<init>(AbstractIndexedListIterator.java:69)
 ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$11.<init>(Iterators.java:1048) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
 ~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
...
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:168)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:223)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:257) 
~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:242) 
~[main/:na]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:123)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_77]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 [main/:na]
at 

[jira] [Commented] (CASSANDRA-11616) cassandra very high cpu rate

2016-04-21 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251539#comment-15251539
 ] 

Sam Tunnicliffe commented on CASSANDRA-11616:
-

[~jobtg] you reported seeing the same symptoms, might this also be the case for 
you?

> cassandra very high cpu rate
> 
>
> Key: CASSANDRA-11616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: CentOS release 6.4
> 4 nodes cluster
> cassandra 3.0.5
> nodetool cfstats mykeyspace show the table data volume: Number of keys 
> (estimate): 77570676
>Reporter: PengtaoGeng
>Assignee: Sam Tunnicliffe
> Attachments: Image.png
>
>
> Even at a very low query rate, CPU utilization reaches 100%.
> Queries are only by partition key or by secondary index.
> Below is the table definition:
> CREATE TABLE mykeyspace.userlabel (
> id text PRIMARY KEY,
> cardno text,
> phone text,
> ccount text,
> username text
> ) ;
> CREATE INDEX userlabel_phone ON mykeyspace.userlabel (phone)
> top -H and jstack show that the high-CPU threads all come from 
> "SharedPool-Worker".
> One thread's jstack output:
> {quote}
> "SharedPool-Worker-28" #205 daemon prio=5 os_prio=0 tid=0x7f1610cc8780 
> nid=0xe7c0 runnable [0x7f0ed566f000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.cassandra.utils.MurmurHash.hash3_x64_128(MurmurHash.java:191)
> at 
> org.apache.cassandra.dht.Murmur3Partitioner.getHash(Murmur3Partitioner.java:181)
> at 
> org.apache.cassandra.dht.Murmur3Partitioner.decorateKey(Murmur3Partitioner.java:53)
> at 
> org.apache.cassandra.db.PartitionPosition$ForKey.get(PartitionPosition.java:49)
> at 
> org.apache.cassandra.db.marshal.PartitionerDefinedOrder.compareCustom(PartitionerDefinedOrder.java:93)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158)
> at 
> org.apache.cassandra.db.ClusteringComparator.compareComponent(ClusteringComparator.java:166)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:137)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:126)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:44)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:378)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.replaceAndSink(MergeIterator.java:266)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108)
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:130)
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.hasNext(CompositesSearcher.java:83)
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65)
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289)
> at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:47)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
> at 

[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Nandakishore Arvapaly (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251536#comment-15251536
 ] 

Nandakishore Arvapaly commented on CASSANDRA-11574:
---

Maybe it's modified in the tar package but not in the repository when we do yum 
install dsc30.

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Updated] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11574:
-
Fix Version/s: (was: 3.0.6)
   3.x
   3.0.x
   2.2.x
   2.1.x

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Updated] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11574:
-
Component/s: (was: CQL)
 Tools

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251530#comment-15251530
 ] 

Stefania commented on CASSANDRA-11574:
--

You're welcome, thanks for helping us debug this.

.pyc and .so are actually Cython extensions and they must be deleted manually 
or else changing the .py has no effect. I had no idea they would be present in 
package deployments. CASSANDRA-11053, which was delivered in 3.0.5, modified 
_pylib/setup.py_ so that people can create these files manually for added 
performance by typing {{python setup.py build_ext --inplace}} in the pylib 
folder. They must be generated somewhere when the package is created. 
[~mshuler]: do you know how the packaging process uses _pylib/setup.py_?

The initial solution, {{copy_options\['numprocesses'\] = 
int(opts.pop('numprocesses', self.get_num_processes(16)))}} is actually 
correct, provided the .pyc and .so are regenerated by running {{python setup.py 
build_ext --inplace}} or, as you already pointed out, if they are removed (in 
which case the original code also works).

I will prepare a patch so that the Cython extensions also work.
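[Editorial note] The "takes no keyword arguments" failure is characteristic of C-level callables: the pure-Python function accepts keywords, but a stale Cython-compiled copy of {{get_num_processes()}} may reject {{cap=16}}. A hedged Python sketch (the {{get_num_processes}} body below is illustrative, not the actual cqlsh source):

```python
import multiprocessing

def get_num_processes(cap):
    """Pure-Python stand-in for cqlsh's helper: keyword arguments work here."""
    return min(cap, multiprocessing.cpu_count())

assert get_num_processes(cap=16) >= 1  # the keyword call site works on the .py

# Compiled (C-level) callables frequently reject keyword arguments entirely;
# the builtin len() raises the same style of TypeError a stale .so would:
try:
    len(obj=[1, 2, 3])
    rejected = False
except TypeError:  # "len() takes no keyword arguments"
    rejected = True
assert rejected

# The workaround from this thread: call positionally, which both copies accept.
assert get_num_processes(16) >= 1
```

This is why deleting or regenerating the .pyc/.so files (or calling positionally) resolves the error even though the .py source looks correct.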

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Nandakishore Arvapaly (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251502#comment-15251502
 ] 

Nandakishore Arvapaly commented on CASSANDRA-11574:
---

Awesome, Stefania!
Now it works.

I replaced the Python code as below, and that also worked:
{{copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(cap=16)))}}

So the problem was with the stale Python .pyc and .so files. I will keep that 
in mind.

Thanks a lot for suggesting and spending your time.

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-21 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251501#comment-15251501
 ] 

Branimir Lambov commented on CASSANDRA-8844:


This is starting to sound way too hairy and scary for my comfort -- you can 
disable the problem points I could see, but what about the ones I couldn't?

Can't we go with a simpler v1? Still single log with an archiver that moves 
_all_ log data to the CDC directory and work on the reader/extractor part to 
add some trivial code to skip over anything not needed? That way we can't mess 
up the commit log core, and we can easily move on to v2 where we either strip 
out unwanted data during the "archiving" step, or think more carefully how to 
properly maintain order in a segregated implementation such as this one. Such a 
v2 wouldn't need any changes to extractors or syntax and depending on 
complexity could also ship in a point-odd release.
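The v1 reader described above only needs trivial skip logic at consumption 
time. A minimal sketch, under the assumption (mine, for illustration) that 
each archived segment yields (table, mutation) pairs:

```python
def cdc_mutations(segments, wanted_tables):
    # v1 sketch: the archiver copies *all* commitlog data to the CDC
    # directory, so the reader is responsible for skipping anything the
    # consumer did not subscribe to.
    for segment in segments:
        for table, mutation in segment:
            if table in wanted_tables:
                yield mutation

# Usage: two segments, only the 'orders' table is CDC-subscribed.
segments = [[('orders', 'm1'), ('users', 'm2')], [('orders', 'm3')]]
print(list(cdc_mutations(segments, {'orders'})))  # → ['m1', 'm3']
```

A later v2 could move this filter into the archiving step without changing 
the consumer-facing behaviour.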

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once, with possible mechanisms for deliver-exactly-once) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to implement 
> CDC logging in only one (or a subset) of the DCs that are being replicated to.
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters, 
> would make Cassandra a much more versatile feeder into other systems and, 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility.
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - 

[jira] [Updated] (CASSANDRA-11609) Nested UDTs cause error when migrating 2.x schema to trunk

2016-04-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11609:
---
Reviewer: Benjamin Lerer

> Nested UDTs cause error when migrating 2.x schema to trunk
> --
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map<text, address>
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251482#comment-15251482
 ] 

Stefania commented on CASSANDRA-11574:
--

Move _copyutil.c_ somewhere else; do the same if you also have a 
_copyutil.so_. 

I assumed an updated .py file would cause the .c file to be recompiled when 
running the Python command, but it doesn't look like it does.
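This is standard Python import behaviour: the file finder probes compiled 
extension suffixes before source suffixes, so a stale _copyutil.so_ (or a 
cached .pyc) shadows an edited _copyutil.py_ until it is removed or rebuilt. 
A quick way to see the suffix groups the interpreter recognises (Python 3 
shown; the ordering was similar on the Python 2 that cqlsh ran on):

```python
import importlib.machinery

# FileFinder tries extension-module suffixes (.so/.pyd) before source (.py),
# which is why a stale compiled copyutil wins over the edited source file.
print(importlib.machinery.EXTENSION_SUFFIXES)  # e.g. ['...cpython....so', '.so']
print(importlib.machinery.SOURCE_SUFFIXES)     # includes '.py'
```

Deleting the .so/.pyc, or regenerating them with {{build_ext --inplace}}, 
brings the compiled copy back in sync with the source.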

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10988) isInclusive and boundsAsComposites in Restriction take bounds in different order

2016-04-21 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251465#comment-15251465
 ] 

Alex Petrov commented on CASSANDRA-10988:
-

Made the suggested changes (moved the logic to {{SelectStatement}}). Also made 
the mixed-order-columns tests run against {{COMPACT}} tables as well, and added 
all tests to both {{3.0}} and {{trunk}} (all are passing). 

|code|utest|dtest|
|[2.2|https://github.com/ifesdjeen/cassandra/tree/10988-2.2]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10988-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10988-2.2-dtest/]|
|[3.0|https://github.com/ifesdjeen/cassandra/tree/10988-3.0]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10988-3.0-testall/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/10988-trunk]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10988-trunk-testall/]|

(I skipped dtests for {{3.0}} and {{trunk}} since there were no code changes, 
only tests.)
Ran all failing unit tests locally; they're all passing. Dtest shows no 
failures.

> isInclusive and boundsAsComposites in Restriction take bounds in different 
> order
> 
>
> Key: CASSANDRA-10988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10988
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Vassil Hristov
>Assignee: Alex Petrov
> Fix For: 2.2.x
>
>
> After we upgraded our cluster to version 2.1.11, we started getting the 
> exceptions below for some of our queries. The issue seems to be very similar 
> to CASSANDRA-7284.
> Code to reproduce:
> {code:java}
> createTable("CREATE TABLE %s (" +
> "a text," +
> "b int," +
> "PRIMARY KEY (a, b)" +
> ") WITH COMPACT STORAGE " +
> "AND CLUSTERING ORDER BY (b DESC)");
> execute("insert into %s (a, b) values ('a', 2)");
> execute("SELECT * FROM %s WHERE a = 'a' AND b > 0");
> {code}
> {code:java}
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1197)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1205)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1283)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1250)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:276)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:138)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>  [apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>  [apache-cassandra-2.1.11.jar:2.1.11]
> at 
> 
