[jira] [Updated] (CASSANDRA-11246) (windows) dtest failure in replace_address_test.TestReplaceAddress.replace_with_reset_resume_state_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11246:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> replace_address_test.TestReplaceAddress.replace_with_reset_resume_state_test
> ---
>
> Key: CASSANDRA-11246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11246
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/165/testReport/replace_address_test/TestReplaceAddress/replace_with_reset_resume_state_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #165
> This test has flapped twice in recent history; it looks like a possible test issue, 
> perhaps with invalid yaml at startup (somehow).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10916:
--
Labels: dtest windows  (was: dtest)

> TestGlobalRowKeyCache.functional_test fails on Windows
> --
>
> Key: CASSANDRA-10916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Joshua McKenzie
>  Labels: dtest, windows
> Fix For: 3.0.x
>
>
> {{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails 
> hard on Windows when a node fails to start:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/
> I have not dug much into the failure history, so I don't know how closely the 
> failures are related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11298) (windows) dtest failure in repair_tests.repair_test.TestRepairDataSystemTable.repair_table_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11298:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> repair_tests.repair_test.TestRepairDataSystemTable.repair_table_test
> ---
>
> Key: CASSANDRA-11298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11298
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/191/testReport/repair_tests.repair_test/TestRepairDataSystemTable/repair_table_test
> Failed on CassCI build cassandra-3.0_dtest_win32 #191
> This is a single new failure, but the error message looks suspicious and is 
> worth digging into.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11266) (windows) dtest failure in read_repair_test.TestReadRepair.alter_rf_and_run_read_repair_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11266:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> read_repair_test.TestReadRepair.alter_rf_and_run_read_repair_test
> 
>
> Key: CASSANDRA-11266
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11266
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/read_repair_test/TestReadRepair/alter_rf_and_run_read_repair_test
> Failed on CassCI build cassandra-3.0_dtest_win32 #140
> Failing on every run; looks like it could be a test or Cassandra issue.
> {noformat}
> Couldn't identify initial replica
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10639) Commitlog compression test fails on Windows

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10639:
--
Labels: dtest windows  (was: dtest)

> Commitlog compression test fails on Windows
> ---
>
> Key: CASSANDRA-10639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10639
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Jim Witschey
>Assignee: Joshua McKenzie
>  Labels: dtest, windows
> Fix For: 3.0.x
>
>
> {{commitlog_test.py:TestCommitLog.test_compression_error}} fails on Windows 
> under CassCI. It fails in a number of different ways. Here, it looks like 
> reading the CRC fails:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/commitlog_test/TestCommitLog/test_compression_error/
> Here, I believe it fails when trying to validate the CRC header:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/99/testReport/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L497
> Here's another failure where the header has a {{Q}} written in it instead of 
> a closing brace:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/91/testReport/junit/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L513
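> As a rough illustration of the kind of check involved, here is a minimal sketch of 
> reading and validating a stored CRC, assuming a purely hypothetical header layout 
> (field names, offsets, and the 16-byte size are illustrative, not the actual commit 
> log format; the real assertions live in commitlog_test.py):
> {code}
> import struct
> import zlib
>
> def header_crc_matches(path):
>     # Hypothetical layout: 4-byte version, 8-byte segment id, 4-byte stored CRC.
>     with open(path, 'rb') as f:
>         header = f.read(16)
>     if len(header) < 16:
>         raise ValueError('truncated commit log header')
>     version, segment_id, stored_crc = struct.unpack('>iqi', header)
>     computed = zlib.crc32(header[:12]) & 0xffffffff
>     return computed == (stored_crc & 0xffffffff)
> {code}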
> [~bdeggleston] Do I remember correctly that you wrote this test? Can you take 
> this on?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11251) (windows) dtest failure in putget_test.TestPutGet.non_local_read_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11251:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in putget_test.TestPutGet.non_local_read_test
> -
>
> Key: CASSANDRA-11251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11251
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/174/testReport/putget_test/TestPutGet/non_local_read_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #174
> Failing intermittently, error:
> {noformat}
> code=1500 [Replica(s) failed to execute write] message="Operation failed - 
> received 1 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 1, 'required_responses': 2, 'consistency': 'QUORUM'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11234) (windows) dtest failure in largecolumn_test.TestLargeColumn.cleanup_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11234:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in largecolumn_test.TestLargeColumn.cleanup_test
> 
>
> Key: CASSANDRA-11234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11234
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/largecolumn_test/TestLargeColumn/cleanup_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #156
> Failing consistently; looks like it may be a Python platform issue or something 
> similar:
> {noformat}
> Expected output from nodetool gcstats starts with a header line with first 
> column Interval
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11236) (windows) dtest failure in scrub_test.TestScrub.test_standalone_scrub

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11236:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in scrub_test.TestScrub.test_standalone_scrub
> -
>
> Key: CASSANDRA-11236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11236
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/scrub_test/TestScrub/test_standalone_scrub
> Failed on CassCI build cassandra-2.2_dtest_win32 #156
> Failing on every run on Windows, with:
> {noformat}
> sstablescrub failed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11252) (windows) dtest failure in read_repair_test.TestReadRepair.range_slice_query_with_tombstones_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11252:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> read_repair_test.TestReadRepair.range_slice_query_with_tombstones_test
> -
>
> Key: CASSANDRA-11252
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11252
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/176/testReport/read_repair_test/TestReadRepair/range_slice_query_with_tombstones_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #176
> {noformat}
> Trace information was not available within 120.00 seconds. Consider 
> raising Session.max_trace_wait.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11249) (windows) dtest failure in paging_test.TestPagingData.test_paging_using_secondary_indexes

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11249:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> paging_test.TestPagingData.test_paging_using_secondary_indexes
> -
>
> Key: CASSANDRA-11249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11249
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/169/testReport/paging_test/TestPagingData/test_paging_using_secondary_indexes
> Failed on CassCI build cassandra-2.2_dtest_win32 #169



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10915) netstats_test dtest fails on Windows

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10915:
--
Labels: dtest windows  (was: dtest)

> netstats_test dtest fails on Windows
> 
>
> Key: CASSANDRA-10915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10915
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest, windows
> Fix For: 3.0.x
>
>
> jmx_test.py:TestJMX.netstats_test started failing hard on Windows about a 
> month ago:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/junit/jmx_test/TestJMX/netstats_test/history/?start=25
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/jmx_test/TestJMX/netstats_test/history/
> It fails when it is unable to connect to a node via JMX. I don't know if this 
> problem has any relationship to CASSANDRA-10913.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11281) (windows) dtest failures with permission issues on trunk

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11281:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failures with permission issues on trunk
> 
>
> Key: CASSANDRA-11281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11281
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest_win32/337/testReport/bootstrap_test/TestBootstrap/shutdown_wiped_node_cannot_join_test
> Failed on CassCI build trunk_dtest_win32 #337
> Failing tests with very similar error messages:
> * 
> compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_strategy_switching_test
> * 
> compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_strategy_switching_test
> * bootstrap_test.TestBootstrap.shutdown_wiped_node_cannot_join_test
> * bootstrap_test.TestBootstrap.killed_wiped_node_cannot_join_test
> * bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_join_test
> * 
> bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_gossip_to_single_seed_test
> * bootstrap_test.TestBootstrap.failed_bootstrap_wiped_node_can_join_test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11242) (windows) dtest failure in commitlog_test.TestCommitLog.ignore_failure_policy_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11242:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.ignore_failure_policy_test
> --
>
> Key: CASSANDRA-11242
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11242
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/162/testReport/commitlog_test/TestCommitLog/ignore_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #162
> Looks like a probable test issue; failing intermittently. Recent failure 
> message:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11241) (windows) dtest failure in jmx_test.TestJMX.phi_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11241:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in jmx_test.TestJMX.phi_test
> 
>
> Key: CASSANDRA-11241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11241
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/162/testReport/jmx_test/TestJMX/phi_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #162
> Looks like a probable test issue, failing every run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11267) (windows) dtest failure in upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11267:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
> 
>
> Key: CASSANDRA-11267
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11267
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/167/testReport/upgrade_internal_auth_test/TestAuthUpgrade/upgrade_to_30_test
> Failed on CassCI build cassandra-3.0_dtest_win32 #167
> This test is flapping pretty frequently. The failure cause is not certain yet 
> and might vary across builds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11127) index_summary_upgrade_test.py is failing

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11127:
--
Labels: dtest windows  (was: dtest)

> index_summary_upgrade_test.py is failing
> 
>
> Key: CASSANDRA-11127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest, windows
> Fix For: 3.0.x
>
> Attachments: node1_debug.log
>
>
> index_summary_upgrade_test.py is failing on the cassandra-3.0 branch, when 
> run without vnodes. The exception I'm seeing on cassci is different from the 
> one I see locally. The cassci failure is 
> [here|http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/157/testReport/index_summary_upgrade_test/TestUpgradeIndexSummary/test_upgrade_index_summary/].
> Locally I see the following:
> {code}
> 'ERROR [SSTableBatchOpen:2] 2016-02-05 15:29:04,304 CassandraDaemon.java:195 
> - Exception in thread 
> Thread[SSTableBatchOpen:2,5,main]\njava.lang.AssertionError: Illegal bounds 
> [4..8); size: 4\n\tat 
> org.apache.cassandra.io.util.Memory.checkBounds(Memory.java:339) 
> ~[main/:na]\n\tat org.apache.cassandra.io.util.Memory.getInt(Memory.java:292) 
> ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.IndexSummary.getPositionInSummary(IndexSummary.java:146)
>  ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:151) 
> ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.format.SSTableReader.validateSummarySamplingLevel(SSTableReader.java:928)
>  ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:748)
>  ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:705)
>  ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:491)
>  ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:374)
>  ~[main/:na]\n\tat 
> org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:533)
>  ~[main/:na]\n\tat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_66]\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_66]\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66]\n\tat 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_66]\n\tat java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]']
> {code}
> Node log is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11253) (windows) dtest failure in paging_test.TestPagingData.test_paging_using_secondary_indexes

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11253:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> paging_test.TestPagingData.test_paging_using_secondary_indexes
> -
>
> Key: CASSANDRA-11253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11253
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/177/testReport/paging_test/TestPagingData/test_paging_using_secondary_indexes
> Failed on CassCI build cassandra-2.2_dtest_win32 #177
> The test is failing intermittently; most recently:
> {noformat}
> code=1000 [Unavailable exception] message="Cannot achieve consistency level 
> ALL" info={'required_replicas': 2, 'alive_replicas': 1, 'consistency': 'ALL'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11248) (windows) dtest failure in commitlog_test.TestCommitLog.stop_failure_policy_test and stop_commit_failure_policy_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11248:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.stop_failure_policy_test and 
> stop_commit_failure_policy_test
> 
>
> Key: CASSANDRA-11248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11248
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/167/testReport/commitlog_test/TestCommitLog/stop_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #167
> Failing intermittently; looks possibly related to CASSANDRA-11242, with:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}
> But there's another suspect message here, not present on CASSANDRA-11242, which is
> {noformat}
> [node1 ERROR] Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file 
> D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra\/logs/gc.log due to 
> No such file or directory
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11268) (windows) dtest failure in incremental_repair_test.TestIncRepair.multiple_repair_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11268:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> incremental_repair_test.TestIncRepair.multiple_repair_test
> -
>
> Key: CASSANDRA-11268
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11268
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/174/testReport/incremental_repair_test/TestIncRepair/multiple_repair_test
> Failed on CassCI build cassandra-3.0_dtest_win32 #174
> The test is flapping intermittently; the failure looks like:
> {noformat}
> Unexpected error in node1 node log: ['ERROR [STREAM-IN-/127.0.0.3] 2016-01-08 
> 23:11:45,653 StreamSession.java:520 - [Stream 
> #2f390d30-b65d-11e5-aebf-f1f62dad0b04] Streaming error 
> occurred\njava.io.IOException: An existing connection was forcibly closed by 
> the remote host
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11276) (windows) dtest failure in commitlog_test.TestCommitLog.test_compression_error

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11276:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in commitlog_test.TestCommitLog.test_compression_error
> --
>
> Key: CASSANDRA-11276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11276
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/178/testReport/commitlog_test/TestCommitLog/test_compression_error
> Failed on CassCI build cassandra-3.0_dtest_win32 #178
> Intermittent failures of this test on Windows; error:
> {noformat}
> 11 Feb 2016 20:00:22 [node1] Missing: ['Could not create Compression for type 
> org.apache.cassandra.io.compress.LZ5Compressor']
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11240) (windows) dtest failure in scrub_test.TestScrub.test_standalone_scrub_essential_files_only

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11240:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> scrub_test.TestScrub.test_standalone_scrub_essential_files_only
> --
>
> Key: CASSANDRA-11240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11240
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/scrub_test/TestScrub/test_standalone_scrub_essential_files_only
> Failed on CassCI build cassandra-2.2_dtest_win32 #156



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11245) (windows) dtest failure in commitlog_test.TestCommitLog.die_failure_policy_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11245:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.die_failure_policy_test
> ---
>
> Key: CASSANDRA-11245
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11245
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/165/testReport/commitlog_test/TestCommitLog/die_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #165
> Flaky test; fails intermittently. Error message:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}
> Looks like it could have the same cause as, or be related to, CASSANDRA-11242.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11247) (windows) dtest failure in repair_test.TestRepair.simple_parallel_repair_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11247:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in repair_test.TestRepair.simple_parallel_repair_test
> -
>
> Key: CASSANDRA-11247
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11247
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/166/testReport/repair_test/TestRepair/simple_parallel_repair_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #166



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11235) (windows) dtest failure in offline_tools_test.TestOfflineTools.sstablelevelreset_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11235:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> offline_tools_test.TestOfflineTools.sstablelevelreset_test
> -
>
> Key: CASSANDRA-11235
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11235
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/offline_tools_test/TestOfflineTools/sstablelevelreset_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #156
> Looks to be a test issue: something with JNA, or unexpected stderr text that 
> can possibly be ignored.
> {noformat}
> Found line 
> "WARN  13:43:40 JNA link failure, one or more native method will be 
> unavailable."
>  in error
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11250) (windows) dtest failure in upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test

2016-04-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-11250:
--
Labels: dtest windows  (was: dtest)

> (windows) dtest failure in 
> upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
> 
>
> Key: CASSANDRA-11250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11250
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/174/testReport/upgrade_internal_auth_test/TestAuthUpgrade/upgrade_to_22_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #174
> looks like there could be multiple causes for this intermittent failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10916:
--
Labels: dtest  (was: )

> TestGlobalRowKeyCache.functional_test fails on Windows
> --
>
> Key: CASSANDRA-10916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Joshua McKenzie
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails 
> hard on Windows when a node fails to start:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/
> I have not dug much into the failure history, so I don't know how closely the 
> failures are related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10912:
--
Labels: dtest  (was: )

> resumable_bootstrap_test dtest flaps
> 
>
> Key: CASSANDRA-10912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when 
> a node fails to start listening for connections via CQL:
> {code}
> 21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
> {code}
> I've seen it on 2.2 HEAD:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> and 3.0 HEAD:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
> and trunk:
> http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
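> For context, the dtest only proceeds once the expected line shows up in the node 
> log (the real tests use ccm's watch_log_for). A minimal standalone sketch of that 
> polling, with an illustrative helper name and timeout:
> {code}
> import time
>
> def wait_for_log_line(log_path, needle, timeout=90):
>     # Poll the node log until the expected line appears or the timeout expires.
>     deadline = time.time() + timeout
>     while time.time() < deadline:
>         try:
>             with open(log_path) as f:
>                 if any(needle in line for line in f):
>                     return True
>         except IOError:
>             pass  # the log file may not exist yet while the node is starting
>         time.sleep(1)
>     raise AssertionError("Missing: [%r]" % needle)
>
> # e.g. wait_for_log_line('node3/logs/system.log', 'Starting listening for CQL clients')
> {code}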



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10896) Fix skipping logic on upgrade tests in dtest

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10896:
--
Labels: dtest  (was: )

> Fix skipping logic on upgrade tests in dtest
> 
>
> Key: CASSANDRA-10896
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10896
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.0.x
>
>
> This will be a general ticket for upgrade dtests that fail because of bad 
> logic surrounding skipping tests. We need a better system in place for 
> skipping tests that are not intended to work on certain versions of 
> Cassandra; at present, we run the upgrade tests with {{SKIP=false}} because, 
> again, the built-in skipping logic is bad.
> One such test is test_v2_protocol_IN_with_tuples:
> http://cassci.datastax.com/job/storage_engine_upgrade_dtest-22_tarball-311/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3/test_v2_protocol_IN_with_tuples/
> This shouldn't be run on clusters that include nodes running 3.0.
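> As an illustration of the kind of explicit, version-aware skipping that would help, 
> here is a minimal sketch; the decorator name and the get_node_versions() helper are 
> assumptions for the example, not the dtest framework's actual API:
> {code}
> from distutils.version import LooseVersion
> from functools import wraps
> from unittest import SkipTest
>
> def skip_on_versions_at_or_above(max_version):
>     # Skip a test when any node in the cluster runs max_version or newer.
>     def decorator(test_method):
>         @wraps(test_method)
>         def wrapper(self, *args, **kwargs):
>             versions = [LooseVersion(v) for v in self.get_node_versions()]  # assumed helper
>             if any(v >= LooseVersion(max_version) for v in versions):
>                 raise SkipTest('not intended to run against nodes >= %s' % max_version)
>             return test_method(self, *args, **kwargs)
>         return wrapper
>     return decorator
>
> # e.g. mark test_v2_protocol_IN_with_tuples so it is skipped on clusters with 3.0 nodes:
> # @skip_on_versions_at_or_above('3.0')
> {code}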



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10915) netstats_test dtest fails on Windows

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10915:
--
Labels: dtest  (was: )

> netstats_test dtest fails on Windows
> 
>
> Key: CASSANDRA-10915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10915
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 3.0.x
>
>
> jmx_test.py:TestJMX.netstats_test started failing hard on Windows about a 
> month ago:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/junit/jmx_test/TestJMX/netstats_test/history/?start=25
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/jmx_test/TestJMX/netstats_test/history/
> It fails when it is unable to connect to a node via JMX. I don't know if this 
> problem has any relationship to CASSANDRA-10913.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10884) test_refresh_schema_on_timeout_error dtest flapping on CassCI

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10884:
--
Labels: dtest  (was: )

> test_refresh_schema_on_timeout_error dtest flapping on CassCI
> -
>
> Key: CASSANDRA-10884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10884
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 3.0.x
>
>
> These tests create keyspaces and tables through cqlsh, then run {{DESCRIBE}} 
> to confirm they were successfully created. These tests flap under the novnode 
> dtest runs:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/
> http://cassci.datastax.com/job/cassandra-2.2_novnode_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/
> I have not reproduced this locally on Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10668) bootstrap_test.TestBootstrap.resumable_bootstrap_test is failing

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10668:
--
Labels: dtest  (was: )

> bootstrap_test.TestBootstrap.resumable_bootstrap_test is failing
> 
>
> Key: CASSANDRA-10668
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10668
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Sylvain Lebresne
>Assignee: Yuki Morishita
>  Labels: dtest
> Fix For: 3.0.x
>
>
> From the [test 
> history|http://cassci.datastax.com/job/cassandra-3.0_dtest/335/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/],
>  it seems the test has been flappy for a while, but it's been constantly 
> failing for the last few builds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10848) Upgrade paging dtests involving deletion flap on CassCI

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10848:
--
Labels: dtest  (was: )

> Upgrade paging dtests involving deletion flap on CassCI
> ---
>
> Key: CASSANDRA-10848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10848
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
>
> A number of dtests in the {{upgrade_tests.paging_tests}} that involve 
> deletion flap with the following error:
> {code}
> Requested pages were not delivered before timeout.
> {code}
> This may just be an effect of CASSANDRA-10730, but it's worth having a look 
> at separately. Here are some examples of tests flapping in this way:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/422/testReport/junit/upgrade_tests.paging_test/TestPagingWithDeletionsNodes2RF1/test_multiple_partition_deletions/
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/422/testReport/junit/upgrade_tests.paging_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10639) Commitlog compression test fails on Windows

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10639:
--
Labels: dtest  (was: )

> Commitlog compression test fails on Windows
> ---
>
> Key: CASSANDRA-10639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10639
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Jim Witschey
>Assignee: Joshua McKenzie
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{commitlog_test.py:TestCommitLog.test_compression_error}} fails on Windows 
> under CassCI. It fails in a number of different ways. Here, it looks like 
> reading the CRC fails:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/commitlog_test/TestCommitLog/test_compression_error/
> Here, I believe it fails when trying to validate the CRC header:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/99/testReport/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L497
> Here's another failure where the header has a {{Q}} written in it instead of 
> a closing brace:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/91/testReport/junit/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L513
> [~bdeggleston] Do I remember correctly that you wrote this test? Can you take 
> this on?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10869) paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions dtest fails on 2.1

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10869:
--
Labels: dtest  (was: )

> paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions dtest 
> fails on 2.1
> --
>
> Key: CASSANDRA-10869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10869
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 2.1.x
>
>
> This test is failing hard on 2.1. Here is its history on the JDK8 job for 
> cassandra-2.1:
> http://cassci.datastax.com/job/cassandra-2.1_dtest_jdk8/lastCompletedBuild/testReport/paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/
> and on the JDK7 job:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/
> It fails because a read times out after ~1.5 minutes. If this is a test 
> error, it's specific to 2.1, because the test passes consistently on newer 
> versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10867) thrift_tests.py:TestCQLAccesses.test_range_tombstone_and_static failing on C* 2.1

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10867:
--
Labels: dtest  (was: )

> thrift_tests.py:TestCQLAccesses.test_range_tombstone_and_static failing on C* 
> 2.1
> -
>
> Key: CASSANDRA-10867
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10867
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Sylvain Lebresne
>  Labels: dtest
>
> http://cassci.datastax.com/job/cassandra-2.1_dtest/376/testReport/thrift_tests/TestCQLAccesses/test_range_tombstone_and_static/history/
> I haven't had enough experience with thrift or the thrift tests to debug 
> this. It passes on 2.2+. I've reproduced this failure locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10664) Fix failing tests

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10664:
--
Labels: dtest  (was: )

> Fix failing tests
> -
>
> Key: CASSANDRA-10664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10664
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sylvain Lebresne
>  Labels: dtest
> Fix For: 3.0.x
>
>
> This is the continuation of CASSANDRA-10166, just a meta-ticket to group all 
> tickets related to fixing any unit test or dtest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10612) Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted

2016-02-04 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-10612:
--
Labels: dtest  (was: )

> Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted
> --
>
> Key: CASSANDRA-10612
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10612
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x
>
>
> The following tests in the upgrade_through_versions dtest suite fail:
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_HEAD.rolling_upgrade_test
> See this report:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/
> They fail with the following error:
> {code}
> A subprocess has terminated early. Subprocess statuses: Process-41 (is_alive: 
> True), Process-42 (is_alive: False), Process-43 (is_alive: True), Process-44 
> (is_alive: False), attempting to terminate remaining subprocesses now.
> {code}
> and with logs that look like this:
> {code}
> Unexpected error in node1 node log: ['ERROR [SecondaryIndexManagement:1] 
> 2015-10-27 00:06:52,335 CassandraDaemon.java:195 - Exception in thread 
> Thread[SecondaryIndexManagement:1,5,main] java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:368) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.buildBlocking(CassandraIndex.java:688)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.lambda$getBuildIndexTask$206(CassandraIndex.java:658)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$$Lambda$151/1841229245.call(Unknown
>  Source) ~[na:na]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 
> 578160/1663620)bytes
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_51]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_51]
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:364) 
> ~[main/:na]
> ... 7 common frames omitted Caused by: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at 
> org.apache.cassandra.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:67)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1269)
>  ~[main/:na]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
> ... 4 common frames omitted', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:08:48,520 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:11:58,336 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]']
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6177) remove all sleeps in the dtests

2013-11-06 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-6177:
-

Assignee: (was: Daniel Meyer)

 remove all sleeps in the dtests
 ---

 Key: CASSANDRA-6177
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6177
 Project: Cassandra
  Issue Type: Test
Reporter: Brandon Williams

 The dtests use a ton of sleep calls for various things, most of which are 
 guessing whether Cassandra has finished doing something or not.  Guessing is 
 problematic and shouldn't be necessary -- a prime example of this is creating 
 a ks or cf.  When done over cql, we sleep and hope it's done propagating, but 
 when done over thrift we actually check for schema agreement.  We should be 
 able to eliminate the sleeps and reliably detect when it's time for the next 
 step programmatically.
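 As a minimal sketch of the alternative, assuming a python-driver Session and the 
 schema_version columns in system.local and system.peers (the helper name and the 
 timeout value are illustrative):
 {code}
 import time

 def wait_for_schema_agreement(session, timeout=30):
     # Poll the system tables until every node reports the same schema version,
     # instead of sleeping a fixed amount and hoping propagation has finished.
     deadline = time.time() + timeout
     while time.time() < deadline:
         local = list(session.execute("SELECT schema_version FROM system.local"))
         peers = list(session.execute("SELECT schema_version FROM system.peers"))
         versions = set(row.schema_version for row in local + peers)
         if len(versions) == 1:
             return
         time.sleep(0.5)
     raise RuntimeError("schema agreement not reached within %ss" % timeout)
 {code}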



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-4981) Error when starting a node with vnodes while counter-add operations underway

2013-10-29 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw resolved CASSANDRA-4981.
--

Resolution: Cannot Reproduce

 Error when starting a node with vnodes while counter-add operations underway
 

 Key: CASSANDRA-4981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4981
 Project: Cassandra
  Issue Type: Bug
 Environment: 2-node cluster on ec2, ubuntu, cassandra-1.2.0 commit 
 a32eb9f7d2f2868e8154d178e96e045859e1d855
Reporter: Tyler Patterson
Assignee: Ryan McGuire
Priority: Minor
 Attachments: system.log


 Start both nodes, start stress on one node like this: cassandra-stress 
 --replication-factor=2 --operation=COUNTER_ADD
 While that is running: On the other node, kill cassandra, wait for nodetool 
 status to show the node as down, and restart cassandra. I sometimes have to 
 kill and restart cassandra several times to get the problem to happen.
 I get this error several times in the log:
 {code}
 ERROR 15:39:33,198 Exception in thread Thread[MutationStage:16,5,main]
 java.lang.AssertionError
   at 
 org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:748)
   at 
 org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:762)
   at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:95)
   at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2426)
   at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:396)
   at 
 org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:755)
   at 
 org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:53)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-5325) Report generator for stress testing two branches

2013-10-29 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-5325:
-

Description: 
We need a simple and automatic way of reporting/charting the performance 
differences between two different branches of C* using cassandra-stress. 

* Bootstrap appropriate java and cassandra onto a set of nodes
* Create cluster out of those nodes
* Run cassandra-stress write
* Allow compaction to settle
* Run cassandra-stress read
* Gather statistics and chart 


  was:
We need a simple and automatic way of reporting/charting the performance 
differences between two different branches of C* using cassandra-stress. 

* Bootstrap appropriate java and cassandra onto a set of nodes
* Create cluster out of those nodes
* Run cassandra-stress write
* Allow compaction to settle
* Run cassandra-stress read
* Gather statistics and chart 


 Report generator for stress testing two branches
 

 Key: CASSANDRA-5325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5325
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire

 We need a simple and automatic way of reporting/charting the performance 
 differences between two different branches of C* using cassandra-stress. 
 * Bootstrap appropriate java and cassandra onto a set of nodes
 * Create cluster out of those nodes
 * Run cassandra-stress write
 * Allow compaction to settle
 * Run cassandra-stress read
 * Gather statistics and chart 
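 A minimal sketch of the run-and-compare step under stated assumptions (the 
 old-style --operation flag mirrors the COUNTER_ADD invocation quoted earlier in 
 this digest; timing by wall clock is a placeholder for real op-rate/latency 
 parsing):
 {code}
 import subprocess
 import time

 def run_stress(operation, extra_args=()):
     # Time a single cassandra-stress run; wall-clock duration stands in for
     # the op-rate and latency statistics a real report would gather.
     cmd = ['cassandra-stress', '--operation=%s' % operation] + list(extra_args)
     start = time.time()
     subprocess.check_call(cmd)
     return time.time() - start

 def report(results):
     # results: {'branch-a': {'INSERT': 120.3, 'READ': 98.7}, ...}
     for branch, timings in sorted(results.items()):
         summary = ', '.join('%s=%.1fs' % (op, t) for op, t in sorted(timings.items()))
         print('%s: %s' % (branch, summary))
 {code}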



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6116) /etc/init.d/cassandra stop and service don't work

2013-09-30 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-6116:


 Summary: /etc/init.d/cassandra stop and service don't work
 Key: CASSANDRA-6116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6116
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Priority: Minor


These used to work in 2.0.0; the regression appears to have been introduced in 2.0.1.

Test Scenario
{noformat}
# Start Server
automaton@ip-10-171-39-230:~$ sudo service cassandra start
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
-Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k


# Check Status
automaton@ip-10-171-39-230:~$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load   Tokens  Owns   Host ID   
Rack
UN  127.0.0.1  81.72 KB   256 100.0%  e40ef77c-9cf7-4e27-b651-ede3b7269019  
rack1


# Check Status of service
automaton@ip-10-171-39-230:~$ sudo service cassandra status
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
-Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 * Cassandra is not running


# Stop Server
automaton@ip-10-171-39-230:~$ sudo service cassandra stop
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
-Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k


# Verify Server is no longer up
automaton@ip-10-171-39-230:~$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load   Tokens  Owns   Host ID   
Rack
UN  127.0.0.1  81.72 KB   256 100.0%  e40ef77c-9cf7-4e27-b651-ede3b7269019  
rack1
{noformat}

Installation Instructions
{noformat}
wget http://people.apache.org/~slebresne/cassandra_2.0.1_all.deb
sudo dpkg -i cassandra_2.0.1_all.deb # Error about dependencies
sudo apt-get -f install
sudo dpkg -i cassandra_2.0.1_all.deb
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-5828) add counters coverage to upgrade tests

2013-07-30 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-5828:


 Summary: add counters coverage to upgrade tests
 Key: CASSANDRA-5828
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5828
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Cathy Daw
Assignee: Daniel Meyer
Priority: Critical
 Fix For: 2.0 rc1, 1.2.9


This was identified as missing coverage when upgrading to 1.2.7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5466) Compaction task eats 100% CPU for a long time for tables with collection typed columns

2013-07-29 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722938#comment-13722938
 ] 

Cathy Daw edited comment on CASSANDRA-5466 at 7/29/13 8:34 PM:
---

@azarutin per brandon, please retest on 1.2.8


  was (Author: cdaw):
@alex per brandon, please retest on 1.2.8

  
 Compaction task eats 100% CPU for a long time for tables with collection 
 typed columns
 --

 Key: CASSANDRA-5466
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5466
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: ubuntu 12.10, sun-6-java 1.6.0.37, Core-i7, 8GB RAM
Reporter: Alexey Tereschenko
Assignee: Alex Zarutin
 Attachments: CASSANDRA-5466.txt, Cassandra_JDBC_Updater.tar.gz, 
 logs-system-cass-5466-output-30-threads-1386752-req-Default-LCS.log, 
 logs-system-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log,
  
 nodetool-compactionstats-cass-5466-output-30-threads-1386752-req-Default-LCS.log,
  
 nodetool-compactionstats-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log


 For the table:
 {code:sql}
 create table test (
 user_id bigint,
 first_list list<bigint>,
 second_list list<bigint>,
 third_list list<bigint>,
 PRIMARY KEY (user_id)
 );
 {code}
 I do thousands of updates like the following:
 {code:sql}
 UPDATE test SET first_list = [1], second_list = [2], third_list = [3] WHERE 
 user_id = ?;
 {code}
 In several minutes a compaction task starts running. {{nodetool 
 compactionstats}} shows that remaining time is 2 seconds but in fact it can 
 take hours to really complete the compaction tasks. And during that time 
 Cassandra consumes 100% of CPU and slows down so significantly that it gives 
 connection timeout exceptions to any client code trying to establish 
 connection with Cassandra. This happens only with tables with collection 
 typed columns.
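
A driver for that workload might look like the sketch below; the keyspace name ks1 and the row count are assumptions, and the schema above is expected to exist already.

{noformat}
#!/bin/sh
# Hypothetical load generator for the update pattern in this report.
OUT=/tmp/collection_updates.cql
echo "USE ks1;" > "$OUT"
i=1
while [ "$i" -le 50000 ]; do
    echo "UPDATE test SET first_list = [1], second_list = [2], third_list = [3] WHERE user_id = $i;" >> "$OUT"
    i=$((i + 1))
done
cqlsh -f "$OUT"
# then watch: nodetool compactionstats
{noformat}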

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5466) Compaction task eats 100% CPU for a long time for tables with collection typed columns

2013-07-29 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13722938#comment-13722938
 ] 

Cathy Daw commented on CASSANDRA-5466:
--

@alex per brandon, please retest on 1.2.8


 Compaction task eats 100% CPU for a long time for tables with collection 
 typed columns
 --

 Key: CASSANDRA-5466
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5466
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: ubuntu 12.10, sun-6-java 1.6.0.37, Core-i7, 8GB RAM
Reporter: Alexey Tereschenko
Assignee: Alex Zarutin
 Attachments: CASSANDRA-5466.txt, Cassandra_JDBC_Updater.tar.gz, 
 logs-system-cass-5466-output-30-threads-1386752-req-Default-LCS.log, 
 logs-system-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log,
  
 nodetool-compactionstats-cass-5466-output-30-threads-1386752-req-Default-LCS.log,
  
 nodetool-compactionstats-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log


 For the table:
 {code:sql}
 create table test (
 user_id bigint,
 first_list list<bigint>,
 second_list list<bigint>,
 third_list list<bigint>,
 PRIMARY KEY (user_id)
 );
 {code}
 I do thousands of updates like the following:
 {code:sql}
 UPDATE test SET first_list = [1], second_list = [2], third_list = [3] WHERE 
 user_id = ?;
 {code}
 In several minutes a compaction task starts running. {{nodetool 
 compactionstats}} shows that remaining time is 2 seconds but in fact it can 
 take hours to really complete the compaction tasks. And during that time 
 Cassandra consumes 100% of CPU and slows down so significantly that it gives 
 connection timeout exceptions to any client code trying to establish 
 connection with Cassandra. This happens only with tables with collection 
 typed columns.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5732) Can not query secondary index

2013-07-29 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw reassigned CASSANDRA-5732:


Assignee: Cathy Daw  (was: Alex Zarutin)

 Can not query secondary index
 -

 Key: CASSANDRA-5732
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5732
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.5
 Environment: Windows 8, Jre 1.6.0_45 32-bit
Reporter: Tony Anecito
Assignee: Cathy Daw

 Noticed that, after taking a column family that already existed, assigning 
 index_type:KEYS to an IntegerType column while caching was already set to 'ALL', the 
 prepared statement did not return rows, nor did it throw an exception. Here 
 is the sequence.
 1. Starting state: query running with caching off for a Column Family, with the 
 query using the secondary index for the WHERE clause.
 2. Set Column Family caching to ALL using Cassandra-CLI and update CQL. 
 Cassandra-cli Describe shows column family caching set to ALL
 3. Rerun query and it works.
 4. Restart Cassandra and run query and no rows returned. Cassandra-cli 
 Describe shows column family caching set to ALL
 5. Set Column Family caching to NONE using Cassandra-cli and update CQL. 
 Rerun query and no rows returned. Cassandra-cli Describe for column family 
 shows caching set to NONE.
 6. Restart Cassandra. Rerun query and it is working again. We are now back to 
 the starting state.
 Best Regards,
 -Tony

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-22 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-5689:
-

Labels: test_done  (was: )

 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Cathy Daw
Priority: Trivial
  Labels: test_done
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-20 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13714491#comment-13714491
 ] 

Cathy Daw commented on CASSANDRA-5689:
--

I'll raise a bug in the DSE bug system to debug more.

 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Cathy Daw
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-18 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713096#comment-13713096
 ] 

Cathy Daw commented on CASSANDRA-5689:
--

[~brandon.williams]
I was not able to reproduce this using the Apache Cassandra 1.2.6 packages, via 
these instructions: http://wiki.apache.org/cassandra/DebianPackaging

*First Test:* I installed, started up the server, and manually stopped the 
server:
sudo service cassandra stop

*Second Test:* start/stop a few times
sudo service cassandra start
sudo service cassandra stop

*Third Test:*
$ nodetool disablegossip
$ nodetool disablebinary
$ nodetool disablethrift
$ nodetool drain
$ sudo /etc/init.d/cassandra stop
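
For reference, the three scenarios above can be replayed with a rough script like the one below; the sleep lengths and the loop count are my own assumptions, not part of the original runs.

{noformat}
#!/bin/sh
# Sketch of the three manual scenarios above, in order.

# First test: fresh start, then a plain stop
sudo service cassandra start
sleep 30
sudo service cassandra stop

# Second test: a few start/stop cycles in a row
for i in 1 2 3; do
    sudo service cassandra start
    sleep 30
    sudo service cassandra stop
done

# Third test: disable client traffic and drain before the init-script stop
sudo service cassandra start
sleep 30
nodetool disablegossip
nodetool disablebinary
nodetool disablethrift
nodetool drain
sudo /etc/init.d/cassandra stop
{noformat}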



 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Cathy Daw
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 

[jira] [Comment Edited] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-18 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698581#comment-13698581
 ] 

Cathy Daw edited comment on CASSANDRA-5689 at 7/18/13 11:36 PM:


I was testing the debian installer for DataStax Community 1.2.6 and noticed 
first that when we install we do a startup/shutdown, and also that on a 
debian platform (not centos, rhel, mac or ubuntu) I consistently see this 
exception during that initial shutdown. Not sure if this is relevant or not, but 
I was testing on squeeze (debian 6.0.1) using this EC2 AMI: ami-75287b30.

{noformat}
ERROR [StorageServiceShutdownHook] 2013-07-03 00:35:16,628 CassandraDaemon.java 
(line 192) Exception in thread Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:321)
at 
org.apache.cassandra.service.StorageService.shutdownClientServers(StorageService.java:370)
at 
org.apache.cassandra.service.StorageService.access$000(StorageService.java:88)
at 
org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:519)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)
{noformat}

  was (Author: cdaw):
I was testing the debian installer for 1.2.6 and noticed first that when we 
install that we do a startup/shutdown, but also that on a debian platform (not 
centos, rhel, mac or ubuntu) that I consistently see this exception during that 
initial shutdown. Not sure if this is relevant or not but I was testing on 
squeeze (debian 6.0.1) using this EC2 AMI: ami-75287b30.

{noformat}
ERROR [StorageServiceShutdownHook] 2013-07-03 00:35:16,628 CassandraDaemon.java 
(line 192) Exception in thread Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:321)
at 
org.apache.cassandra.service.StorageService.shutdownClientServers(StorageService.java:370)
at 
org.apache.cassandra.service.StorageService.access$000(StorageService.java:88)
at 
org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:519)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)
{noformat}
  
 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Cathy Daw
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 

[jira] [Assigned] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-15 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw reassigned CASSANDRA-5689:


Assignee: Cathy Daw  (was: Alex Zarutin)

 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Cathy Daw
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5695) Convert pig smoke tests into real PigUnit tests

2013-07-15 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw reassigned CASSANDRA-5695:


Assignee: Alex Zarutin  (was: Ryan McGuire)

 Convert pig smoke tests into real PigUnit tests
 ---

 Key: CASSANDRA-5695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5695
 Project: Cassandra
  Issue Type: Test
  Components: Hadoop
Reporter: Brandon Williams
Assignee: Alex Zarutin
Priority: Minor
 Fix For: 1.2.7


 Currently, we have some ghetto pig tests in examples/pig/test, but there's 
 currently no way to continuously integrate these since a human needs to check 
 that the output isn't wrong, not just that the tests ran successfully.  We've 
 had garbled output problems in the past, so it would be nice to formalize our 
 tests to catch this.  PigUnit appears to be a good choice for this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-02 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698581#comment-13698581
 ] 

Cathy Daw commented on CASSANDRA-5689:
--

I was testing the debian installer for 1.2.6 and noticed first that when we 
install we do a startup/shutdown, and also that on a debian platform (not 
centos, rhel, mac or ubuntu) I consistently see this exception during that 
initial shutdown. Not sure if this is relevant or not, but I was testing on 
squeeze (debian 6.0.1) using this EC2 AMI: ami-75287b30.

{noformat}
ERROR [StorageServiceShutdownHook] 2013-07-03 00:35:16,628 CassandraDaemon.java 
(line 192) Exception in thread Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:321)
at 
org.apache.cassandra.service.StorageService.shutdownClientServers(StorageService.java:370)
at 
org.apache.cassandra.service.StorageService.access$000(StorageService.java:88)
at 
org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:519)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)
{noformat}

 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Alex Zarutin
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 

[jira] [Commented] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-02 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698608#comment-13698608
 ] 

Cathy Daw commented on CASSANDRA-5689:
--

FWIW, I hadn't seen it testing the 1.2.5 installer either.

 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Alex Zarutin
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5689) NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)

2013-07-02 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698608#comment-13698608
 ] 

Cathy Daw edited comment on CASSANDRA-5689 at 7/3/13 5:09 AM:
--

FWIW, I hadn't seen it testing the 1.2.5 installer either.  In my tests, I have 
been using java 1.6_0.45.

  was (Author: cdaw):
FWIW, I hadn't seen it testing the 1.2.5 installer either.
  
 NPE shutting down Cassandra trunk (cassandra-1.2.5-989-g70dfb70)
 

 Key: CASSANDRA-5689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5689
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0
 Environment: Ubuntu Precise with Oracle Java 7u25.
Reporter: Blair Zajac
Assignee: Alex Zarutin
Priority: Trivial
 Attachments: CASSANDRA-5689.txt, init1, init2, init3


 I built Cassandra from git trunk at cassandra-1.2.5-989-g70dfb70 using the 
 debian/ package.  I have a shell script to shut down Cassandra:
 {code}
   $nodetool disablegossip
   sleep 5
   $nodetool disablebinary
   $nodetool disablethrift
   $nodetool drain
   /etc/init.d/cassandra stop
 {code}
 Shutting it down I get this exception on all three nodes:
 {code}
 Exception in thread "main" java.lang.NullPointerException
   at org.apache.cassandra.transport.Server.close(Server.java:156)
   at org.apache.cassandra.transport.Server.stop(Server.java:107)
   at 
 org.apache.cassandra.service.StorageService.stopNativeTransport(StorageService.java:347)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at java.security.AccessController.doPrivileged(Native Method)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {code}

--

[jira] [Updated] (CASSANDRA-5661) Discard pooled readers for cold data

2013-06-25 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-5661:
-

Tester: dmeyer

 Discard pooled readers for cold data
 

 Key: CASSANDRA-5661
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Jonathan Ellis
Assignee: Pavel Yaskevich
 Fix For: 1.2.7


 Reader pooling was introduced in CASSANDRA-4942 but pooled 
 RandomAccessReaders are never cleaned up until the SSTableReader is closed.  
 So memory use is the worst case simultaneous RAR we had open for this file, 
 forever.
 We should introduce a global limit on how much memory to use for RAR, and 
 evict old ones.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5692) Race condition in detecting version on a mixed 1.1/1.2 cluster

2013-06-23 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13691630#comment-13691630
 ] 

Cathy Daw commented on CASSANDRA-5692:
--

[~sbtourist]
I can apply patch 001 or 004 successfully, but I am not able to apply 001 + 004.   
Do I need both?

{noformat}
# via git apply
$ git reset --hard HEAD
HEAD is now at 0f7c7dc Merge pull request #15 from riptano/CASSANDRA-5476

$ git apply --whitespace=fix 5692-0001.patch 5692-0004.patch
5692-0001.patch:37: trailing whitespace.

5692-0004.patch:89: trailing whitespace.

5692-0004.patch:133: trailing whitespace.

5692-0004.patch:138: trailing whitespace.

5692-0004.patch:143: trailing whitespace.

error: patch failed: 
src/java/org/apache/cassandra/net/OutboundTcpConnection.java:278
error: src/java/org/apache/cassandra/net/OutboundTcpConnection.java: patch does 
not apply

# via unix patch
$ git reset --hard HEAD
HEAD is now at 0f7c7dc Merge pull request #15 from riptano/CASSANDRA-5476

$ patch -l -p1 < 5692-0001.patch
patching file src/java/org/apache/cassandra/net/OutboundTcpConnection.java

$ patch -l -p1 < 5692-0004.patch
patching file src/java/org/apache/cassandra/net/MessagingService.java
patching file src/java/org/apache/cassandra/net/OutboundTcpConnection.java
Hunk #1 FAILED at 278.
1 out of 1 hunk FAILED -- saving rejects to file 
src/java/org/apache/cassandra/net/OutboundTcpConnection.java.rej

$ cat src/java/org/apache/cassandra/net/OutboundTcpConnection.java.rej
***
*** 278,284 
  if (logger.isDebugEnabled())
  logger.debug("attempting to connect to " + 
poolReference.endPoint());

- targetVersion = 
MessagingService.instance().getVersion(poolReference.endPoint());

  long start = System.currentTimeMillis();
  while (System.currentTimeMillis() < start + 
DatabaseDescriptor.getRpcTimeout())
--- 278,284 
  if (logger.isDebugEnabled())
  logger.debug("attempting to connect to " + 
poolReference.endPoint());

+ targetVersion = 
MessagingService.instance().getVersion(poolReference.endPoint(), true);

  long start = System.currentTimeMillis();
  while (System.currentTimeMillis() < start + 
DatabaseDescriptor.getRpcTimeout())
{noformat} 
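
If anyone wants to retry this, one way to sequence the patches is sketched below; the intermediate commit and the --3way fallback are workflow assumptions on my part, and the conflicting hunk in OutboundTcpConnection.java may still need a manual merge.

{noformat}
#!/bin/sh
# Sketch: apply the patches one at a time so the second is checked
# against the tree produced by the first, falling back to a 3-way merge.
git reset --hard HEAD
git apply --whitespace=fix 5692-0001.patch
git commit -am "WIP: apply 5692-0001"
git apply --check 5692-0004.patch \
  || git apply --3way 5692-0004.patch \
  || echo "0004 still conflicts; merge OutboundTcpConnection.java by hand"
{noformat}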

 Race condition in detecting version on a mixed 1.1/1.2 cluster
 --

 Key: CASSANDRA-5692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5692
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.9, 1.2.5
Reporter: Sergio Bossa
Priority: Minor
 Attachments: 5692-0001.patch, 5692-0004.patch


 On a mixed 1.1 / 1.2 cluster, starting 1.2 nodes sometimes fires a race 
 condition in version detection, where the 1.2 node wrongly detects version 6 
 for a 1.1 node.
 It works as follows:
 1) The just started 1.2 node quickly opens an OutboundTcpConnection toward a 
 1.1 node before receiving any messages from the latter.
 2) Given the version is correctly detected only when the first message is 
 received, the version is momentarily set at 6.
 3) This opens an OutboundTcpConnection from 1.2 to 1.1 at version 6, which 
 gets stuck in the connect() method.
 Later, the version is correctly fixed, but all outbound connections from 1.2 
 to 1.1 are stuck at this point.
 Evidence from 1.2 logs:
 TRACE 13:48:31,133 Assuming current protocol version for /127.0.0.2
 DEBUG 13:48:37,837 Setting version 5 for /127.0.0.2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5524) Allow upgradesstables to be run against a specified directory

2013-06-21 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-5524:
-

Tester: alexzar  (was: enigmacurry)

 Allow upgradesstables to be run against a specified directory
 -

 Key: CASSANDRA-5524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5524
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Nick Bailey
Priority: Minor
 Fix For: 1.2.6

 Attachments: 0001-Add-a-snapshot-upgrade-tool.patch, 
 0002-Rename-snapshotupgrade-to-sstableupgrade.patch, 
 0003-Update-NEWS.txt-and-debian-scripts.patch


 Currently, upgradesstables only modifies live SSTables.  Because 
 sstableloader cannot stream old SSTable formats, this makes it difficult to 
 restore data from a snapshot taken in a previous major version of Cassandra.
 Allowing the user to specify a directory for upgradesstables would resolve 
 this, but it may also be nice to upgrade SSTables in snapshot directories 
 automatically or with a separate flag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5670) running compact on an index did not compact two index files into one

2013-06-19 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-5670:


 Summary: running compact on an index did not compact two index 
files into one
 Key: CASSANDRA-5670
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5670
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.5
Reporter: Cathy Daw
Priority: Minor


With a data directory containing secondary index files ending in -1 and -2, I 
expected that when I ran compact against the index they would compact down 
to a set of -3 files.  This column family uses SizeTieredCompactionStrategy.

Using our standard CQL example, the compact command used was: 
$ ./nodetool compact test1 test1-playlists.playlists_artist_idx

Please note: reproducing this test on 1.1.12 (using a single primary key), you 
will see that running compact on the keyspace also does not compact the index 
file.  There is no option to compact the index, so I could not compare that.
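
The key commands from the transcript below, condensed into one sketch; only the trailing ls filter is my addition.

{noformat}
nodetool flush test1
nodetool compact test1 test1-playlists.playlists_artist_idx
ls /var/lib/cassandra/data/test1/playlists | grep playlists_artist_idx
# expected: the -1 and -2 index sstables replaced by a single -3 generation
{noformat}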

{noformat}
CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 
'replication_factor':1};

use test1;

CREATE TABLE playlists (
  id uuid,
  song_order int,
  song_id uuid,
  title text,
  album text,
  artist text,
  PRIMARY KEY  (id, song_order ) );

INSERT INTO playlists (id, song_order, song_id, title, artist, album)
  VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1,
  a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres');

select * from playlists;

=
./nodetool flush test1

$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db 
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db  
test1-playlists-ic-1-Index.db   
test1-playlists-ic-1-Statistics.db  
test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt

test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db

test1-playlists.playlists_artist_idx-ic-1-Data.db
test1-playlists.playlists_artist_idx-ic-1-Filter.db
test1-playlists.playlists_artist_idx-ic-1-Index.db
test1-playlists.playlists_artist_idx-ic-1-Statistics.db
test1-playlists.playlists_artist_idx-ic-1-Summary.db
test1-playlists.playlists_artist_idx-ic-1-TOC.txt

=

CREATE INDEX ON playlists(artist );
select * from playlists;
select * from playlists where artist = 'ZZ Top';

=
$ ./nodetool flush test1

$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db 
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db  
test1-playlists-ic-1-Index.db   
test1-playlists-ic-1-Statistics.db  
test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt

test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-1-Data.db
test1-playlists.playlists_artist_idx-ic-1-Filter.db
test1-playlists.playlists_artist_idx-ic-1-Index.db
test1-playlists.playlists_artist_idx-ic-1-Statistics.db
test1-playlists.playlists_artist_idx-ic-1-Summary.db
test1-playlists.playlists_artist_idx-ic-1-TOC.txt

=

delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 
and song_order = 1;
select * from playlists;
select * from playlists where artist = 'ZZ Top';

=
$ ./nodetool flush test1

$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db 
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db  
test1-playlists-ic-1-Index.db   
test1-playlists-ic-1-Statistics.db  
test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt
test1-playlists-ic-2-CompressionInfo.db 
test1-playlists-ic-2-Data.db
test1-playlists-ic-2-Filter.db  
test1-playlists-ic-2-Index.db   
test1-playlists-ic-2-Statistics.db  
test1-playlists-ic-2-Summary.db 
test1-playlists-ic-2-TOC.txt

test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-1-Data.db
test1-playlists.playlists_artist_idx-ic-1-Filter.db
test1-playlists.playlists_artist_idx-ic-1-Index.db

[jira] [Updated] (CASSANDRA-5670) running compact on an index did not compact two index files into one

2013-06-19 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-5670:
-

Description: 
With a data directory containing secondary index files ending in -1 and -2, I 
expected that when I ran compact against the index they would compact down 
to a set of -3 files.  This column family uses SizeTieredCompactionStrategy.

Using our standard CQL example, the compact command used was: 
$ ./nodetool compact test1 test1-playlists.playlists_artist_idx

Please note: reproducing this test on 1.1.12 (using a single primary key), you 
will see that running compact on the keyspace also does not compact the index 
file.  There is no option to compact the index, so I could not compare that.

{noformat}
CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 
'replication_factor':1};

use test1;

CREATE TABLE playlists (
  id uuid,
  song_order int,
  song_id uuid,
  title text,
  album text,
  artist text,
  PRIMARY KEY  (id, song_order ) );

INSERT INTO playlists (id, song_order, song_id, title, artist, album)
  VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1,
  a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres');

select * from playlists;

=
./nodetool flush test1

$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db 
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db  
test1-playlists-ic-1-Index.db   
test1-playlists-ic-1-Statistics.db  
test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt

=

CREATE INDEX ON playlists(artist );
select * from playlists;
select * from playlists where artist = 'ZZ Top';

=
$ ./nodetool flush test1

$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db 
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db  
test1-playlists-ic-1-Index.db   
test1-playlists-ic-1-Statistics.db  
test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt

test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-1-Data.db
test1-playlists.playlists_artist_idx-ic-1-Filter.db
test1-playlists.playlists_artist_idx-ic-1-Index.db
test1-playlists.playlists_artist_idx-ic-1-Statistics.db
test1-playlists.playlists_artist_idx-ic-1-Summary.db
test1-playlists.playlists_artist_idx-ic-1-TOC.txt

=

delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 
and song_order = 1;
select * from playlists;
select * from playlists where artist = 'ZZ Top';

=
$ ./nodetool flush test1

$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db 
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db  
test1-playlists-ic-1-Index.db   
test1-playlists-ic-1-Statistics.db  
test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt
test1-playlists-ic-2-CompressionInfo.db 
test1-playlists-ic-2-Data.db
test1-playlists-ic-2-Filter.db  
test1-playlists-ic-2-Index.db   
test1-playlists-ic-2-Statistics.db  
test1-playlists-ic-2-Summary.db 
test1-playlists-ic-2-TOC.txt

test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-1-Data.db
test1-playlists.playlists_artist_idx-ic-1-Filter.db
test1-playlists.playlists_artist_idx-ic-1-Index.db
test1-playlists.playlists_artist_idx-ic-1-Statistics.db
test1-playlists.playlists_artist_idx-ic-1-Summary.db
test1-playlists.playlists_artist_idx-ic-1-TOC.txt
test1-playlists.playlists_artist_idx-ic-2-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-2-Data.db
test1-playlists.playlists_artist_idx-ic-2-Filter.db
test1-playlists.playlists_artist_idx-ic-2-Index.db
test1-playlists.playlists_artist_idx-ic-2-Statistics.db
test1-playlists.playlists_artist_idx-ic-2-Summary.db
test1-playlists.playlists_artist_idx-ic-2-TOC.txt

=

./nodetool compact test1

$ ls /var/lib/cassandra/data/test1/playlists

[jira] [Updated] (CASSANDRA-5157) mac friendly cassandra-env.sh

2013-01-14 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-5157:
-

Issue Type: Task  (was: Bug)

 mac friendly cassandra-env.sh
 -

 Key: CASSANDRA-5157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5157
 Project: Cassandra
  Issue Type: Task
  Components: Config
Reporter: Cathy Daw
Priority: Trivial

 By default my Mac launches with 1024mb, but this fix will allow it to start up 
 at 25% of total memory.
 {code}
 Darwin)
 system_memory_in_bytes=`sysctl hw.memsize | awk '{print $2}'`
 system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
 system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
 ;;
 {code}
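
For context, a minimal sketch of how the values computed above would typically feed 
heap sizing in cassandra-env.sh; the max(min(1/2 RAM, 1024MB), min(1/4 RAM, 8192MB)) 
rule is assumed from the stock script of that era and may differ between versions:

{code}
# Sketch only: the heap heuristic below is an assumption based on the stock
# cassandra-env.sh; the sysctl lines mirror the Darwin case from the patch above.
system_memory_in_bytes=`sysctl hw.memsize | awk '{print $2}'`
system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`

half_system_memory_in_mb=`expr $system_memory_in_mb / 2`
quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2`
if [ "$half_system_memory_in_mb" -gt "1024" ]; then half_system_memory_in_mb="1024"; fi
if [ "$quarter_system_memory_in_mb" -gt "8192" ]; then quarter_system_memory_in_mb="8192"; fi
if [ "$half_system_memory_in_mb" -gt "$quarter_system_memory_in_mb" ]; then
    MAX_HEAP_SIZE="${half_system_memory_in_mb}M"
else
    MAX_HEAP_SIZE="${quarter_system_memory_in_mb}M"
fi
echo "MAX_HEAP_SIZE=$MAX_HEAP_SIZE"   # 2048M on an 8GB Mac, i.e. the 25% of total memory noted above
{code}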

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5157) mac friendly cassandra-env.sh

2013-01-14 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-5157:


 Summary: mac friendly cassandra-env.sh
 Key: CASSANDRA-5157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5157
 Project: Cassandra
  Issue Type: Bug
  Components: Config
Reporter: Cathy Daw
Priority: Trivial


By default my Mac launches with 1024mb, but this fix will allow it to start up 
at 25% of total memory.
{code}
Darwin)
system_memory_in_bytes=`sysctl hw.memsize | awk '{print $2}'`
system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
;;
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5088) Major compaction IOException in 1.1.8

2012-12-26 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539708#comment-13539708
 ] 

Cathy Daw commented on CASSANDRA-5088:
--

Results from testing v2:
* We originally reproduced this with a forked/modified version of C* 1.1.8 
dropped in to DSE.  When we dropped in the C* 1.1.8 jar file from the apache 
download, we were also able to reproduce it.
* Post-upgrade exception running:  list Stocks
* Post-upgrade exception running: nodetool upgradesstables
* Post-upgrade no errors running: nodetool compact or nodetool scrub (sketched below)

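A shell sketch of those post-upgrade checks; the exact invocations are assumptions, 
only the outcomes listed above were reported:

{code}
# Assumed command forms for the checks listed above.
bin/nodetool -h localhost upgradesstables   # fails with java.io.IOError: Bad file descriptor
bin/nodetool -h localhost compact           # completes without error
bin/nodetool -h localhost scrub             # completes without error
# reading the pre-upgrade column family from cassandra-cli ("list Stocks") hits the same IOError
{code}
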
{code}
ERROR [Thrift:3] 2012-12-26 13:42:25,343 AbstractCassandraDaemon.java (line 
135) Exception in thread Thread[Thrift:3,5,main]
java.io.IOError: java.io.IOException: Bad file descriptor
at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
at 
org.apache.cassandra.db.ColumnFamilyStore$2.close(ColumnFamilyStore.java:1411)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1490)
at 
org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1435)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:50)
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:876)
at 
org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:703)
at 
com.datastax.bdp.server.DseServer.get_range_slices(DseServer.java:1087)
at 
org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3083)
at 
org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3071)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at 
com.datastax.bdp.transport.server.ClientSocketAwareProcessor.process(ClientSocketAwareProcessor.java:43)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:192)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.io.IOException: Bad file descriptor
at sun.nio.ch.FileDispatcher.preClose0(Native Method)
at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
at 
java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
at 
org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
at 
org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:89)
at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:61)
{code}


 Major compaction IOException in 1.1.8
 -

 Key: CASSANDRA-5088
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5088
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8
Reporter: Karl Mueller
 Attachments: 5088.txt, 5088-v2.txt


 Upgraded 1.1.6 to 1.1.8.
 Now I'm trying to do a major compaction, and seeing this:
 ERROR [CompactionExecutor:129] 2012-12-22 10:33:44,217 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[CompactionExecutor:129,1,RMI Runtime]
 java.io.IOError: java.io.IOException: Bad file descriptor
 at 
 org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:195)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:298)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 Caused by: java.io.IOException: Bad file descriptor
 at sun.nio.ch.FileDispatcher.preClose0(Native Method)
 at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
 

[jira] [Commented] (CASSANDRA-5088) Major compaction IOException in 1.1.8

2012-12-26 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539719#comment-13539719
 ] 

Cathy Daw commented on CASSANDRA-5088:
--

[~jbellis]
Third time is the charm: the last patch worked.

 Major compaction IOException in 1.1.8
 -

 Key: CASSANDRA-5088
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5088
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8
Reporter: Karl Mueller
 Attachments: 5088.txt, 5088-v2.txt, 5088-v3.txt


 Upgraded 1.1.6 to 1.1.8.
 Now I'm trying to do a major compaction, and seeing this:
 ERROR [CompactionExecutor:129] 2012-12-22 10:33:44,217 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[CompactionExecutor:129,1,RMI Runtime]
 java.io.IOError: java.io.IOException: Bad file descriptor
 at 
 org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:195)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:298)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 Caused by: java.io.IOException: Bad file descriptor
 at sun.nio.ch.FileDispatcher.preClose0(Native Method)
 at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
 at 
 sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
 at 
 java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
 at java.io.FileInputStream.close(FileInputStream.java:258)
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
 at 
 sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
 at 
 java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
 at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:89)
 at 
 org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:61)
 ... 9 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5088) Major compaction IOException in 1.1.8

2012-12-25 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539470#comment-13539470
 ] 

Cathy Daw commented on CASSANDRA-5088:
--

I am able to consistently reproduce this running upgrade scenarios for DataStax 
Enterprise (basically C* 1.1.6 to C* 1.1.8).
* I can't reproduce this going from vanilla C* 1.1.6 to C* 1.1.8 using 
cassandra-stress
* I can reproduce this on my mac using DSE.  Java version is: 1.6.0_24
* I can't reproduce this on ubuntu precise 64-bit using Java 1.6.0_31

*Pre-Upgrade: run on DSE 2.2.1 / Cassandra 1.1.6*
{code}
~/dse-2.2.1/demos/portfolio_manager/bin/pricer -o INSERT_PRICES
~/dse-2.2.1/demos/portfolio_manager/bin/pricer -o UPDATE_PORTFOLIOS
~/dse-2.2.1/demos/portfolio_manager/bin/pricer -o INSERT_HISTORICAL_PRICES -n 
100
~/dse-2.2.1/bin/dse  hive -f ~/dse-2.2.1/demos/portfolio_manager/10_day_loss.q
~/dse-2.2.1/bin/nodetool drain
sudo pkill -9 java

# then restart using C* 1.1.8
{code}

+Below are the different related errors+


*Post-Upgrade: read CF created pre-upgrade*
{code}
ERROR [Thrift:3] 2012-12-25 18:53:22,139 AbstractCassandraDaemon.java (line 
135) Exception in thread Thread[Thrift:3,5,main]
java.io.IOError: java.io.IOException: Bad file descriptor
at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
at 
org.apache.cassandra.db.ColumnFamilyStore$2.close(ColumnFamilyStore.java:1411)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1490)
at 
org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1435)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:50)
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:876)
at 
org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:705)
at 
com.datastax.bdp.server.DseServer.get_range_slices(DseServer.java:1087)
at 
org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3083)
at 
org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3071)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at 
com.datastax.bdp.transport.server.ClientSocketAwareProcessor.process(ClientSocketAwareProcessor.java:43)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:192)
{code}

*Post-Upgrade: running upgradesstables*
{code}
Error occured while upgrading the sstables for keyspace HiveMetaStore
java.util.concurrent.ExecutionException: java.io.IOError: java.io.IOException: 
Bad file descriptor
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:226)
at 
org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:242)
at 
org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:983)
at 
org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:1789)
{code}

*Post-Upgrade: running nodetool scrub*
{code}
WARN [CompactionExecutor:23] 2012-12-25 14:42:50,024 FileUtils.java (line 116) 
Failed closing /var/lib/cassandra/data/cfs/inode/cfs-inode-hf-1-Data.db - chunk 
length 65536, data length 48193.
java.io.IOException: Bad file descriptor
at sun.nio.ch.FileDispatcher.preClose0(Native Method)
at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
at 
java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
at java.io.FileInputStream.close(FileInputStream.java:258)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
at 
java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
at 
org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
at 
org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
at org.apache.cassandra.db.compaction.Scrubber.close(Scrubber.java:306)
at 

[jira] [Commented] (CASSANDRA-4816) Broken get_paged_slice

2012-10-24 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13483024#comment-13483024
 ] 

Cathy Daw commented on CASSANDRA-4816:
--

+1 this version fixes my tests as well.

 Broken get_paged_slice 
 ---

 Key: CASSANDRA-4816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4816
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.6
Reporter: Piotr Kołaczkowski
Assignee: Sylvain Lebresne
Priority: Blocker
 Fix For: 1.1.7

 Attachments: 4816-2.txt, 4816-3.txt


 get_paged_slice doesn't reset the start column filter for the second returned 
 row sometimes. So instead of getting a slice:
 row 0: start_column...last_column_in_row
 row 1: first column in a row...last_column_in_row
 row 2: first column in a row...
 you sometimes get:
 row 0: start_column...last_column_in_row
 row 1: start_column...last_column_in_row
 row 2: first column in a row...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4698) Keyspace disappears when upgrading node from cassandra-1.1.1 to cassandra-1.1.5

2012-10-02 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13462180#comment-13462180
 ] 

Cathy Daw edited comment on CASSANDRA-4698 at 10/3/12 7:43 AM:
---

[~tpatterson]
Can you try this test out and let me know if the keyspace reappears?

* In 1.1.1, show keyspaces to get schema definitions
* Upgrade to 1.1.5
* Recreate table definitions:
** rm data/system/system_schema*
** restart
** CREATE KEYSPACE + CREATE TABLE as necessary

  was (Author: cdaw):
[~tpatterson]
Can you try this test out and let me know if the keyspace reappears?

* In 1.1.1, show keyspaces to get schema disagreements
* Upgrade to 1.1.5
* Recreate table definitions:
** rm data/system/system_schema*
** restart
** CREATE KEYSPACE + CREATE TABLE as necessary
  
 Keyspace disappears when upgrading node from cassandra-1.1.1 to 
 cassandra-1.1.5
 ---

 Key: CASSANDRA-4698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4698
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: ubuntu. JNA not installed.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Fix For: 1.1.6

 Attachments: CASSANDRA-4698.patch, start_1.1.1_system.log, 
 start_1.1.5_system.log


 Here is how I got the problem to happen:
 1. Get this zipped data directory (about 33Mb):
   scp cass@50.57.69.32:/home/cass/cassandra.zip ./ (password cass)
 2. Unzip it in /var/lib/
 3. clone the cassandra git repo
 4. git checkout cassandra-1.1.1; ant jar;
 5. bin/cassandra 
 6. Run cqlsh -3, then DESC COLUMNFAMILIES; Note the presence of Keyspace 
 performance_tests
 7. pkill -f cassandra; git checkout cassandra-1.1.5; ant realclean; ant jar;
 8. bin/cassandra
 9. Run cqlsh -3, then DESC COLUMNFAMILIES; Note that there is no 
 performance_tests keyspace

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4698) Keyspace disappears when upgrading node from cassandra-1.1.1 to cassandra-1.1.5

2012-09-24 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13462180#comment-13462180
 ] 

Cathy Daw commented on CASSANDRA-4698:
--

[~tpatterson]
Can you try this test out and let me know if the keyspace reappears?

* In 1.1.1, show keyspaces to get schema disagreements
* Upgrade to 1.1.5
* Recreate table definitions:
** rm data/system/system_schema*
** restart
** CREATE KEYSPACE + CREATE TABLE as necessary

 Keyspace disappears when upgrading node from cassandra-1.1.1 to 
 cassandra-1.1.5
 ---

 Key: CASSANDRA-4698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4698
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: ubuntu. JNA not installed.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Fix For: 1.1.6

 Attachments: start_1.1.1_system.log, start_1.1.5_system.log


 Here is how I got the problem to happen:
 1. Get this zipped data directory (about 33Mb):
   scp cass@50.57.69.32:/home/cass/cassandra.zip ./ (password cass)
 2. Unzip it in /var/lib/
 3. clone the cassandra git repo
 4. git checkout cassandra-1.1.1; ant jar;
 5. bin/cassandra 
 6. Run cqlsh -3, then DESC COLUMNFAMILIES; Note the presence of Keyspace 
 performance_tests
 7. pkill -f cassandra; git checkout cassandra-1.1.5; ant realclean; ant jar;
 8. bin/cassandra
 9. Run cqlsh -3, then DESC COLUMNFAMILIES; Note that there is no 
 performance_tests keyspace

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4633) cassandra-stress: --enable-cql does not work with COUNTER_ADD

2012-09-07 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-4633:


 Summary: cassandra-stress:  --enable-cql does not work with 
COUNTER_ADD
 Key: CASSANDRA-4633
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4633
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Cathy Daw
Priority: Minor


When I remove --enable-cql the following runs successfully.
Note:  INSERT/READ are fine.

{code}
./cassandra-stress --operation=COUNTER_ADD --enable-cql --replication-factor=3 
--consistency-level=ONE --num-keys=1  --columns=20 

total,interval_op_rate,interval_key_rate,avg_latency,elapsed_time
Operation [1] retried 10 times - error incrementing key 0001 
((InvalidRequestException): cannot parse 'C58' as hex bytes)

Operation [0] retried 10 times - error incrementing key  
((InvalidRequestException): cannot parse 'C58' as hex bytes)

0,0,0,NaN,0
FAILURE
{code}
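
For reference, the control invocation implied by the description (the same parameters 
with --enable-cql removed), which reportedly runs successfully:

{code}
# Same flags as the failing run above, minus --enable-cql (per the description);
# only the presence of that flag is significant here.
./cassandra-stress --operation=COUNTER_ADD --replication-factor=3 \
  --consistency-level=ONE --num-keys=1  --columns=20
{code}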

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-14 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433944#comment-13433944
 ] 

Cathy Daw commented on CASSANDRA-4538:
--

I tried lots of permutations and could not reproduce.
Can you verify whether this is consistently reproducible for you?
Here are my repro tests:

{code}
// Test Setup
* Modify: InsertThread.java to change host IP address
* Run: mvn install
* Start: cassandra 1.1.4

// Test Run
* Test Setup:  create / modify KS and CF below
* Run test: mvn exec:java -Dexec.mainClass=com.test.CreateTestData

// *** cassandra-cli ***
create keyspace ST with
  placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
  and strategy_options = {replication_factor:1};
  
use ST;

// Test #1: SizeTieredCompactionStrategy
create column family company;

// Test #2: SizeTieredCompactionStrategy and 1mb sstables
drop column family company;
create column family company with 
and compaction_strategy=SizeTieredCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 1};

// Test #3: SizeTieredCompactionStrategy and 100mb sstables
drop column family company;
create column family company with 
and compaction_strategy=SizeTieredCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};


// Test #4: LeveledCompactionStrategy and 10mb sstables
drop column family company;
create column family company 
and compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 10};

// Test #5: LeveledCompactionStrategy and 1mb sstables
drop column family company;
create column family company 
and compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 1};

// Test #6: LeveledCompactionStrategy and 100mb sstables
drop column family company;
create column family company 
and compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};

// ADDITIONAL TESTS VIA JAVA STRESS
[default@ST] drop keyspace Keyspace1;
./cassandra-stress --operation=INSERT --num-keys=10 
--num-different-keys=2 --columns=2 --threads=2 
--compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy 
--column-size=2
./cassandra-stress --operation=READ --num-keys=10 
--num-different-keys=2 --columns=2 --threads=2 
--compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy 
--column-size=2


// Destructive test: check nodetool -h localhost compactionstats and run the 
following while there are pending compactions
./cassandra-stress --operation=INSERT --num-keys=1000 --num-different-keys=100 
--columns=2 --threads=2 --compression=SnappyCompressor 
--compaction-strategy=LeveledCompactionStrategy --column-size=2

// Tried with SizeTieredCompactionStrategy
[default@ST] drop keyspace Keyspace1;
./cassandra-stress --operation=INSERT --num-keys=6 
--num-different-keys=2 --columns=2 --compression=SnappyCompressor 
--compaction-strategy=SizeTieredCompactionStrategy --column-size=2
./cassandra-stress --operation=READ --num-keys=6 --num-different-keys=2 
--columns=2 --compression=SnappyCompressor 
--compaction-strategy=SizeTieredCompactionStrategy --column-size=2

// Destructive test: check nodetool -h localhost compactionstats and kill the 
c* server while compactions are in progress and then restart

{code}

 Strange CorruptedBlockException when massive insert binary data
 ---

 Key: CASSANDRA-4538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4538
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
 Environment: Debian sequeeze 32bit
Reporter: Tommy Cheng
Priority: Critical
  Labels: CorruptedBlockException, binary, insert
 Attachments: cassandra-stresstest.zip


 After inserting ~ 1 records, here is the error log
  INFO 10:53:33,543 Compacted to 
 [/var/lib/cassandra/data/ST/company/ST-company.company_acct_no_idx-he-13-Data.db,].
   407,681 to 409,133 (~100% of original) bytes for 9,250 keys at 
 0.715926MB/s.  Time: 545ms.
 ERROR 10:53:35,445 Exception in thread Thread[CompactionExecutor:3,1,main]
 java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: 
 (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption 
 detected, chunk at 7530128 of length 19575.
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:99)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
 at 
 

[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-14 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13434333#comment-13434333
 ] 

Cathy Daw commented on CASSANDRA-4538:
--

I tried to reproduce on a 32-bit debian squeeze medium instance on EC2 and 
could not get the error.  I wonder if you are dealing with a permanently 
corrupted sstable as the result of an intermittent bug.  Can you drop this column 
family and keyspace, recreate them, and then re-run the test?  Can you also paste 
the DDL used to create the column family?

 Strange CorruptedBlockException when massive insert binary data
 ---

 Key: CASSANDRA-4538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4538
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
 Environment: Debian sequeeze 32bit
Reporter: Tommy Cheng
Priority: Critical
  Labels: CorruptedBlockException, binary, insert
 Attachments: cassandra-stresstest.zip


 After inserting ~ 1 records, here is the error log
  INFO 10:53:33,543 Compacted to 
 [/var/lib/cassandra/data/ST/company/ST-company.company_acct_no_idx-he-13-Data.db,].
   407,681 to 409,133 (~100% of original) bytes for 9,250 keys at 
 0.715926MB/s.  Time: 545ms.
 ERROR 10:53:35,445 Exception in thread Thread[CompactionExecutor:3,1,main]
 java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: 
 (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption 
 detected, chunk at 7530128 of length 19575.
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:99)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: 
 (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption 
 detected, chunk at 7530128 of length 19575.
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
 at java.io.RandomAccessFile.readFully(RandomAccessFile.java:397)
 at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
 at 
 org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
 at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
 at 
 

[jira] [Created] (CASSANDRA-4459) pig driver casts ints as bytearray

2012-07-23 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-4459:


 Summary: pig driver casts ints as bytearray
 Key: CASSANDRA-4459
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4459
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 1.1.2 embedded in DSE
Reporter: Cathy Daw
Assignee: Brandon Williams


We seem to be auto-mapping C* int columns to bytearray in Pig, and further down 
I can't find a way to cast that to int and do an average.

{code}

grunt cassandra_users = LOAD 'cassandra://cqldb/users' USING 
CassandraStorage();
grunt dump cassandra_users;
(bobhatter,(act,22),(fname,bob),(gender,m),(highSchool,Cal 
High),(lname,hatter),(sat,500),(state,CA),{})
(alicesmith,(act,27),(fname,alice),(gender,f),(highSchool,Tuscon 
High),(lname,smith),(sat,650),(state,AZ),{})
 
// notice sat and act columns are bytearray values 
grunt describe cassandra_users;
cassandra_users: {key: chararray,act: (name: chararray,value: bytearray),fname: 
(name: chararray,value: chararray),
gender: (name: chararray,value: chararray),highSchool: (name: chararray,value: 
chararray),lname: (name: chararray,value: chararray),
sat: (name: chararray,value: bytearray),state: (name: chararray,value: 
chararray),columns: {(name: chararray,value: chararray)}}

grunt users_by_state = GROUP cassandra_users BY state;
grunt dump users_by_state;
((state,AX),{(aoakley,(highSchool,Phoenix 
High),(lname,Oakley),state,(act,22),(sat,500),(gender,m),(fname,Anne),{})})
((state,AZ),{(gjames,(highSchool,Tuscon 
High),(lname,James),state,(act,24),(sat,650),(gender,f),(fname,Geronomo),{})})
((state,CA),{(philton,(highSchool,Beverly 
High),(lname,Hilton),state,(act,37),(sat,220),(gender,m),(fname,Paris),{}),(jbrown,(highSchool,Cal
 High),(lname,Brown),state,(act,20),(sat,700),(gender,m),(fname,Jerry),{})})

// Error - use explicit cast
grunt user_avg = FOREACH users_by_state GENERATE cassandra_users.state, 
AVG(cassandra_users.sat);
grunt dump user_avg;
2012-07-22 17:15:04,361 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 
1045: Could not infer the matching function for org.apache.pig.builtin.AVG as 
multiple or none of them fit. Please use an explicit cast.

// Unable to cast as int
grunt user_avg = FOREACH users_by_state GENERATE cassandra_users.state, 
AVG((int)cassandra_users.sat);
grunt dump user_avg;
2012-07-22 17:07:39,217 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 
1052: Cannot cast bag with schema sat: bag({name: chararray,value: bytearray}) 
to int
{code}

*Seed data in CQL*
{code}
CREATE KEYSPACE cqldb with 
  strategy_class = 'org.apache.cassandra.locator.SimpleStrategy' 
  and strategy_options:replication_factor=3;


use cqldb;

CREATE COLUMNFAMILY users (
  KEY text PRIMARY KEY, 
  fname text, lname text, gender varchar, 
  act int, sat int, highSchool text, state varchar);

insert into users (KEY, fname, lname, gender, act, sat, highSchool, state)
values (gjames, Geronomo, James, f, 24, 650, 'Tuscon High', 'AZ');

insert into users (KEY, fname, lname, gender, act, sat, highSchool, state)
values (aoakley, Anne, Oakley, m , 22, 500, 'Phoenix High', 'AX');

insert into users (KEY, fname, lname, gender, act, sat, highSchool, state)
values (jbrown, Jerry, Brown, m , 20, 700, 'Cal High', 'CA');

insert into users (KEY, fname, lname, gender, act, sat, highSchool, state)
values (philton, Paris, Hilton, m , 37, 220, 'Beverly High', 'CA');

select * from users;
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4309) CQL3: cqlsh exception running describe schema

2012-06-05 Thread Cathy Daw (JIRA)
Cathy Daw created CASSANDRA-4309:


 Summary: CQL3: cqlsh exception running describe schema
 Key: CASSANDRA-4309
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4309
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Cathy Daw
Assignee: paul cannon
 Fix For: 1.1.2


{code}
cqlsh describe schema;

CREATE KEYSPACE system WITH strategy_class = 'LocalStrategy';

USE system;

Traceback (most recent call last):
  File ./cqlsh, line 811, in onecmd
self.handle_statement(st, statementtext)
  File ./cqlsh, line 839, in handle_statement
return custom_handler(parsed)
  File ./cqlsh, line 1329, in do_describe
self.describe_schema()
  File ./cqlsh, line 1264, in describe_schema
self.print_recreate_keyspace(k, sys.stdout)
  File ./cqlsh, line 1091, in print_recreate_keyspace
self.print_recreate_columnfamily(ksname, cf.name, out)
  File ./cqlsh, line 1114, in print_recreate_columnfamily
layout = self.get_columnfamily_layout(ksname, cfname)
  File ./cqlsh, line 706, in get_columnfamily_layout
layout = self.fetchdict()
  File ./cqlsh, line 605, in fetchdict
return dict(zip([d[0] for d in desc], row))
TypeError: 'NoneType' object is not iterable
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3020) Failures in system test: test_cql.py

2011-09-23 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw resolved CASSANDRA-3020.
--

Resolution: Not A Problem
  Assignee: Cathy Daw  (was: Tyler Hobbs)

Issue with the driver package moving.

 Failures in system test: test_cql.py
 

 Key: CASSANDRA-3020
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3020
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.5
 Environment: 0.8 branch for Aug 11th test run: 
 https://jenkins.qa.datastax.com/job/CassandraSystem/142/
 https://jenkins.qa.datastax.com/job/Cassandra/147/ 
Reporter: Cathy Daw
Assignee: Cathy Daw

 *Test Output*
 {code}
 ==
 ERROR: reading and writing strings w/ newlines
 --
 Traceback (most recent call last):
   File /usr/local/lib/python2.6/dist-packages/nose/case.py, line 187, in 
 runTest
 self.test(*self.arg)
   File /var/lib/jenkins/jobs/Cassandra/workspace/test/system/test_cql.py, 
 line 734, in test_newline_strings
 , {key: \nkey, name: \nname})
   File /var/lib/jenkins/repos/drivers/py/cql/cursor.py, line 150, in execute
 self.description = self.decoder.decode_description(self._query_ks, 
 self._query_cf, self.result[0])
   File /var/lib/jenkins/repos/drivers/py/cql/decoders.py, line 39, in 
 decode_description
 comparator = self.__comparator_for(keyspace, column_family)
   File /var/lib/jenkins/repos/drivers/py/cql/decoders.py, line 35, in 
 __comparator_for
 return cfam.get(comparator, None)
 AttributeError: 'NoneType' object has no attribute 'get'
 --
 Ran 127 tests in 635.426s
 FAILED (errors=1)
 Sending e-mails to: q...@datastax.com
 Finished: FAILURE
 {code}
 *Suspected check-in*
 {code}
 Revision 1156198 by xedin: 
 Fixes issues with parameters being escaped incorrectly in Python CQL
 patch by Tyler Hobbs; reviewed by Pavel Yaskevich for CASSANDRA-2993
   /cassandra/branches/cassandra-0.8/test/system/test_cql.py
   /cassandra/branches/cassandra-0.8/CHANGES.txt
   /cassandra/drivers/py/cql/cursor.py
   
 /cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/cql/Cql.g
   /cassandra/drivers/py/test/test_regex.py
 {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3190) Fix backwards compatibilty for cql memtable properties

2011-09-12 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13103186#comment-13103186
 ] 

Cathy Daw commented on CASSANDRA-3190:
--

The CQL documentation needs to be updated.

 Fix backwards compatibilty for cql memtable properties
 --

 Key: CASSANDRA-3190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3190
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
  Labels: cql
 Fix For: 1.0.0

 Attachments: 3190.txt


 Removed memtable_flush_after_mins in CASSANDRA-2183 instead of making it a 
 no-op.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3192) NPE in RowRepairResolver

2011-09-12 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13103267#comment-13103267
 ] 

Cathy Daw commented on CASSANDRA-3192:
--

I noticed that this error message was in the log file of the second node 
(non-seed) after startup.

*node2*
{code}
ERROR 01:30:05,522 Fatal exception in thread Thread[HintedHandoff:1,5,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:282)
at 
org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
at 
org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:333)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}



 NPE in RowRepairResolver
 

 Key: CASSANDRA-3192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3192
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw

 On a 3 node brisk cluster (running against C* 1.0 branch), I was running the 
 java stress tool and the terasort concurrently in two sessions.  Eventually 
 both jobs failed with TimedOutException.
   
 From this point forward most additional activity will fail with a 
 TimedOutException. 
 * Java Stress Tool - 5 rows / 10 columns - Operation [0] retried 10 times - 
 error inserting key 0 ((TimedOutException))
 * Hive - show tables: FAILED: Error in metadata: 
 com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There 
 was a problem with the Cassandra Hive MetaStore: Could not connect to 
 Cassandra. Reason: Error connecting to node localhost
 However, the Cassandra CLI appears to be happy
 * Cassandra CLI: you can successfully insert and read using consistencylevel 
 as ONE or ALL
 The seed node has the following error repeatedly occurring in the logs.  The 
 other two nodes have no errors.
 {code}
 ERROR [ReadRepairStage:15] 2011-09-13 00:44:25,971 
 AbstractCassandraDaemon.java (line 133) Fatal exception in thread 
 Thread[ReadRepairStage:15,5,main]
 java.lang.RuntimeException: java.lang.NullPointerException
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.cassandra.service.RowRepairResolver.resolve(RowRepairResolver.java:82)
   at 
 org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:54)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   ... 3 more
 {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3115) LongCompactionSpeedTest running longing starting with builds on Aug31

2011-08-31 Thread Cathy Daw (JIRA)
LongCompactionSpeedTest running longing starting with builds on Aug31
-

 Key: CASSANDRA-3115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3115
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.5
 Environment: Cassandra-0.8 branch, nightly builds.
MacOS and Debian
Reporter: Cathy Daw
Priority: Minor


The Long tests started consistently timing out as of this build of cassandra: 
[https://jenkins.qa.datastax.com/job/CassandraLong/131/console]

The regression server shows pretty consistent run times for this test, and then 
consistent timeouts from this point forward.
{code}
[junit] Testsuite: 
org.apache.cassandra.db.compaction.LongCompactionSpeedTest
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 111.379 sec
[junit] 
[junit] - Standard Output ---
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 1637 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: 6144 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 2379 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=50: 15690 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=50 colsper=1: 20953 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=1000 colsper=5: 5672 ms
{code}
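
For anyone rerunning these numbers locally, a sketch of the invocation; the ant 
target name is an assumption based on the long-test suite layout of that era:

{code}
# Assumption: the 0.8-era build exposed the long-running tests via an ant target.
git checkout cassandra-0.8
ant clean jar
ant long-test    # runs test/long, including LongCompactionSpeedTest, printing [junit] timings like those above
{code}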



*Single node local run:  Build 1056 / on Aug 30 / Macbook Pro w/ 8 GB ram (all 
apps shutdown)*
{panel}
+Run 1: Fresh install with no log or lib dir+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 850 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *3004 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 767 ms

+Run 2: Invoke test without restarting the server+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 826 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *3030 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 776 ms

+Run 3: Invoke test without restarting the server+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 830 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *2964 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 635 ms

+Run 4: Invoke test without restarting the server+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 931 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *2987 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 910 ms
{panel}

*Single node local run: Build 1062 / on Aug 31 / Macbook pro w/ 8GB ram (all 
apps shutdown)*
{panel}
+Run 1: Fresh restart with no log or lib dir+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 802 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *17649 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 713 ms

+Run 2: Invoke test without restarting the server+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 832 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *16875 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 868 ms

+Run 3: Invoke test without restarting the server+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 809 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: *16818 ms*
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 807 ms

+Run 4: Invoke test without restarting the server+
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 834 ms
[junit] 

[jira] [Updated] (CASSANDRA-3115) LongCompactionSpeedTest running longer starting with builds on Aug31

2011-08-31 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-3115:
-

Summary: LongCompactionSpeedTest running longer starting with builds on 
Aug31  (was: LongCompactionSpeedTest running longing starting with builds on 
Aug31)

 LongCompactionSpeedTest running longer starting with builds on Aug31
 

 Key: CASSANDRA-3115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3115
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.5
 Environment: Cassandra-0.8 branch, nightly builds.
 MacOS and Debian
Reporter: Cathy Daw
Priority: Minor

 The Long tests started consistently timing out as of this build of cassandra: 
 [https://jenkins.qa.datastax.com/job/CassandraLong/131/console]
 The regression server shows pretty consistent run times for this test, and 
 then consistent timeouts from this point forward.
 {code}
 [junit] Testsuite: 
 org.apache.cassandra.db.compaction.LongCompactionSpeedTest
 [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 111.379 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 1637 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: 6144 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 2379 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=50: 15690 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=50 colsper=1: 20953 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=1000 colsper=5: 5672 ms
 {code}
 *Single node local run:  Build 1056 / on Aug 30 / Macbook Pro w/ 8 GB ram 
 (all apps shutdown)*
 {panel}
 +Run 1: Fresh install with no log or lib dir+
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 850 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: *3004 ms*
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 767 ms
 
 +Run 2: Invoke test without restarting the server+
   [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 826 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: *3030 ms*
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 776 ms
 +Run 3: Invoke test without restarting the server+
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 830 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: *2964 ms*
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 635 ms
 +Run 4: Invoke test without restarting the server+
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 931 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: *2987 ms*
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 910 ms
 {panel}
 *Single node local run: Build 1062 / on Aug 31 / Macbook pro w/ 8GB ram (all 
 apps shutdown)*
 {panel}
 +Run 1: Fresh restart with no log or lib dir+
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 802 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: *17649 ms*
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 713 ms
 +Run 2: Invoke test without restarting the server+
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 832 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=20 colsper=1: *16875 ms*
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=100 rowsper=800 colsper=5: 868 ms
 +Run 3: Invoke test without restarting the server+
 [junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
 sstables=2 rowsper=1 colsper=20: 809 ms
 [junit] 

[jira] [Commented] (CASSANDRA-3052) CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement

2011-08-19 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13087930#comment-13087930
 ] 

Cathy Daw commented on CASSANDRA-3052:
--

* DROP/CREATE KEYSPACE does not work with Statement.executeUpdate()
* DROP/CREATE KEYSPACE does, however, work with Statement.execute()
{code}
[junit] DROP KEYSPACE cqldb
[junit] java.sql.SQLException: Not an update statement.

[junit] CREATE KEYSPACE cqldb with strategy_class =  
'org.apache.cassandra.locator.SimpleStrategy'  and 
strategy_options:replication_factor=1
[junit] java.sql.SQLException: Not an update statement.
{code}

I assumed this would work since these are DDL statements:
{panel}
Executes the given SQL statement, which may be an INSERT, UPDATE, or DELETE 
statement or an SQL statement that returns nothing, such as an SQL DDL 
statement.
{panel}


Do you want a new bug for this, or is this expected behavior?

 CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement
 

 Key: CASSANDRA-3052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3052
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
  Labels: cql

 This test script used to work until I upgraded the jdbc driver to 1.0.4.
 *CQL 1.0.4*: apache-cassandra-cql-1.0.4-SNAPSHOT.jar build at revision 1158979
 *Repro Script*: 
 * drop in test directory, change package declaration and run:  ant test 
 -Dtest.name=resultSetNPE
 * The script gives you a NullPointerException when you uncomment the 
 following lines after a CREATE or INSERT statement.
 {code}
 colCount = res.getMetaData().getColumnCount();
 res.next();
 {code}
 * Please note that there is no need to comment out those lines if a SELECT 
 statement was run prior.
 {code}
 package com.datastax.bugs;
 import java.sql.DriverManager;
 import java.sql.Connection;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import org.junit.Test;
 public class resultSetNPE {
 
 @Test
 public void createKS() throws Exception {   
 Connection initConn = null;
 Connection connection = null;
 ResultSet res;
 Statement stmt;
 int colCount = 0;
 
 Class.forName(org.apache.cassandra.cql.jdbc.CassandraDriver);
 
 // Check create keyspace
 initConn = 
 DriverManager.getConnection(jdbc:cassandra://127.0.0.1:9160/default); 
 stmt = initConn.createStatement();
 try {
   System.out.println(Running DROP KS Statement);  
   res = stmt.executeQuery(DROP KEYSPACE ks1);  
   // res.next();
   
 } catch (SQLException e) {
 if (e.getMessage().startsWith(Keyspace does not exist)) 
 {
 // Do nothing - this just means you tried to drop something 
 that was not there.
 // res = stmt.executeQuery(CREATE KEYSPACE ks1 with 
 strategy_class =  'org.apache.cassandra.locator.SimpleStrategy' and 
 strategy_options:replication_factor=1);  
 } 
 }   
   
 System.out.println(Running CREATE KS Statement);
 res = stmt.executeQuery(CREATE KEYSPACE ks1 with strategy_class =  
 'org.apache.cassandra.locator.SimpleStrategy' and 
 strategy_options:replication_factor=1);  
 // res.next();
 initConn.close();
 }  
  
 @Test
 public void createCF() throws Exception 
 {   
 Class.forName(org.apache.cassandra.cql.jdbc.CassandraDriver);
 int colCount = 0;
 Connection connection = 
 DriverManager.getConnection(jdbc:cassandra://127.0.0.1:9160/ks1); 
 Statement stmt = connection.createStatement();
 System.out.print(Running CREATE CF Statement);
 ResultSet res = stmt.executeQuery(CREATE COLUMNFAMILY users (KEY 
 varchar PRIMARY KEY, password varchar, gender varchar, session_token varchar, 
 state varchar, birth_year bigint));
 
 //colCount = res.getMetaData().getColumnCount();
 System.out.println( -- Column Count:  + colCount); 
 //res.next();
 
 connection.close();   
 }  
 
 @Test
 public void simpleSelect() throws Exception 
 {   
 Class.forName(org.apache.cassandra.cql.jdbc.CassandraDriver);
 int colCount = 0;
 Connection connection = 
 DriverManager.getConnection(jdbc:cassandra://127.0.0.1:9160/ks1); 
 Statement stmt = connection.createStatement();
 
 System.out.print(Running INSERT Statement);
 ResultSet res = stmt.executeQuery(INSERT INTO users (KEY, password) 
 VALUES ('user1', 'ch@nge'));  
 //colCount = res.getMetaData().getColumnCount();
 System.out.println( -- 

[jira] [Commented] (CASSANDRA-3052) CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement

2011-08-18 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13087113#comment-13087113
 ] 

Cathy Daw commented on CASSANDRA-3052:
--

Hi Rick,

I am using a test harness which runs cql from a flat file, and up until this 
point all queries ran fine through executeQuery().

Thanks,
Cathy

 CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement
 

 Key: CASSANDRA-3052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3052
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
  Labels: cql


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3052) CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement

2011-08-18 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13087186#comment-13087186
 ] 

Cathy Daw commented on CASSANDRA-3052:
--

I will make the updates to the test harness as recommended above.
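As a rough sketch of what such a harness update might look like (the flat-file harness itself is not attached to this ticket, so the dispatch rule below is an assumption for illustration): only SELECTs go through executeQuery(), everything else is sent with execute() and never touches a ResultSet.

{code}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

final class CqlHarnessDispatchSketch {
    // Hypothetical dispatch for a flat-file CQL harness: SELECTs use
    // executeQuery(); DDL and INSERT/UPDATE use execute() and skip
    // ResultSet handling entirely.
    static void runLine(Connection conn, String cql) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            if (cql.trim().toUpperCase().startsWith("SELECT")) {
                try (ResultSet rs = stmt.executeQuery(cql)) {
                    while (rs.next()) {
                        // verify the expected rows here
                    }
                }
            } else {
                stmt.execute(cql);   // DDL / INSERT: no rows expected back
            }
        }
    }
}
{code}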

 CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement
 

 Key: CASSANDRA-3052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3052
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
  Labels: cql


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3052) CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement

2011-08-17 Thread Cathy Daw (JIRA)
CQL: ResultSet.next() gives NPE when run after an INSERT or CREATE statement


 Key: CASSANDRA-3052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3052
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw


This test script used to work until I upgraded the jdbc driver to 1.0.4.

*CQL 1.0.4*: apache-cassandra-cql-1.0.4-SNAPSHOT.jar built at revision 1158979

*Repro Script*: 
* Drop the file into the test directory, change the package declaration, and run: ant test -Dtest.name=resultSetNPE
* The script gives you a NullPointerException when you uncomment the following lines after a CREATE or INSERT statement.
{code}
colCount = res.getMetaData().getColumnCount();

res.next();
{code}
* Note that those lines do not need to be commented out if a SELECT statement was run immediately prior; the failure only occurs after a CREATE or INSERT.


{code}
package com.datastax.bugs;

import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.Test;

public class resultSetNPE {

    @Test
    public void createKS() throws Exception {
        Connection initConn = null;
        Connection connection = null;

        ResultSet res;
        Statement stmt;
        int colCount = 0;

        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");

        // Check create keyspace
        initConn = DriverManager.getConnection("jdbc:cassandra://127.0.0.1:9160/default");
        stmt = initConn.createStatement();

        try {
            System.out.println("Running DROP KS Statement");
            res = stmt.executeQuery("DROP KEYSPACE ks1");
            // res.next();

        } catch (SQLException e) {
            if (e.getMessage().startsWith("Keyspace does not exist")) {
                // Do nothing - this just means you tried to drop something that was not there.
                // res = stmt.executeQuery("CREATE KEYSPACE ks1 with strategy_class = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options:replication_factor=1");
            }
        }

        System.out.println("Running CREATE KS Statement");
        res = stmt.executeQuery("CREATE KEYSPACE ks1 with strategy_class = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options:replication_factor=1");
        // res.next();

        initConn.close();
    }

    @Test
    public void createCF() throws Exception {
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        int colCount = 0;

        Connection connection = DriverManager.getConnection("jdbc:cassandra://127.0.0.1:9160/ks1");
        Statement stmt = connection.createStatement();

        System.out.print("Running CREATE CF Statement");
        ResultSet res = stmt.executeQuery("CREATE COLUMNFAMILY users (KEY varchar PRIMARY KEY, password varchar, gender varchar, session_token varchar, state varchar, birth_year bigint)");

        //colCount = res.getMetaData().getColumnCount();
        System.out.println(" -- Column Count: " + colCount);
        //res.next();

        connection.close();
    }

    @Test
    public void simpleSelect() throws Exception {
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        int colCount = 0;

        Connection connection = DriverManager.getConnection("jdbc:cassandra://127.0.0.1:9160/ks1");
        Statement stmt = connection.createStatement();

        System.out.print("Running INSERT Statement");
        ResultSet res = stmt.executeQuery("INSERT INTO users (KEY, password) VALUES ('user1', 'ch@nge')");
        //colCount = res.getMetaData().getColumnCount();
        System.out.println(" -- Column Count: " + colCount);
        //res.next();

        System.out.print("Running SELECT Statement");
        res = stmt.executeQuery("SELECT KEY, gender, state FROM users");
        colCount = res.getMetaData().getColumnCount();
        System.out.println(" -- Column Count: " + colCount);
        res.getRow();
        res.next();

        connection.close();
    }
}
{code}


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3022) Failures in cassandra long test: LongCompactionSpeedTest

2011-08-12 Thread Cathy Daw (JIRA)
Failures in cassandra long test: LongCompactionSpeedTest


 Key: CASSANDRA-3022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3022
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.5
Reporter: Cathy Daw
Assignee: Jonathan Ellis


*The failing test case*
{code}
[junit] Testsuite: 
org.apache.cassandra.db.compaction.LongCompactionSpeedTest
{code}


*The following error is repeated in the console output*
{code}
[junit] ERROR 04:02:20,654 Error in ThreadPoolExecutor
[junit] java.util.MissingFormatArgumentException: Format specifier 's'
[junit] at java.util.Formatter.format(Formatter.java:2432)
[junit] at java.util.Formatter.format(Formatter.java:2367)
[junit] at java.lang.String.format(String.java:2769)
[junit] at 
org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:136)
[junit] at 
org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:123)
[junit] at 
org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:43)
[junit] at 
org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:74)
[junit] at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
[junit] at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
[junit] at 
org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
[junit] at 
org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
[junit] at 
org.apache.cassandra.db.compaction.CompactionManager.doCompactionWithoutSizeEstimation(CompactionManager.java:559)
[junit] at 
org.apache.cassandra.db.compaction.CompactionManager.doCompaction(CompactionManager.java:506)
[junit] at 
org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:141)
[junit] at 
org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:107)
[junit] at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
[junit] at java.util.concurrent.FutureTask.run(FutureTask.java:138)
[junit] at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
[junit] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
[junit] at java.lang.Thread.run(Thread.java:662)
{code}
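For readers unfamiliar with this exception, here is a minimal illustration of the failure class (a format string with more specifiers than supplied arguments). This is a hypothetical sketch, not the actual CompactionController code:

{code}
// A format string with more specifiers than supplied arguments throws
// java.util.MissingFormatArgumentException ("Format specifier 's'").
public class FormatMismatchSketch {
    public static void main(String[] args) {
        String fmt = "Compacting large row %s/%s (%d bytes)";
        // Only one argument for three specifiers: the second %s has nothing
        // to bind to, so this call throws at runtime.
        System.out.println(String.format(fmt, "Keyspace1"));
    }
}
{code}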

*Cassandra Revision List at time of failure*
{code}
Summary
* log ks and cf of large rows being compacted patch by Ryan King; reviewed by 
jbellis for CASSANDRA-3019
* revert r1156772
* cache invalidate removes saved cache files patch by Ed Capriolo; reviewed by 
jbellis for CASSANDRA-2325
* make sure truncate clears out the commitlog patch by jbellis; reviewed by 
slebresne for CASSANDRA-2950
* include column name in validation failure exceptions patch by jbellis; 
reviewed by David Allsopp for CASSANDRA-2849
* fix NPE when encryption_options is unspecified patch by jbellis; reviewed by 
brandonwilliams for CASSANDRA-3007
* update CHANGES
* update CHANGES

Revision 1156830 by jbellis: 
log ks and cf of large rows being compacted
patch by Ryan King; reviewed by jbellis for CASSANDRA-3019

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/db/compaction/CompactionController.java

Revision 1156791 by jbellis: 
revert r1156772
/cassandra/branches/cassandra-0.8/CHANGES.txt

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/db/ColumnFamilyStore.java

Revision 1156772 by jbellis: 
cache invalidate removes saved cache files
patch by Ed Capriolo; reviewed by jbellis for CASSANDRA-2325
/cassandra/branches/cassandra-0.8/CHANGES.txt

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/db/ColumnFamilyStore.java

Revision 1156763 by jbellis: 
make sure truncate clears out the commitlog
patch by jbellis; reviewed by slebresne for CASSANDRA-2950
/cassandra/branches/cassandra-0.8/CHANGES.txt

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/db/SystemTable.java

/cassandra/branches/cassandra-0.8/test/unit/org/apache/cassandra/db/RecoveryManagerTruncateTest.java

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/db/commitlog/CommitLog.java

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/db/ColumnFamilyStore.java

Revision 1156753 by jbellis: 
include column name in validation failure exceptions
patch by jbellis; reviewed by David Allsopp for CASSANDRA-2849
/cassandra/branches/cassandra-0.8/CHANGES.txt

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/thrift/ThriftValidation.java
   

[jira] [Commented] (CASSANDRA-3022) Failures in cassandra long test: LongCompactionSpeedTest

2011-08-12 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13084464#comment-13084464
 ] 

Cathy Daw commented on CASSANDRA-3022:
--

The patch fixed the problem.
{code}

long-test:
 [echo] running long tests
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/Users/cathy/dev/cassandra-0.8/build/lib/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Testsuite: org.apache.cassandra.db.LongTableTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 15.548 sec
[junit] 
[junit] Testsuite: org.apache.cassandra.db.MeteredFlusherTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 25.154 sec
[junit] 
[junit] Testsuite: 
org.apache.cassandra.db.compaction.LongCompactionSpeedTest
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 82.062 sec
[junit] 
[junit] - Standard Output ---
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=20: 988 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=20 colsper=1: 2751 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=800 colsper=5: 1281 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=1 colsper=50: 10034 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=2 rowsper=50 colsper=1: 13489 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionSpeedTest: 
sstables=100 rowsper=1000 colsper=5: 9982 ms
[junit] -  ---
[junit] Testsuite: org.apache.cassandra.utils.LongBloomFilterTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 65.727 sec
[junit] 
[junit] Testsuite: org.apache.cassandra.utils.LongLegacyBloomFilterTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 42.892 sec
[junit] 

BUILD SUCCESSFUL
{code}

 Failures in cassandra long test: LongCompactionSpeedTest
 

 Key: CASSANDRA-3022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3022
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.5
Reporter: Cathy Daw
Assignee: Jonathan Ellis
 Fix For: 0.8.5

 Attachments: 3022.txt



[jira] [Created] (CASSANDRA-3020) Failures in system test: test_cql.py

2011-08-11 Thread Cathy Daw (JIRA)
Failures in system test: test_cql.py


 Key: CASSANDRA-3020
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3020
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.5
 Environment: 0.8 branch for Aug 11th test run: 
https://jenkins.qa.datastax.com/job/CassandraSystem/142/
https://jenkins.qa.datastax.com/job/Cassandra/147/ 
Reporter: Cathy Daw
Assignee: Tyler Hobbs


*Test Output*
{code}
==
ERROR: reading and writing strings w/ newlines
--
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/var/lib/jenkins/jobs/Cassandra/workspace/test/system/test_cql.py", line 734, in test_newline_strings
    , {key: \nkey, name: \nname})
  File "/var/lib/jenkins/repos/drivers/py/cql/cursor.py", line 150, in execute
    self.description = self.decoder.decode_description(self._query_ks, self._query_cf, self.result[0])
  File "/var/lib/jenkins/repos/drivers/py/cql/decoders.py", line 39, in decode_description
    comparator = self.__comparator_for(keyspace, column_family)
  File "/var/lib/jenkins/repos/drivers/py/cql/decoders.py", line 35, in __comparator_for
    return cfam.get(comparator, None)
AttributeError: 'NoneType' object has no attribute 'get'

--
Ran 127 tests in 635.426s

FAILED (errors=1)
Sending e-mails to: q...@datastax.com
Finished: FAILURE
{code}
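The AttributeError amounts to dereferencing a schema lookup that came back empty. A minimal sketch of the same defect class follows, written in Java for consistency with the other examples in this digest; it is not the Python driver's actual code:

{code}
import java.util.HashMap;
import java.util.Map;

final class SchemaLookupGuardSketch {
    // Hypothetical schema cache: column family name -> options map.
    static final Map<String, Map<String, String>> SCHEMA = new HashMap<>();

    // Calling .get() on a lookup result that can be null fails in the same
    // way as the "'NoneType' object has no attribute 'get'" error above.
    static String comparatorFor(String columnFamily) {
        Map<String, String> cfam = SCHEMA.get(columnFamily);   // may be null
        if (cfam == null) {
            return null;            // or refresh the schema metadata and retry
        }
        return cfam.get("comparator");
    }
}
{code}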

*Suspected check-in*
{code}
Revision 1156198 by xedin: 
Fixes issues with parameters being escaped incorrectly in Python CQL
patch by Tyler Hobbs; reviewed by Pavel Yaskevich for CASSANDRA-2993
/cassandra/branches/cassandra-0.8/test/system/test_cql.py
/cassandra/branches/cassandra-0.8/CHANGES.txt
/cassandra/drivers/py/cql/cursor.py

/cassandra/branches/cassandra-0.8/src/java/org/apache/cassandra/cql/Cql.g
/cassandra/drivers/py/test/test_regex.py
{code}


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2959) long-test fails to build

2011-07-27 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13071899#comment-13071899
 ] 

Cathy Daw commented on CASSANDRA-2959:
--

The check-ins for the build where this error started are:
{code}
Revision: 1150847

Changes

Gossip handles dead states, token removal actually works, gossip states
are held for aVeryLongTime.
Patch by brandonwilliams and Paul Cannon, reviewed by Paul Cannon for
CASSANDRA-2496. (detail)

add ability to drop local reads/writes that are going to timeout
patch by jbellis; reviewed by brandonwilliams for CASSANDRA-2943 (detail)
{code}

 long-test fails to build
 

 Key: CASSANDRA-2959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2959
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Allen
Priority: Minor

 build-test:
 [javac] /var/lib/jenkins/jobs/Cassandra/workspace/build.xml:910: warning: 
 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set 
 to false for repeatable builds
 [javac] Compiling 125 source files to 
 /var/lib/jenkins/jobs/Cassandra/workspace/build/test/classes
 [javac] 
 /var/lib/jenkins/jobs/Cassandra/workspace/test/unit/org/apache/cassandra/service/RemoveTest.java:172:
  removingNonlocal(org.apache.cassandra.dht.Token) in 
 org.apache.cassandra.gms.VersionedValue.VersionedValueFactory cannot be 
 applied to (org.apache.cassandra.dht.Token,org.apache.cassandra.dht.Token)
 [javac] 
 valueFactory.removingNonlocal(endpointTokens.get(1), removaltoken));
 [javac] ^
 [javac] 
 /var/lib/jenkins/jobs/Cassandra/workspace/test/unit/org/apache/cassandra/service/RemoveTest.java:189:
  removedNonlocal(org.apache.cassandra.dht.Token) in 
 org.apache.cassandra.gms.VersionedValue.VersionedValueFactory cannot be 
 applied to (org.apache.cassandra.dht.Token,org.apache.cassandra.dht.Token)
 [javac] 
 valueFactory.removedNonlocal(endpointTokens.get(1), removaltoken));
 [javac] ^
 [javac] Note: Some input files use or override a deprecated API.
 [javac] Note: Recompile with -Xlint:deprecation for details.
 [javac] Note: Some input files use unchecked or unsafe operations.
 [javac] Note: Recompile with -Xlint:unchecked for details.
 [javac] 2 errors
 BUILD FAILED
 /var/lib/jenkins/jobs/Cassandra/workspace/build.xml:910: Compile failed; see 
 the compiler error output for details.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2950) Data from truncated Counter CF reappears after server restart

2011-07-26 Thread Cathy Daw (JIRA)
Data from truncated Counter CF reappears after server restart
-

 Key: CASSANDRA-2950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2950
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw


* Configure 3 node cluster
* Ensure the java stress tool creates Keyspace1 with RF=3

{code}
// Run Stress Tool to generate 10 keys, 1 column
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=1000 
--num-different-keys=10 --columns=1 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1

// Verify 10 keys in CLI
use Keyspace1; 
list Counter1; 

//TRUNCATE CF in CLI
use Keyspace1;
truncate counter1;
list counter1;

// Run stress tool and verify creation of 1 key with 1 column valued @ 1000
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=1000 
--num-different-keys=1 --columns=1 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1


// Run stress tool and verify update of existing key -- Final result is 2 
columns valued at 1500, 500.
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=500 
--num-different-keys=1 --columns=2 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1

// Run stress tool and verify update of existing key -- Final result is 3 
columns valued at 2100, 1100, 600.
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=600 
--num-different-keys=1 --columns=3 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1
{code}

*Data while all three nodes are up*
{code}
[default@Keyspace1] list Counter1;
Using default limit of 100
---
RowKey: 30
= (counter=4330, value=2100)
= (counter=4331, value=1100)
= (counter=4332, value=600)
{code}

* Shutdown nodes 1,2,3
* Startup nodes 1,2,3
* Verify in CLI: 11 keys.  I am expecting only 1.

*Data after bouncing nodes*
{code}
[default@Keyspace1] list Counter1;
Using default limit of 100
---
RowKey: 3036
= (counter=4330, value=500597)
---
RowKey: 3038
= (counter=4330, value=500591)
---
RowKey: 3039
= (counter=4330, value=500609)
---
RowKey: 3033
= (counter=4330, value=500607)
---
RowKey: 3037
= (counter=4330, value=500601)
---
RowKey: 30
= (counter=4330, value=2708611)
= (counter=4331, value=606482)
= (counter=4332, value=180798)
---
RowKey: 3030
= (counter=4330, value=500616)
---
RowKey: 3032
= (counter=4330, value=500596)
---
RowKey: 3031
= (counter=4330, value=500613)
---
RowKey: 3035
= (counter=4330, value=500624)
---
RowKey: 3034
= (counter=4330, value=500618)

11 Rows Returned.
{code}





--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2950) Data from truncated Counter CF reappears after server restart

2011-07-26 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13071469#comment-13071469
 ] 

Cathy Daw commented on CASSANDRA-2950:
--

Confirmed this only occurs with Counters.  There is a known limitation on 
deleting counters: deleted values may reappear if the deletes haven't been 
applied to all nodes and compacted away.  Not sure if this falls under the 
same limitation, since in a traditional RDBMS the semantics of truncate are 
different from delete.

 Data from truncated Counter CF reappears after server restart
 -

 Key: CASSANDRA-2950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2950
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2950) Data from truncated Counter CF reappears after server restart

2011-07-26 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-2950:
-

Comment: was deleted

(was: Confirmed this only occurs with Counters.  There is a limitation on 
deleting counters, that they may appear if they haven't been applied to all 
nodes and compacted away.  Not sure if this falls under the same limitation, 
since in a traditional RDBMS, the semantics for truncate are different from 
delete.)

 Data from truncated Counter CF reappears after server restart
 -

 Key: CASSANDRA-2950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2950
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Assignee: Sylvain Lebresne


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2950) Data from truncated Counter CF reappears after server restart

2011-07-26 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13071475#comment-13071475
 ] 

Cathy Daw commented on CASSANDRA-2950:
--

This is a general issue with all CFs, not just Counters; updating the bug.

 Data from truncated Counter CF reappears after server restart
 -

 Key: CASSANDRA-2950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2950
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Assignee: Sylvain Lebresne


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2950) Data from truncated CF reappears after server restart

2011-07-26 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-2950:
-

Description: 
* Configure 3 node cluster
* Ensure the java stress tool creates Keyspace1 with RF=3

{code}
// Run Stress Tool to generate 10 keys, 1 column
stress --operation=INSERT -t 2 --num-keys=50 --columns=20 
--consistency-level=QUORUM --average-size-values --replication-factor=3 
--create-index=KEYS --nodes=cathy1,cathy2

// Verify 50 keys in CLI
use Keyspace1; 
list Standard1; 

// TRUNCATE CF in CLI
use Keyspace1;
truncate counter1;
list counter1;

// Run stress tool and verify creation of 1 key with 10 columns
stress --operation=INSERT -t 2 --num-keys=1 --columns=10 
--consistency-level=QUORUM --average-size-values --replication-factor=3 
--create-index=KEYS --nodes=cathy1,cathy2

// Verify 1 key in CLI
use Keyspace1; 
list Standard1; 

// Restart all three nodes

// You will see 51 keys in CLI
use Keyspace1; 
list Standard1; 
{code}




  was:
* Configure 3 node cluster
* Ensure the java stress tool creates Keyspace1 with RF=3

{code}
// Run Stress Tool to generate 10 keys, 1 column
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=1000 
--num-different-keys=10 --columns=1 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1

// Verify 10 keys in CLI
use Keyspace1; 
list Counter1; 

//TRUNCATE CF in CLI
use Keyspace1;
truncate counter1;
list counter1;

// Run stress tool and verify creation of 1 key with 1 column valued @ 1000
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=1000 
--num-different-keys=1 --columns=1 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1


// Run stress tool and verify update of existing key -- Final result is 2 
columns valued at 1500, 500.
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=500 
--num-different-keys=1 --columns=2 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1

// Run stress tool and verify update of existing key -- Final result is 3 
columns valued at 2100, 1100, 600.
stress --operation=COUNTER_ADD --family-type=Standard --num-keys=600 
--num-different-keys=1 --columns=3 --consistency-level=QUORUM 
--average-size-values --replication-factor=3 --nodes=node1,node1
{code}

*Data while all three nodes are up*
{code}
[default@Keyspace1] list Counter1;
Using default limit of 100
---
RowKey: 30
= (counter=4330, value=2100)
= (counter=4331, value=1100)
= (counter=4332, value=600)
{code}

* Shutdown nodes 1,2,3
* Startup nodes 1,2,3
* Verify in CLI: 11 keys.  I am expecting only 1.

*Data after bouncing nodes*
{code}
[default@Keyspace1] list Counter1;
Using default limit of 100
---
RowKey: 3036
= (counter=4330, value=500597)
---
RowKey: 3038
= (counter=4330, value=500591)
---
RowKey: 3039
= (counter=4330, value=500609)
---
RowKey: 3033
= (counter=4330, value=500607)
---
RowKey: 3037
= (counter=4330, value=500601)
---
RowKey: 30
= (counter=4330, value=2708611)
= (counter=4331, value=606482)
= (counter=4332, value=180798)
---
RowKey: 3030
= (counter=4330, value=500616)
---
RowKey: 3032
= (counter=4330, value=500596)
---
RowKey: 3031
= (counter=4330, value=500613)
---
RowKey: 3035
= (counter=4330, value=500624)
---
RowKey: 3034
= (counter=4330, value=500618)

11 Rows Returned.
{code}





Summary: Data from truncated CF reappears after server restart  (was: 
Data from truncated Counter CF reappears after server restart)

 Data from truncated CF reappears after server restart
 -

 Key: CASSANDRA-2950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2950
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Assignee: Sylvain Lebresne

 * Configure 3 node cluster
 * Ensure the java stress tool creates Keyspace1 with RF=3
 {code}
 // Run Stress Tool to generate 10 keys, 1 column
 stress --operation=INSERT -t 2 --num-keys=50 --columns=20 
 --consistency-level=QUORUM --average-size-values --replication-factor=3 
 --create-index=KEYS --nodes=cathy1,cathy2
 // Verify 50 keys in CLI
 use Keyspace1; 
 list Standard1; 
 // TRUNCATE CF in CLI
 use Keyspace1;
 truncate counter1;
 list counter1;
 // Run stress tool and verify creation of 1 key with 10 columns
 stress --operation=INSERT -t 2 --num-keys=1 --columns=10 
 --consistency-level=QUORUM --average-size-values --replication-factor=3 
 --create-index=KEYS --nodes=cathy1,cathy2
 // Verify 1 key in CLI
 use Keyspace1; 
 list Standard1; 
 // Restart all three nodes
 // You will see 51 keys in CLI
 use Keyspace1; 
 list 

[jira] [Commented] (CASSANDRA-2950) Data from truncated CF reappears after server restart

2011-07-26 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13071479#comment-13071479
 ] 

Cathy Daw commented on CASSANDRA-2950:
--

The other permutation of this bug looked like, assuming write with CL.Q:
* Insert 50 (3 nodes up)
* truncate CF (3 nodes up)
* Insert 1 (3 nodes up)
* Bring node3 down
* Delete 1  (2 nodes up)
* Bring up node3 and run repair
* Take down node1 and node2.
* Query node3 with CL.ONE: list Standard1;  --- 30 rows returned

Not sure, but this looked suspicious in my logs:
{code}
 INFO 01:19:45,616 Streaming to /50.57.114.45
 INFO 01:19:45,689 Finished streaming session 69860958341 from 
/50.57.107.176
 INFO 01:19:45,690 Finished streaming session 698609609994154 from /50.57.114.45
 INFO 01:19:46,501 Finished streaming repair with /50.57.114.45 for 
(0,56713727820156410577229101238628035242]: 0 oustanding to complete session
 INFO 01:19:46,531 Compacted to 
/var/lib/cassandra/data/Keyspace1/Standard1-tmp-g-106-Data.db.  16,646,523 to 
16,646,352 (~99% of original) bytes for 30 keys.  Time: 1,509ms.
 INFO 01:19:46,930 Finished streaming repair with /50.57.107.176 for 
(113427455640312821154458202477256070484,0]: 1 oustanding to complete session
 INFO 01:19:47,619 Finished streaming repair with /50.57.114.45 for 
(113427455640312821154458202477256070484,0]: 0 oustanding to complete session
 INFO 01:19:48,232 Finished streaming repair with /50.57.107.176 for 
(56713727820156410577229101238628035242,113427455640312821154458202477256070484]:
 1 oustanding to complete session
 INFO 01:19:48,856 Finished streaming repair with /50.57.114.45 for 
(56713727820156410577229101238628035242,113427455640312821154458202477256070484]:
 0 oustanding to complete session
{code}

 Data from truncated CF reappears after server restart
 -

 Key: CASSANDRA-2950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2950
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Assignee: Sylvain Lebresne


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2938) NotFoundException doing a quick succession of insert/get's on the same CF or rowkey

2011-07-22 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-2938:
-

Attachment: t910.py

 NotFoundException doing a quick succession of insert/get's on the same CF or 
 rowkey
 ---

 Key: CASSANDRA-2938
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2938
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Cathy Daw
Priority: Minor
 Attachments: t910.py



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2938) NotFoundException doing a quick succession of insert/get's on the same CF or rowkey

2011-07-22 Thread Cathy Daw (JIRA)
NotFoundException doing a quick succession of insert/get's on the same CF or 
rowkey
---

 Key: CASSANDRA-2938
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2938
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Cathy Daw
Priority: Minor
 Attachments: t910.py

*Issue*
* A customer complained about pycassa.cassandra.c08.ttypes.NotFoundException: 
NotFoundException()
* This issue is related to a quick succession of insert/get's.  See: 
[http://support.datastax.com/tickets/910]
* Customer's AWS instance on EBS:  1 row with 10K columns, with sleep code:  
fails 1/10 inserts

* Rackspace:
** We could not reproduce this with the sleep code left in.  This was tested 
with the SimpleStrategy and NetworkTopologyStrategy.
** 1 row with 10K columns, without sleep code:  fails 1/500 inserts
** 10K rows with 1 column, without sleep code:  the script passes 2 out of 5 attempts.  When it fails, it is at about the 4000-5000th insert.


*Stack*
{code}
Traceback (most recent call last):
  File "t910.py", line 56, in <module>
    test()
  File "t910.py", line 43, in test
    db.get('testraw', columns=[key, ])
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/columnfamily.py", line 391, in get
    read_consistency_level or self.read_consistency_level)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/pool.py", line 380, in new_f
    result = getattr(super(ConnectionWrapper, self), f.__name__)(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/cassandra/c08/Cassandra.py", line 422, in get
    return self.recv_get()
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/cassandra/c08/Cassandra.py", line 449, in recv_get
    raise result.nfe
pycassa.cassandra.c08.ttypes.NotFoundException: NotFoundException()
{code}

*Script - also attached*
{code}
#!/usr/bin/python

import time
import pycassa
from pycassa import system_manager
from pycassa.system_manager import *

def test():
m = pycassa.system_manager.SystemManager('cathy1:9160')

pool = pycassa.pool.ConnectionPool('testraw',
server_list=['cathy1:9160', ], timeout=5, pool_size=16,
max_overflow=0, prefill=False, pool_timeout=30, max_retries=8)

kspaces = m.list_keyspaces()

if not 'testraw' in kspaces:
m.create_keyspace('testraw', 3)

cfs = m.get_keyspace_column_families('testraw')

if 'testraw' not in cfs:
m.create_column_family('testraw', 'testraw',

comparator_type=system_manager.BYTES_TYPE,
default_validation_class=system_manager.BYTES_TYPE,
row_cache_size=1024 * 1024, key_cache_size=0)

db = pycassa.ColumnFamily(pool, 'testraw',
read_consistency_level=pycassa.ConsistencyLevel.QUORUM,
write_consistency_level=pycassa.ConsistencyLevel.QUORUM)

try:
for i in range(1):
print 'Inserting %d' % i

# The following code generates 1 row with 10K columns
key = str(i)
db.insert('testraw', {key: ''})
db.get('testraw', columns=[key, ])

# The following code generates 10K rows with 1 columns
#key = 'key' + str(i) 
#db.insert(key, {str(i) : ''}) 
#db.get(key, columns=[str(i), ]) 

# time.sleep(.1) 
finally:
print 'Done'
m.drop_keyspace('testraw')

if __name__ == '__main__':
test()
{code}
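One possible mitigation on the client side is to bound the read-after-write race with a short retry instead of assuming a QUORUM read immediately sees a QUORUM write. A minimal sketch follows; the readRow supplier is a hypothetical stand-in for the client's get() call, not pycassa or the Thrift API, and whether retrying is the right fix is exactly what this ticket is questioning.

{code}
import java.util.Optional;
import java.util.function.Supplier;

final class ReadAfterWriteRetrySketch {
    // Retries a read that may transiently miss a row written just before it.
    // readRow is a hypothetical stand-in for the client's get() call.
    static <T> Optional<T> readWithRetry(Supplier<Optional<T>> readRow,
                                         int attempts, long backoffMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            Optional<T> row = readRow.get();
            if (row.isPresent()) {
                return row;
            }
            Thread.sleep(backoffMillis);   // brief pause before trying again
        }
        return Optional.empty();           // still missing after all attempts
    }
}
{code}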



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2938) NotFoundException doing a quick succession of insert/get's on the same CF or rowkey

2011-07-22 Thread Cathy Daw (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cathy Daw updated CASSANDRA-2938:
-

Description: 
*Issue*
* A customer complained about pycassa.cassandra.c08.ttypes.NotFoundException: 
NotFoundException()
* This issue is related to a quick succession of insert/get's.  See: 
[http://support.datastax.com/tickets/910]
* Customer's AWS instance on EBS:  1 row with 10K columns, with sleep code:  
fails 1/10 inserts

* 3-node test cluster on Rackspace.  
** We could not reproduce this with the sleep code left in.  This was tested 
with the SimpleStrategy and NetworkTopologyStrategy.
** 1 row with 10K columns, without sleep code:  fails 1/500 inserts
** 10K row with 1 column, without sleep code:  Script passes 2 in 5 attempts.  
When it fails, it is at about the 4000-5000th insert.


*Stack*
{code}
Traceback (most recent call last):
  File t910.py, line 56, in module
test()
  File t910.py, line 43, in test
db.get('testraw', columns=[key, ]) 
  File 
/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/columnfamily.py,
 line 391, in get
read_consistency_level or self.read_consistency_level)
  File 
/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/pool.py,
 line 380, in new_f
result = getattr(super(ConnectionWrapper, self), f.__name__)(*args, 
**kwargs)
  File 
/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/cassandra/c08/Cassandra.py,
 line 422, in get
return self.recv_get()
  File 
/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/cassandra/c08/Cassandra.py,
 line 449, in recv_get
raise result.nfe
pycassa.cassandra.c08.ttypes.NotFoundException: NotFoundException()
{code}

*Script - also attached*
{code}
#!/usr/bin/python

import time
import pycassa
from pycassa import system_manager
from pycassa.system_manager import *

def test():
    m = pycassa.system_manager.SystemManager('cathy1:9160')

    pool = pycassa.pool.ConnectionPool('testraw',
        server_list=['cathy1:9160', ], timeout=5, pool_size=16,
        max_overflow=0, prefill=False, pool_timeout=30, max_retries=8)

    kspaces = m.list_keyspaces()

    if 'testraw' not in kspaces:
        m.create_keyspace('testraw', 3)

    cfs = m.get_keyspace_column_families('testraw')

    if 'testraw' not in cfs:
        m.create_column_family('testraw', 'testraw',
            comparator_type=system_manager.BYTES_TYPE,
            default_validation_class=system_manager.BYTES_TYPE,
            row_cache_size=1024 * 1024, key_cache_size=0)

    db = pycassa.ColumnFamily(pool, 'testraw',
        read_consistency_level=pycassa.ConsistencyLevel.QUORUM,
        write_consistency_level=pycassa.ConsistencyLevel.QUORUM)

    try:
        for i in range(10000):
            print 'Inserting %d' % i

            # The following code generates 1 row with 10K columns
            key = str(i)
            db.insert('testraw', {key: ''})
            db.get('testraw', columns=[key, ])

            # The following code generates 10K rows with 1 column each
            #key = 'key' + str(i)
            #db.insert(key, {str(i): ''})
            #db.get(key, columns=[str(i), ])

            # time.sleep(.1)
    finally:
        print 'Done'
        m.drop_keyspace('testraw')

if __name__ == '__main__':
    test()
{code}



  was:
*Issue*
* A customer complained about pycassa.cassandra.c08.ttypes.NotFoundException: 
NotFoundException()
* This issue is triggered by a quick succession of insert/get calls.  See: 
[http://support.datastax.com/tickets/910]
* Customer's AWS instance on EBS:  1 row with 10K columns, with sleep code:  
fails 1/10 inserts

* Rackspace:
** We could not reproduce this with the sleep code left in.  This was tested 
with the SimpleStrategy and NetworkTopologyStrategy.
** 1 row with 10K columns, without sleep code:  fails 1/500 inserts
** 10K rows with 1 column each, without sleep code:  The script passes in 2 of 5 attempts.  
When it fails, it is at about the 4000-5000th insert.


*Stack*
{code}
Traceback (most recent call last):
  File "t910.py", line 56, in <module>
    test()
  File "t910.py", line 43, in test
    db.get('testraw', columns=[key, ])
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/columnfamily.py", line 391, in get
    read_consistency_level or self.read_consistency_level)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/pool.py", line 380, in new_f
    result = getattr(super(ConnectionWrapper, self), f.__name__)(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/pycassa-1.1.0-py2.6.egg/pycassa/cassandra/c08/Cassandra.py", line 422, in get
 

[jira] [Created] (CASSANDRA-2942) If you drop a CF when one node is down the files are orphaned on the downed node

2011-07-22 Thread Cathy Daw (JIRA)
If you drop a CF when one node is down the files are orphaned on the downed node


 Key: CASSANDRA-2942
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2942
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Priority: Minor



* Bring up a 3-node cluster
* From node1: run the stress tool
{code} stress --num-keys=10 --columns=10 --consistency-level=ALL 
--average-size-values --replication-factor=3 --nodes=node1,node2 {code}
* Shut down node3
* From node1: drop the Standard1 CF in Keyspace1
* Shut down node2 and node3
* Bring up node1 and node2. Check that the Standard1 files are gone.
{code}
ls -al /var/lib/cassandra/data/Keyspace1/
{code}
* Bring up node3. The log file shows the column family drop being applied:
{code}
 INFO 00:51:25,742 Applying migration 9a76f880-b4c5-11e0--8901a7c5c9ce Drop 
column family: Keyspace1.Standard1
{code}
* Restart node3 to clear out dropped tables from the filesystem
{code}
root@cathy3:~/cass-0.8/bin# ls -al /var/lib/cassandra/data/Keyspace1/
total 36
drwxr-xr-x 3 root root 4096 Jul 23 00:51 .
drwxr-xr-x 6 root root 4096 Jul 23 00:48 ..
-rw-r--r-- 1 root root0 Jul 23 00:51 Standard1-g-1-Compacted
-rw-r--r-- 2 root root 5770 Jul 23 00:51 Standard1-g-1-Data.db
-rw-r--r-- 2 root root   32 Jul 23 00:51 Standard1-g-1-Filter.db
-rw-r--r-- 2 root root  120 Jul 23 00:51 Standard1-g-1-Index.db
-rw-r--r-- 2 root root 4276 Jul 23 00:51 Standard1-g-1-Statistics.db
drwxr-xr-x 3 root root 4096 Jul 23 00:51 snapshots
{code}
*Bug:  The files for Standard1 are orphaned on node3*
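
A hedged verification sketch (not part of the original report) for checking each node after the drop; it assumes the default 0.8 data layout shown in the listing above (/var/lib/cassandra/data/Keyspace1) and the Standard1-* component naming:

{code}
#!/usr/bin/python
# List any SSTable components left behind for a dropped column family on this node.
import glob
import os

DATA_DIR = '/var/lib/cassandra/data/Keyspace1'   # layout from the listing above

def orphaned_files(cf_name='Standard1', data_dir=DATA_DIR):
    pattern = os.path.join(data_dir, '%s-*' % cf_name)
    return [f for f in glob.glob(pattern) if os.path.isfile(f)]

if __name__ == '__main__':
    leftovers = orphaned_files()
    if leftovers:
        print 'Orphaned files for Standard1:'
        for f in leftovers:
            print '  ' + f
    else:
        print 'No orphaned Standard1 files found.'
{code}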



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2933) nodetool hangs (doesn't return prompt) if you specify a table that doesn't exist or a KS that has no CF's

2011-07-21 Thread Cathy Daw (JIRA)
nodetool hangs (doesn't return prompt) if you specify a table that doesn't 
exist or a KS that has no CF's
-

 Key: CASSANDRA-2933
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2933
 Project: Cassandra
  Issue Type: Bug
Reporter: Cathy Daw
Priority: Minor


Invalid CF
{code}
ERROR 02:18:18,904 Fatal exception in thread Thread[AntiEntropyStage:3,5,main]
java.lang.IllegalArgumentException: Unknown table/cf pair 
(StressKeyspace.StressStandard)
at org.apache.cassandra.db.Table.getColumnFamilyStore(Table.java:147)
at 
org.apache.cassandra.service.AntiEntropyService$TreeRequestVerbHandler.doVerb(AntiEntropyService.java:601)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}


Empty KS
{code}
 INFO 02:19:21,483 Waiting for repair requests: []
 INFO 02:19:21,484 Waiting for repair requests: []
 INFO 02:19:21,484 Waiting for repair requests: []
{code}
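
Until this is fixed, a driving script can avoid blocking forever by putting a deadline around the nodetool invocation. A minimal, hedged sketch follows; it assumes nodetool is on the PATH of the node it runs on, and the one-hour timeout is an arbitrary placeholder:

{code}
#!/usr/bin/python
# Hedged workaround sketch (not part of the original report): run nodetool repair
# with a deadline so a repair that never returns does not hang the calling script.
import subprocess
import time

def repair_with_deadline(keyspace, cf=None, timeout_secs=3600):
    cmd = ['nodetool', '-h', 'localhost', 'repair', keyspace]
    if cf:
        cmd.append(cf)
    proc = subprocess.Popen(cmd)
    deadline = time.time() + timeout_secs
    while proc.poll() is None:          # still running
        if time.time() > deadline:
            proc.kill()                 # give up on a repair that never returns
            raise RuntimeError('nodetool repair timed out after %d seconds' % timeout_secs)
        time.sleep(5)
    return proc.returncode
{code}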

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2927) Error truncating a table that is being loaded into

2011-07-20 Thread Cathy Daw (JIRA)
Error truncating a table that is being loaded into
--

 Key: CASSANDRA-2927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2927
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
 Environment: * tip of 0.8 trunk as of July 19th
Reporter: Cathy Daw
Priority: Minor


* In one window, run a large stress job - 1 rows / 1000 columns
* In a second window, open the cli and truncate the table being loaded
* See the following exception on all nodes
 
{code}
ERROR 01:14:28,763 Fatal exception in thread Thread[CompactionExecutor:6,1,main]
java.io.IOError: java.io.IOException: Unable to create compaction marker
at 
org.apache.cassandra.io.sstable.SSTableReader.markCompacted(SSTableReader.java:638)
at 
org.apache.cassandra.db.DataTracker.removeOldSSTablesSize(DataTracker.java:280)
at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:253)
at 
org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:214)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:979)
at 
org.apache.cassandra.db.compaction.CompactionManager.doCompactionWithoutSizeEstimation(CompactionManager.java:594)
at 
org.apache.cassandra.db.compaction.CompactionManager.doCompaction(CompactionManager.java:505)
at 
org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:140)
at 
org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:106)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Unable to create compaction marker
at 
org.apache.cassandra.io.sstable.SSTableReader.markCompacted(SSTableReader.java:634)
... 13 more
{code}
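
The same race can also be driven without the cli. The following is a hedged reproduction sketch, not the original steps: the keyspace/CF names and row counts are placeholders for whatever the stress tool created, and it assumes pycassa's ColumnFamily.truncate() issues the same Thrift truncate the cli does:

{code}
#!/usr/bin/python
# Keep inserting from a background thread and truncate the same column family
# while the load is still running.
import threading

import pycassa

def load(cf, rows=10000, cols=10):
    for i in range(rows):
        cf.insert('row%d' % i, dict(('col%d' % c, 'x' * 100) for c in range(cols)))

pool = pycassa.ConnectionPool('Keyspace1', server_list=['localhost:9160'])
cf = pycassa.ColumnFamily(pool, 'Standard1')

writer = threading.Thread(target=load, args=(cf,))
writer.start()
cf.truncate()      # truncate while the writer is still inserting
writer.join()
pool.dispose()
{code}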

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2918) After repair, one row missing from query when two rows expected

2011-07-19 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067882#comment-13067882
 ] 

Cathy Daw commented on CASSANDRA-2918:
--

Interestingly, I ran the same test case using the Hector stress tool, which 
writes at CL=QUORUM.  
*Insert 1000 rows. 691 Rows Returned.*

* Bring up 3 nodes.  Create a KS with RF=3.
{code}
[default@StressKeyspace] describe keyspace StressKeyspace;  
Keyspace: StressKeyspace:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:3]
  Column Families:
ColumnFamily: StressStandard
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 1.0/14400
  Memtable thresholds: 0.2859375/1440/32 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: true
  Built indexes: []
{code}

* Kill one node
* Insert 1000 rows with 10 columns each, in batches of 100, into the standard table.
* Bring up dead node. Run Repair.
* Kill two other nodes.
* Set: consistencylevel as ONE.  
* Run: list StressStandard limit 2000;
* 691 Rows Returned.
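
To make the per-replica discrepancy visible, something like the following can be run against each node in turn. This is a hedged sketch, not part of the original comment: it assumes the StressKeyspace/StressStandard schema above and simply counts the rows readable at CL=ONE through each node acting as coordinator (note that with read_repair_chance at 1.0, the read itself may trigger read repair and change the counts):

{code}
#!/usr/bin/python
# Count the rows visible at CL=ONE when each node is used as the coordinator.
import pycassa

NODES = ['cathy1:9160', 'cathy2:9160', 'cathy3:9160']   # cluster from this ticket

def count_rows(node, keyspace='StressKeyspace', cf_name='StressStandard'):
    pool = pycassa.ConnectionPool(keyspace, server_list=[node])
    cf = pycassa.ColumnFamily(pool, cf_name,
                              read_consistency_level=pycassa.ConsistencyLevel.ONE)
    total = sum(1 for _ in cf.get_range(column_count=1))   # key-by-key range scan
    pool.dispose()
    return total

if __name__ == '__main__':
    for node in NODES:
        print '%s sees %d rows' % (node, count_rows(node))
{code}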

 After repair, one row missing from query when two rows expected
 ---

 Key: CASSANDRA-2918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2918
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.2
 Environment: Cassandra-0.8 branch @ 07/18 around 1pm PST.
Reporter: Cathy Daw
Assignee: Sylvain Lebresne

 *Cluster Config*
 {code}
 cathy1  -  50.57.114.45 - Token: 0
 cathy2  -  50.57.107.176 - Token: 56713727820156410577229101238628035242
 cathy3  -  50.57.114.39 - Token: 113427455640312821154458202477256070484
 {code}
 *+1) Create Seed Data+*
 {code}
 create keyspace testKS with placement_strategy = 'SimpleStrategy' and 
 strategy_options = [{replication_factor : 3}];
 {code}
 *+2) Kill cathy3:  50.57.114.39+*
 {code}
 root@cathy2:~/cass-0.8/bin# ./nodetool -h localhost ring
 Address          DC          Rack   Status State   Load      Owns    Token
                                                                      113427455640312821154458202477256070484
 50.57.114.45     datacenter1 rack1  Up     Normal  59.84 KB  33.33%  0
 50.57.107.176    datacenter1 rack1  Up     Normal  59.85 KB  33.33%  56713727820156410577229101238628035242
 50.57.114.39     datacenter1 rack1  Down   Normal  59.85 KB  33.33%  113427455640312821154458202477256070484
 {code}
 *+3) Run cassandra-cli+*
 {code}
 use testKS;
 create column family metadataCF 
 with key_validation_class = 'AsciiType' 
 and comparator = 'AsciiType' 
 and column_metadata = [
 {column_name: ascii_col, validation_class:AsciiType, index_type: KEYS},
 {column_name: byte_col, validation_class: BytesType, index_type: KEYS},
 {column_name: uuid_col, validation_class: LexicalUUIDType, index_type: KEYS},
 {column_name: int_col, validation_class: IntegerType, index_type: KEYS},
 {column_name: long_col, validation_class: LongType, index_type: KEYS},
 {column_name: utf8_col, validation_class: UTF8Type, index_type: KEYS}];
 set metadataCF['key1']['ascii_col']=ascii('this is data inserted into ascii 
 column');
 set metadataCF['key1']['byte_col']=bytes('10101010');
 set metadataCF['key1']['uuid_col']=timeuuid();
 set metadataCF['key1']['int_col']=integer(1000);
 set metadataCF['key1']['long_col']=long(444);
 set metadataCF['key1']['utf8_col']=utf8('this is data inserted into UTF8 
 column');
 //Please note: I forgot to change the CL before inserting 'key1', so it was 
 inserted with CL=ONE by default.
 consistencylevel as TWO;
 set metadataCF['key2']['ascii_col']=ascii('key2: this is data inserted into 
 ascii column');
 set metadataCF['key2']['byte_col']=bytes('201010102');
 set metadataCF['key2']['uuid_col']=timeuuid();
 set metadataCF['key2']['int_col']=integer(2000);
 set metadataCF['key2']['long_col']=long(2);
 set metadataCF['key2']['utf8_col']=utf8('key2-this is data inserted into UTF8 
 column');
 //Assumed that the following read would be done at CL=TWO and that the second 
 replica would be guaranteed to be fixed for 'key1'.
 list metadataCF;
 {code}
 {code}
 [default@testKS] list metadataCF;
 Using default limit of 100
 ---
 RowKey: key1
 => (column=ascii_col, value=this is data inserted into 

[jira] [Commented] (CASSANDRA-2918) After repair, one row missing from query when two rows expected

2011-07-19 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067896#comment-13067896
 ] 

Cathy Daw commented on CASSANDRA-2918:
--

This looks more to me like a bug in WRITE at CL.QUORUM as the source of the problem.  
But shouldn't repair have detected that node1 and node2 were out of sync?

* 3 node cluster / KS with RF=3 / 2 nodes up

* Insert 1000 rows on node2 with CL=QUORUM
** node1:691 rows
** node2:1000 rows
** node3: down

* Repair node3
** node1:691 rows
** node2:1000 rows
** node3: 691 rows



 After repair, one row missing from query when two rows expected
 ---

 Key: CASSANDRA-2918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2918
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.2
 Environment: Cassandra-0.8 branch @ 07/18 around 1pm PST.
Reporter: Cathy Daw
Assignee: Sylvain Lebresne

 *Cluster Config*
 {code}
 cathy1  -  50.57.114.45 - Token: 0
 cathy2  -  50.57.107.176 - Token: 56713727820156410577229101238628035242
 cathy3  -  50.57.114.39 - Token: 113427455640312821154458202477256070484
 {code}
 *+1) Create Seed Data+*
 {code}
 create keyspace testKS with placement_strategy = 'SimpleStrategy' and 
 strategy_options = [{replication_factor : 3}];
 {code}
 *+2) Kill cathy3:  50.57.114.39+*
 {code}
 root@cathy2:~/cass-0.8/bin# ./nodetool -h localhost ring
 Address          DC          Rack   Status State   Load      Owns    Token
                                                                      113427455640312821154458202477256070484
 50.57.114.45     datacenter1 rack1  Up     Normal  59.84 KB  33.33%  0
 50.57.107.176    datacenter1 rack1  Up     Normal  59.85 KB  33.33%  56713727820156410577229101238628035242
 50.57.114.39     datacenter1 rack1  Down   Normal  59.85 KB  33.33%  113427455640312821154458202477256070484
 {code}
 *+3) Run cassandra-cli+*
 {code}
 use testKS;
 create column family metadataCF 
 with key_validation_class = 'AsciiType' 
 and comparator = 'AsciiType' 
 and column_metadata = [
 {column_name: ascii_col, validation_class:AsciiType, index_type: KEYS},
 {column_name: byte_col, validation_class: BytesType, index_type: KEYS},
 {column_name: uuid_col, validation_class: LexicalUUIDType, index_type: KEYS},
 {column_name: int_col, validation_class: IntegerType, index_type: KEYS},
 {column_name: long_col, validation_class: LongType, index_type: KEYS},
 {column_name: utf8_col, validation_class: UTF8Type, index_type: KEYS}];
 set metadataCF['key1']['ascii_col']=ascii('this is data inserted into ascii 
 column');
 set metadataCF['key1']['byte_col']=bytes('10101010');
 set metadataCF['key1']['uuid_col']=timeuuid();
 set metadataCF['key1']['int_col']=integer(1000);
 set metadataCF['key1']['long_col']=long(444);
 set metadataCF['key1']['utf8_col']=utf8('this is data inserted into UTF8 
 column');
 //Please note: I forgot to change the CL before inserting 'key1', so it was 
 inserted with CL=ONE by default.
 consistencylevel as TWO;
 set metadataCF['key2']['ascii_col']=ascii('key2: this is data inserted into 
 ascii column');
 set metadataCF['key2']['byte_col']=bytes('201010102');
 set metadataCF['key2']['uuid_col']=timeuuid();
 set metadataCF['key2']['int_col']=integer(2000);
 set metadataCF['key2']['long_col']=long(2);
 set metadataCF['key2']['utf8_col']=utf8('key2-this is data inserted into UTF8 
 column');
 //Assumed that the following read would be done at CL=TWO and that the second 
 replica would be guaranteed to be fixed for 'key1'.
 list metadataCF;
 {code}
 {code}
 [default@testKS] list metadataCF;
 Using default limit of 100
 ---
 RowKey: key1
 => (column=ascii_col, value=this is data inserted into ascii column, timestamp=1311035260656000)
 => (column=byte_col, value=10101010, timestamp=1311035260662000)
 => (column=int_col, value=1000, timestamp=1311035260669000)
 => (column=long_col, value=444, timestamp=1311035260674000)
 => (column=utf8_col, value=this is data inserted into UTF8 column, timestamp=1311035260678000)
 => (column=uuid_col, value=e9811e90-b19d-11e0--2069cd105fbf, timestamp=1311035260666000)
 ---
 RowKey: key2
 => (column=ascii_col, value=key2: this is data inserted into ascii column, timestamp=1311035260682000)
 => (column=byte_col, value=0201010102, timestamp=1311035260685000)
 => (column=int_col, value=2000, timestamp=1311035260692000)
 => (column=long_col, value=2, timestamp=1311035260695000)
 => (column=utf8_col, value=key2-this is data inserted into UTF8 column, timestamp=1311035260699000)
 => (column=uuid_col, 

[jira] [Issue Comment Edited] (CASSANDRA-2918) After repair, one row missing from query when two rows expected

2011-07-19 Thread Cathy Daw (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067896#comment-13067896
 ] 

Cathy Daw edited comment on CASSANDRA-2918 at 7/19/11 7:30 PM:
---

This looks more to me like a bug in WRITE at CL.QUORUM as the source of the problem.  
But shouldn't repair have detected that node1 and node2 were out of sync?

* 3 node cluster / KS with RF=3 / 2 nodes up

* Insert 1000 rows on node1 with CL=QUORUM
** node1:691 rows
** node2:1000 rows
** node3: down

* Repair node3
** node1:691 rows
** node2:1000 rows
** node3: 691 rows



  was (Author: cdaw):
This looks more to me like a bug in WRITE at CL.QUORUM as the source of the 
problem.  But shouldn't repair have detected that node1 and node2 were out of 
sync?

* 3 node cluster / KS with RF=3 / 2 nodes up

* Insert 1000 rows on node2 with CL=QUORUM
** node1:691 rows
** node2:1000 rows
** node3: down

* Repair node3
** node1:691 rows
** node2:1000 rows
** node3: 691 rows


  
 After repair, one row missing from query when two rows expected
 ---

 Key: CASSANDRA-2918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2918
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.2
 Environment: Cassandra-0.8 branch @ 07/18 around 1pm PST.
Reporter: Cathy Daw
Assignee: Sylvain Lebresne

 *Cluster Config*
 {code}
 cathy1  -  50.57.114.45 - Token: 0
 cathy2  -  50.57.107.176 - Token: 56713727820156410577229101238628035242
 cathy3  -  50.57.114.39 - Token: 113427455640312821154458202477256070484
 {code}
 *+1) Create Seed Data+*
 {code}
 create keyspace testKS with placement_strategy = 'SimpleStrategy' and 
 strategy_options = [{replication_factor : 3}];
 {code}
 *+2) Kill cathy3:  50.57.114.39+*
 {code}
 root@cathy2:~/cass-0.8/bin# ./nodetool -h localhost ring
 Address          DC          Rack   Status State   Load      Owns    Token
                                                                      113427455640312821154458202477256070484
 50.57.114.45     datacenter1 rack1  Up     Normal  59.84 KB  33.33%  0
 50.57.107.176    datacenter1 rack1  Up     Normal  59.85 KB  33.33%  56713727820156410577229101238628035242
 50.57.114.39     datacenter1 rack1  Down   Normal  59.85 KB  33.33%  113427455640312821154458202477256070484
 {code}
 *+3) Run cassandra-cli+*
 {code}
 use testKS;
 create column family metadataCF 
 with key_validation_class = 'AsciiType' 
 and comparator = 'AsciiType' 
 and column_metadata = [
 {column_name: ascii_col, validation_class:AsciiType, index_type: KEYS},
 {column_name: byte_col, validation_class: BytesType, index_type: KEYS},
 {column_name: uuid_col, validation_class: LexicalUUIDType, index_type: KEYS},
 {column_name: int_col, validation_class: IntegerType, index_type: KEYS},
 {column_name: long_col, validation_class: LongType, index_type: KEYS},
 {column_name: utf8_col, validation_class: UTF8Type, index_type: KEYS}];
 set metadataCF['key1']['ascii_col']=ascii('this is data inserted into ascii 
 column');
 set metadataCF['key1']['byte_col']=bytes('10101010');
 set metadataCF['key1']['uuid_col']=timeuuid();
 set metadataCF['key1']['int_col']=integer(1000);
 set metadataCF['key1']['long_col']=long(444);
 set metadataCF['key1']['utf8_col']=utf8('this is data inserted into UTF8 
 column');
 //Please note: I forgot to change the CL before inserting 'key1', so it was 
 inserted with CL=ONE by default.
 consistencylevel as TWO;
 set metadataCF['key2']['ascii_col']=ascii('key2: this is data inserted into 
 ascii column');
 set metadataCF['key2']['byte_col']=bytes('201010102');
 set metadataCF['key2']['uuid_col']=timeuuid();
 set metadataCF['key2']['int_col']=integer(2000);
 set metadataCF['key2']['long_col']=long(2);
 set metadataCF['key2']['utf8_col']=utf8('key2-this is data inserted into UTF8 
 column');
 //Assumed that the following read would be done at CL=TWO and that the second 
 replica would be guaranteed to be fixed for 'key1'.
 list metadataCF;
 {code}
 {code}
 [default@testKS] list metadataCF;
 Using default limit of 100
 ---
 RowKey: key1
 => (column=ascii_col, value=this is data inserted into ascii column, timestamp=1311035260656000)
 => (column=byte_col, value=10101010, timestamp=1311035260662000)
 => (column=int_col, value=1000, timestamp=1311035260669000)
 => (column=long_col, value=444, timestamp=1311035260674000)
 => (column=utf8_col, value=this is data inserted into UTF8 column, timestamp=1311035260678000)
 => (column=uuid_col, value=e9811e90-b19d-11e0--2069cd105fbf, timestamp=1311035260666000)
 
