[jira] [Updated] (CASSANDRA-10874) running stress with compaction strategy and replication factor fails on read after write

2015-12-16 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10874:

Description: 
Running a read stress after a write stress, with a compaction strategy specified and a replication factor matching the node count, fails with an exception:
{code}
Operation x0 on key(s) [38343433384b34364c30]: Data returned was not validated
{code}
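As an aside (not part of the original report): stress prints the partition key as hex-encoded bytes, and the keys it generates are plain ASCII, so the failing key can be decoded directly:

```python
# cassandra-stress reports partition keys as hex-encoded bytes; the keys it
# generates are ASCII strings, so the failing key decodes directly.
key_hex = "38343433384b34364c30"
key = bytes.fromhex(key_hex).decode("ascii")
print(key)  # 84438K46L0
```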

Example run:
{code}
ccm create stress -v git:cassandra-3.0 -n 3 -s
ccm node1 stress write n=10M -rate threads=300 -schema replication\(factor=3\) 
compaction\(strategy=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy\)
ccm node1 nodetool flush
ccm node1 nodetool compactionstats # check until quiet
ccm node1 stress read n=10M -rate threads=300
{code}
- This fails both with and without vnodes, though it occasionally passes without vnodes.
- Changing the read phase to CL=QUORUM makes it pass.
- Removing the replication factor on the write makes it pass.
- It happens with all compaction strategies.
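The CL=QUORUM observation fits the usual replica-overlap arithmetic: strong consistency requires W + R > RF, and the stress defaults (CL=ONE for both phases) leave no guaranteed overlap between write and read replica sets at RF=3. A minimal sketch of the condition (illustrative only; note that even a QUORUM read after CL=ONE writes is not strictly guaranteed, so its passing suggests mutations were dropped on a minority of replicas under load):

```python
def guarantees_fresh_read(rf: int, write_cl: int, read_cl: int) -> bool:
    """Strong consistency requires the write and read replica sets to
    overlap on at least one node: W + R > RF."""
    return write_cl + read_cl > rf

print(guarantees_fresh_read(3, 1, 1))  # False: ONE/ONE, stale reads possible
print(guarantees_fresh_read(3, 1, 2))  # False: ONE write, QUORUM read, still no guarantee
print(guarantees_fresh_read(3, 2, 2))  # True:  QUORUM/QUORUM always overlaps
```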

With that in mind, I added a repair after the write phase. This leads to one of two outcomes.

1: a repair that reports greater than 100% completion; it usually stalls after a bit, but I have seen it reach >400% progress:
{code}
id                                     compaction type   keyspace    table       completed     total         unit    progress
2d5344c0-9dc8-11e5-9d5f-4fdec8d76c27   Validation        keyspace1   standard1   94722609949   44035292145   bytes   215.11%
{code}
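The anomalous percentage is simply completed/total from the compactionstats columns, which means the validation compaction either counted bytes more than once or underestimated the total. A quick check of the arithmetic (not from the report):

```python
completed = 94_722_609_949  # bytes reported as completed
total = 44_035_292_145      # bytes reported as total
progress = completed / total * 100
print(f"{progress:.2f}%")   # 215.11%, matching the compactionstats output
```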

2: a repair that reports a greatly inflated total value; it crunches for a bit, then locks up:
{code}
id                                     compaction type   keyspace    table       completed   total          unit    progress
8c4cf7f0-a34a-11e5-a321-777be88c58ae   Validation        keyspace1   standard1   0           874811100900   bytes   0.00%

❯ du -sh ~/.ccm/stress/node1/
2.4G  ~/.ccm/stress/node1/
❯ du -sh ~/.ccm/stress
7.1G  ~/.ccm/stress
{code}

This has been reproduced on cassandra-3.0 and cassandra-2.1, both locally and using cstar_perf (links below).
A big twist is that cassandra-2.2 passes the majority of the time: it completed successfully without the repair in 8 out of 10 runs. This can be seen in the cstar_perf links below.

cstar_perf runs:
http://cstar.datastax.com/tests/id/c8fa27a4-a205-11e5-8fbc-0256e416528f
http://cstar.datastax.com/tests/id/a254c572-a2ce-11e5-a8b9-0256e416528f



> running stress with compaction strategy and replication factor fails on read 
> after write
> 

[jira] [Created] (CASSANDRA-10874) running stress with compaction strategy and replication factor fails on read after write

2015-12-15 Thread Andrew Hust (JIRA)
Andrew Hust created CASSANDRA-10874:
---

 Summary: running stress with compaction strategy and replication 
factor fails on read after write
 Key: CASSANDRA-10874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Andrew Hust





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (CASSANDRA-10697) Leak detected while running offline scrub

2015-11-20 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10697:

Component/s: Tools

> Leak detected while running offline scrub
> -
>
> Key: CASSANDRA-10697
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10697
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: C* 2.1.9 on Debian Wheezy
>Reporter: mlowicki
>Priority: Critical
>
> I got couple of those:
> {code}
> ERROR 05:09:15 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3b60e162) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1433208674:/var/lib/cassandra/data/sync/entity2-e24b5040199b11e5a30f75bb514ae072/sync-entity2-ka-405434
>  was not released before the reference was garbage collected
> {code}
> and then:
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:99)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:353)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:444)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:378)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:348)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:327)
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:397)
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52)
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:120)
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:165)
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:192)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.tryAppend(SSTableRewriter.java:158)
> at 
> org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:220)
> at 
> org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:116)
> {code}





[jira] [Commented] (CASSANDRA-10743) Failed upgradesstables (upgrade from 2.2.2 to 3.0.0)

2015-11-20 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15018895#comment-15018895
 ] 

Andrew Hust commented on CASSANDRA-10743:
-

Would it be possible to get the full schema? I can attempt to narrow it down to find the problem table(s). You can get the full schema via {{cqlsh}}, e.g.:
{{echo -e "desc schema\n" | cqlsh > schema.ddl}}

> Failed upgradesstables (upgrade from 2.2.2 to 3.0.0)
> 
>
> Key: CASSANDRA-10743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10743
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS Linux release 7.1.1503, OpenJDK Runtime 
> Environment (build 1.8.0_65-b17), DSC Cassandra 3.0.0 (tar.gz)
>Reporter: Gábor Auth
>
> {code}
> [cassandra@dc01-rack01-cass01 ~]$ 
> /home/cassandra/dsc-cassandra-3.0.0/bin/nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.rows.CellPath$EmptyCellPath.get(CellPath.java:143)
> at 
> org.apache.cassandra.db.marshal.CollectionType$CollectionPathSerializer.serializedSize(CollectionType.java:226)
> at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serializedSize(BufferCell.java:325)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.sizeOfComplexColumn(UnfilteredSerializer.java:297)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:282)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:163)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:144)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:112)
> at 
> org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:121)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:397)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Updated] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2015-11-18 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10715:

Component/s: CQL

> Filtering on NULL returns ReadFailure exception
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> ←[0m
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:288) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1692)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> 

[jira] [Commented] (CASSANDRA-10711) NoSuchElementException when executing empty batch.

2015-11-16 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15007570#comment-15007570
 ] 

Andrew Hust commented on CASSANDRA-10711:
-

- confirmed that 2.2 {{73a730f926d25a7d4f693507937b8565b701259c}} does not throw the error
- confirmed that both 3.0 {{c0480d8bbddf111e4cd7c67ef7c0daeec3ece2dc}} and trunk {{0010fce6d2c9a811eb66de077b69a83dce29a6ff}} throw the same {{NoSuchElementException}} in cqlsh
- added a [dtest|https://github.com/riptano/cassandra-dtest/pull/662] to verify the fix once made

> NoSuchElementException when executing empty batch.
> --
>
> Key: CASSANDRA-10711
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10711
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0, OSS 42.1
>Reporter: Jaroslav Kamenik
> Fix For: 3.0.1, 3.1
>
>
> After upgrade to C* 3.0, it fails when executes empty batch:
> java.util.NoSuchElementException: null
> at java.util.ArrayList$Itr.next(ArrayList.java:854) ~[na:1.8.0_60]
> at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:737)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.executeWithoutConditions(BatchStatement.java:356)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:337)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:323)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:490)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:480)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]





[jira] [Updated] (CASSANDRA-10711) NoSuchElementException when executing empty batch.

2015-11-16 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10711:

Labels: triaged  (was: )

> NoSuchElementException when executing empty batch.
> --
>
> Key: CASSANDRA-10711
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10711
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0, OSS 42.1
>Reporter: Jaroslav Kamenik
>  Labels: triaged
> Fix For: 3.0.1, 3.1
>





[jira] [Resolved] (CASSANDRA-10603) Fix CQL syntax errors in upgrade_through_versions_test

2015-10-29 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust resolved CASSANDRA-10603.
-
Resolution: Fixed

> Fix CQL syntax errors in upgrade_through_versions_test
> --
>
> Key: CASSANDRA-10603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10603
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Andrew Hust
>
> In the {{cassandra_upgrade_2.1_to_3.0_proto_v3}} upgrade tests on CassCI, 
> some of the tests are failing with the following error:
> {code}
> 
> {code}
> The tests that fail this way [(at least as of this 
> run)|http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/]
>  are the following:
> {code}
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_with_internode_ssl_test
> {code}
> There may be other tests in other protocol upgrade jobs that fail this way, 
> but I haven't dug through yet to see.
> Assigning to [~rhatch] since, afaik, you're the most likely person to 
> understand the problem. Feel free to reassign, of course.





[jira] [Commented] (CASSANDRA-10611) Upgrade test on 2.1->3.0 path fails with configuration problems

2015-10-29 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14981090#comment-14981090
 ] 

Andrew Hust commented on CASSANDRA-10611:
-

Fixed with PR: https://github.com/riptano/cassandra-dtest/pull/633

> Upgrade test on 2.1->3.0 path fails with configuration problems
> ---
>
> Key: CASSANDRA-10611
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10611
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Andrew Hust
> Fix For: 3.0.0
>
>
> The following test fails on the upgrade path from 2.1 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/upgrade_through_versions_test/TestUpgrade_from_3_0_latest_tag_to_3_0_HEAD/bootstrap_multidc_test/
> I believe it's basically a configuration error; the cluster likely just needs 
> to be reconfigured in the test:
> {code}
> code=2200 [Invalid query] message="User-defined functions are disabled in 
> cassandra.yaml - set enable_user_defined_functions=true to enable"
> {code}
> Assigning to [~rhatch] for now.
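The reconfiguration the error message asks for is a one-line change; a minimal cassandra.yaml fragment (option name taken verbatim from the message above) would be:

```yaml
# cassandra.yaml -- user-defined functions are off by default in 3.0;
# the test cluster needs this before UDF/UDA statements will succeed.
enable_user_defined_functions: true
```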





[jira] [Resolved] (CASSANDRA-10611) Upgrade test on 2.1->3.0 path fails with configuration problems

2015-10-29 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust resolved CASSANDRA-10611.
-
Resolution: Fixed

> Upgrade test on 2.1->3.0 path fails with configuration problems
> ---
>
> Key: CASSANDRA-10611
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10611
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Andrew Hust
> Fix For: 3.0.0
>
>
> The following test fails on the upgrade path from 2.1 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/upgrade_through_versions_test/TestUpgrade_from_3_0_latest_tag_to_3_0_HEAD/bootstrap_multidc_test/
> I believe it's basically a configuration error; the cluster likely just needs 
> to be reconfigured in the test:
> {code}
> code=2200 [Invalid query] message="User-defined functions are disabled in 
> cassandra.yaml - set enable_user_defined_functions=true to enable"
> {code}
> Assigning to [~rhatch] for now.





[jira] [Assigned] (CASSANDRA-10611) Upgrade test on 2.1->3.0 path fails with configuration problems

2015-10-29 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust reassigned CASSANDRA-10611:
---

Assignee: Andrew Hust  (was: Russ Hatch)

> Upgrade test on 2.1->3.0 path fails with configuration problems
> ---
>
> Key: CASSANDRA-10611
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10611
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Andrew Hust
> Fix For: 3.0.0
>
>
> The following test fails on the upgrade path from 2.1 to 3.0:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/upgrade_through_versions_test/TestUpgrade_from_3_0_latest_tag_to_3_0_HEAD/bootstrap_multidc_test/
> I believe it's basically a configuration error; the cluster likely just needs 
> to be reconfigured in the test:
> {code}
> code=2200 [Invalid query] message="User-defined functions are disabled in 
> cassandra.yaml - set enable_user_defined_functions=true to enable"
> {code}
> Assigning to [~rhatch] for now.





[jira] [Assigned] (CASSANDRA-10603) Fix CQL syntax errors in upgrade_through_versions_test

2015-10-29 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust reassigned CASSANDRA-10603:
---

Assignee: Andrew Hust  (was: Russ Hatch)

> Fix CQL syntax errors in upgrade_through_versions_test
> --
>
> Key: CASSANDRA-10603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10603
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Andrew Hust
>
> In the {{cassandra_upgrade_2.1_to_3.0_proto_v3}} upgrade tests on CassCI, 
> some of the tests are failing with the following error:
> {code}
> 
> {code}
> The tests that fail this way [(at least as of this 
> run)|http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/]
>  are the following:
> {code}
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_with_internode_ssl_test
> {code}
> There may be other tests in other protocol upgrade jobs that fail this way, 
> but I haven't dug through yet to see.
> Assigning to [~rhatch] since, afaik, you're the most likely person to 
> understand the problem. Feel free to reassign, of course.





[jira] [Commented] (CASSANDRA-10603) Fix CQL syntax errors in upgrade_through_versions_test

2015-10-29 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14981081#comment-14981081
 ] 

Andrew Hust commented on CASSANDRA-10603:
-

Fixed with PR: https://github.com/riptano/cassandra-dtest/pull/633

> Fix CQL syntax errors in upgrade_through_versions_test
> --
>
> Key: CASSANDRA-10603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10603
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Andrew Hust
>
> In the {{cassandra_upgrade_2.1_to_3.0_proto_v3}} upgrade tests on CassCI, 
> some of the tests are failing with the following error:
> {code}
> 
> {code}
> The tests that fail this way [(at least as of this 
> run)|http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/]
>  are the following:
> {code}
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_with_internode_ssl_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_2_1_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_multidc_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.bootstrap_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_test
> upgrade_through_versions_test.TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.parallel_upgrade_with_internode_ssl_test
> {code}
> There may be other tests in other protocol upgrade jobs that fail this way, 
> but I haven't dug through yet to see.
> Assigning to [~rhatch] since, afaik, you're the most likely person to 
> understand the problem. Feel free to reassign, of course.





[jira] [Commented] (CASSANDRA-10452) Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test

2015-10-27 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14976765#comment-14976765
 ] 

Andrew Hust commented on CASSANDRA-10452:
-

Confirmed the tests are no longer flapping and pass when run without JVM 
assertions enabled.  Closing.

> Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test
> ---
>
> Key: CASSANDRA-10452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10452
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
> Fix For: 3.0.0
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} has been 
> flapping on CassCI lately:
> http://cassci.datastax.com/view/trunk/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> I have not been able to reproduce on OpenStack. I'm assigning [~yukim] for 
> now, but feel free to reassign.





[jira] [Commented] (CASSANDRA-10522) counter upgrade dtest fails on 3.0 with JVM assertions disabled

2015-10-15 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959736#comment-14959736
 ] 

Andrew Hust commented on CASSANDRA-10522:
-

Confirmed that these tests (and the tests from the duplicate JIRAs) now pass 
and no exception is thrown.

Ran on:
yukim/10522: {{93783039918f8662760195e0f33c4cab20b17c8d}}

> counter upgrade dtest fails on 3.0 with JVM assertions disabled
> ---
>
> Key: CASSANDRA-10522
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10522
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Andrew Hust
>Assignee: Yuki Morishita
> Fix For: 3.0.0 rc2
>
>
> {{counter_tests.TestCounters.upgrade_test}}
> will fail when run on a cluster with JVM assertions disabled.  The test will 
> hang when Cassandra throws the following exception:
> {code}
> java.lang.IllegalStateException: No match found
>   at java.util.regex.Matcher.group(Matcher.java:536) ~[na:1.8.0_60]
>   at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:52) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:399)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:552)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:571)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StartupChecks$7.execute(StartupChecks.java:274) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:103) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:169) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:676) 
> [main/:na]
> {code}
> This test passes with/without JVM assertions on C* 2.2 and passes on 3.0 
> when assertions are enabled.
> Ran against:
> apache/cassandra-2.2: {{7cab3272455bdd16b639c510416ae339a8613414}}
> apache/cassandra-3.0: {{f21c888510b0dbbea1a63459476f2dc54093de63}}
> Ran with cmd:
> {{JVM_EXTRA_OPTS=-da PRINT_DEBUG=true nosetests -xsv 
> counter_tests.TestCounters.upgrade_test}}





[jira] [Commented] (CASSANDRA-10522) counter upgrade dtest fails on 3.0 with JVM assertions disabled

2015-10-14 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957118#comment-14957118
 ] 

Andrew Hust commented on CASSANDRA-10522:
-

Both failures throw the same exception on startup, so they may be a single issue.

> counter upgrade dtest fails on 3.0 with JVM assertions disabled
> ---
>
> Key: CASSANDRA-10522
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10522
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Andrew Hust
> Fix For: 3.0.0 rc2
>
>
> {{counter_tests.TestCounters.upgrade_test}}
> will fail when run on a cluster with JVM assertions disabled.  The test will 
> hang when Cassandra throws the following exception:
> {code}
> java.lang.IllegalStateException: No match found
>   at java.util.regex.Matcher.group(Matcher.java:536) ~[na:1.8.0_60]
>   at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:52) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:399)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:552)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:571)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StartupChecks$7.execute(StartupChecks.java:274) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:103) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:169) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:676) 
> [main/:na]
> {code}
> This test passes with/without JVM assertions on C* 2.2 and passes on 3.0 
> when assertions are enabled.
> Ran against:
> apache/cassandra-2.2: {{7cab3272455bdd16b639c510416ae339a8613414}}
> apache/cassandra-3.0: {{f21c888510b0dbbea1a63459476f2dc54093de63}}
> Ran with cmd:
> {{JVM_EXTRA_OPTS=-da PRINT_DEBUG=true nosetests -xsv 
> counter_tests.TestCounters.upgrade_test}}





[jira] [Created] (CASSANDRA-10521) snapshot dtests failing on 3.0 when JVM assertions disabled

2015-10-14 Thread Andrew Hust (JIRA)
Andrew Hust created CASSANDRA-10521:
---

 Summary: snapshot dtests failing on 3.0 when JVM assertions 
disabled
 Key: CASSANDRA-10521
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10521
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Andrew Hust
 Fix For: 3.0.0 rc2


{{snapshot_test.TestArchiveCommitlog.test_archive_commitlog_with_active_commitlog}}
and
{{snapshot_test.TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog}}
will fail when run on a cluster with JVM assertions disabled.  The tests will 
hang when Cassandra throws the following exception (in both cases):
{code}
java.lang.IllegalStateException: No match found
at java.util.regex.Matcher.group(Matcher.java:536) ~[na:1.8.0_60]
at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:52) 
~[main/:na]
at 
org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:399)
 ~[main/:na]
at 
org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:552)
 ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:571)
 ~[main/:na]
at 
org.apache.cassandra.service.StartupChecks$7.execute(StartupChecks.java:274) 
~[main/:na]
at 
org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:103) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:169) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:676) 
[main/:na]
{code}

These tests both pass with/without JVM assertions on C* 2.2 and pass on 3.0 
when assertions are enabled.

Ran against:
apache/cassandra-2.2: {{7cab3272455bdd16b639c510416ae339a8613414}}
apache/cassandra-3.0: {{f21c888510b0dbbea1a63459476f2dc54093de63}}

Ran with cmd:
{{JVM_EXTRA_OPTS=-da PRINT_DEBUG=true nosetests -xsv 
snapshot_test.py:TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog}}
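The {{No match found}} message is what {{java.util.regex.Matcher.group}} throws when no match has been attempted, which is consistent with the {{matches()}} call living inside an {{assert}}: under {{-da}} the entire assert statement, match included, is skipped. A minimal sketch of that failure mode (the pattern and file names here are hypothetical, not the actual ones in {{LogFile.make}}):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NoMatchDemo {
    // Hypothetical stand-in for a transaction-log filename pattern.
    static final Pattern TXN = Pattern.compile("(\\d+)_txn\\.log");

    // Models `assert m.matches();` followed by `m.group(1)`: when the
    // JVM runs with -da the assert, and the matches() call inside it,
    // is skipped entirely, so group() runs with no match attempted.
    static String txnId(String fileName, boolean assertionsEnabled) {
        Matcher m = TXN.matcher(fileName);
        if (assertionsEnabled && !m.matches())
            throw new AssertionError("unexpected file name: " + fileName);
        return m.group(1);  // IllegalStateException if matches() never ran
    }

    // Exercises the -da path against a name the pattern rejects.
    static String failureMessage() {
        try {
            return "parsed: " + txnId("not-a-txn-log", false);
        } catch (IllegalStateException e) {
            return e.getMessage();  // "No match found" on OpenJDK
        }
    }

    public static void main(String[] args) {
        System.out.println(txnId("42_txn.log", true));  // prints 42
        System.out.println(failureMessage());
    }
}
```

Run with {{-ea}}, the bad name would instead trip an {{AssertionError}} at the assert site, which fits these tests only failing once assertions are disabled.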





[jira] [Created] (CASSANDRA-10522) counter upgrade dtest fails on 3.0 with JVM assertions disabled

2015-10-14 Thread Andrew Hust (JIRA)
Andrew Hust created CASSANDRA-10522:
---

 Summary: counter upgrade dtest fails on 3.0 with JVM assertions 
disabled
 Key: CASSANDRA-10522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10522
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Andrew Hust
 Fix For: 3.0.0 rc2


{{counter_tests.TestCounters.upgrade_test}}
will fail when run on a cluster with JVM assertions disabled.  The test will 
hang when Cassandra throws the following exception:
{code}
java.lang.IllegalStateException: No match found
at java.util.regex.Matcher.group(Matcher.java:536) ~[na:1.8.0_60]
at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:52) 
~[main/:na]
at 
org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:399)
 ~[main/:na]
at 
org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:552)
 ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:571)
 ~[main/:na]
at 
org.apache.cassandra.service.StartupChecks$7.execute(StartupChecks.java:274) 
~[main/:na]
at 
org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:103) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:169) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:676) 
[main/:na]
{code}

This test passes with/without JVM assertions on C* 2.2 and passes on 3.0 
when assertions are enabled.

Ran against:
apache/cassandra-2.2: {{7cab3272455bdd16b639c510416ae339a8613414}}
apache/cassandra-3.0: {{f21c888510b0dbbea1a63459476f2dc54093de63}}

Ran with cmd:
{{JVM_EXTRA_OPTS=-da PRINT_DEBUG=true nosetests -xsv 
counter_tests.TestCounters.upgrade_test}}







[jira] [Resolved] (CASSANDRA-10521) snapshot dtests failing on 3.0 when JVM assertions disabled

2015-10-14 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust resolved CASSANDRA-10521.
-
Resolution: Duplicate

> snapshot dtests failing on 3.0 when JVM assertions disabled
> ---
>
> Key: CASSANDRA-10521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10521
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Andrew Hust
> Fix For: 3.0.0 rc2
>
>
> {{snapshot_test.TestArchiveCommitlog.test_archive_commitlog_with_active_commitlog}}
> and
> {{snapshot_test.TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog}}
> will fail when run on a cluster with JVM assertions disabled.  The tests will 
> hang when Cassandra throws the following exception (in both cases):
> {code}
> java.lang.IllegalStateException: No match found
>   at java.util.regex.Matcher.group(Matcher.java:536) ~[na:1.8.0_60]
>   at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:52) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:399)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:552)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:571)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StartupChecks$7.execute(StartupChecks.java:274) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:103) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:169) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:676) 
> [main/:na]
> {code}
> These tests both pass with/without JVM assertions on C* 2.2 and pass on 3.0 
> when assertions are enabled.
> Ran against:
> apache/cassandra-2.2: {{7cab3272455bdd16b639c510416ae339a8613414}}
> apache/cassandra-3.0: {{f21c888510b0dbbea1a63459476f2dc54093de63}}
> Ran with cmd:
> {{JVM_EXTRA_OPTS=-da PRINT_DEBUG=true nosetests -xsv 
> snapshot_test.py:TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog}}





[jira] [Commented] (CASSANDRA-10452) Fix flapping resummable_bootstrap_test

2015-10-12 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953215#comment-14953215
 ] 

Andrew Hust commented on CASSANDRA-10452:
-

{{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} and 
{{bootstrap_test.py:TestBootstrap.bootstrap_with_reset_bootstrap_state_test}} 
will consistently fail if JVM assertions are disabled.

{{resumable_bootstrap_test}} will fail with this exception:
{code}
ERROR [main] 2015-10-12 09:21:58,088 CassandraDaemon.java:689 - Exception 
encountered during startup
java.lang.IllegalStateException: No match found
at java.util.regex.Matcher.group(Matcher.java:536) ~[na:1.8.0_60]
at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:52) 
~[main/:na]
at 
org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:399)
 ~[main/:na]
at 
org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:552)
 ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:571)
 ~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:239) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:548) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:676) 
[main/:na]
{code}

{{bootstrap_with_reset_bootstrap_state_test}} will fail with this exception:
{code}
ERROR [STREAM-IN-/127.0.0.1] 2015-10-12 09:26:04,777 StreamSession.java:520 - 
[Stream #8d0ebd40-70f5-11e5-92fe-517de766f243] Streaming error occurred
java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.8.0_60]
at 
org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:142)
 ~[main/:na]
at 
org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:366)
 ~[main/:na]
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:376)
 ~[main/:na]
at 
org.apache.cassandra.streaming.StreamReader$StreamDeserializer.newPartition(StreamReader.java:198)
 ~[main/:na]
at 
org.apache.cassandra.streaming.StreamReader.writePartition(StreamReader.java:168)
 ~[main/:na]
at 
org.apache.cassandra.streaming.StreamReader.read(StreamReader.java:109) 
~[main/:na]
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59)
 ~[main/:na]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
 ~[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
{code}

Ran against:
apache/cassandra-3.0: {{345be2057383895f8d4c6b07a89357a0e87441be}}

Ran with command:
{{KEEP_LOGS=true JVM_EXTRA_OPTS=-da PRINT_DEBUG=true nosetests -xsv 
bootstrap_test.py:TestBootstrap.bootstrap_with_reset_bootstrap_state_test}}

> Fix flapping resummable_bootstrap_test
> --
>
> Key: CASSANDRA-10452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10452
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
> Fix For: 3.0.0 rc2
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} has been 
> flapping on CassCI lately:
> http://cassci.datastax.com/view/trunk/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> I have not been able to reproduce on OpenStack. I'm assigning [~yukim] for 
> now, but feel free to reassign.





[jira] [Updated] (CASSANDRA-10452) Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test

2015-10-12 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10452:

Summary: Fix resummable_bootstrap_test and 
bootstrap_with_reset_bootstrap_state_test  (was: Fix flapping 
resummable_bootstrap_test)

> Fix resummable_bootstrap_test and bootstrap_with_reset_bootstrap_state_test
> ---
>
> Key: CASSANDRA-10452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10452
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Yuki Morishita
> Fix For: 3.0.0 rc2
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} has been 
> flapping on CassCI lately:
> http://cassci.datastax.com/view/trunk/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> I have not been able to reproduce on OpenStack. I'm assigning [~yukim] for 
> now, but feel free to reassign.





[jira] [Commented] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-10-12 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953490#comment-14953490
 ] 

Andrew Hust commented on CASSANDRA-10360:
-

Confirmed that the above upgrade test script now passes without exceptions.

Ran with:
apache/cassandra-2.1: {{4acc3a69d319b0e7e00cbd37b27e988ebfa4df4f}}
apache/cassandra-2.2: {{0c051a46c54fd1a2f151e1a68f4556faca02be8d}}
pcmanus/10360: {{6a56057a4eaab01480fba6292c5a59d4c3d62c49}}

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> ---
>
> Key: CASSANDRA-10360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
> Attachments: size_estimates.2.tar.bz2, size_estimates.tar.bz2, 
> system.log.2.bz2, system.log.bz2
>
>
> When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will 
> get stuck in a compaction loop throwing:
> {code}
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
> at 
> org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
> at 
> org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The issue only occurs when upgrading through 2.2; going directly from 2.1 -> 3.0 
> does not surface it.  It can be reproduced with the following script (note: the 
> JDK-switching lines will need to be altered if you aren't on OS X):
> {code}
> #!/bin/sh
> echo "using java7 for cassandra-2.1 instance"
> export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)
> ccm stop
> ccm remove upgrade_nodes
> ccm create -n 1 -v git:cassandra-2.1 

[jira] [Commented] (CASSANDRA-10434) Problem upgrading to 3.0 with UDA

2015-10-07 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947635#comment-14947635
 ] 

Andrew Hust commented on CASSANDRA-10434:
-

Confirmed the patch fixes the post-upgrade exception and the UDA performs as 
expected after the upgrade.  Once this has been merged I'll update the upgrade 
dtest to include UDAs in its setup and verification.

Ran with:
apache/cassandra-2.2 {{be89dae3ecfd98b2170732c45d7f95807d5c19af}}
snazy/10434-uda-migration-3.0 {{b83a088f252e906faa9924def8e24997e072c109}}

> Problem upgrading to 3.0 with UDA
> -
>
> Key: CASSANDRA-10434
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10434
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
> Fix For: 3.0.0 rc2
>
>
> Copy-pasting from [~nutbunnies] comment on CASSANDRA-9756:
> {quote}
> Upgrading from 2.2 to 3.0 with a UDA defined will throw the exception below, 
> and the node will fail to start on 3.0.
> Used:
> 2.2: {{ae9b7e05222b2a25eda5618cf9eb17103e4d6d8b}}
> 3.0: {{5c2912d1ce95aacdacb59ccc840b12cd9aa0c8f8}}
> {noformat}
> org.apache.cassandra.exceptions.UnrecognizedEntityException: Undefined name 
> function_name in where clause ('function_name = ?')
> at 
> org.apache.cassandra.cql3.Relation.toColumnDefinition(Relation.java:259) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:160)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:137) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:151)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:817)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:764)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:752)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:504)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:241)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:336)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.query(LegacySchemaMigrator.java:882)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readAggregateMetadata(LegacySchemaMigrator.java:849)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readAggregate(LegacySchemaMigrator.java:830)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readAggregates$216(LegacySchemaMigrator.java:823)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_60]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readAggregates(LegacySchemaMigrator.java:823)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:166)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$207(LegacySchemaMigrator.java:154)
>  ~[main/:na]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_60]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:154)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:542)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:668) 
> [main/:na]
> {noformat}
> Can be reproduced with:
> {noformat}
> ccm stop
> ccm remove uda_upgrade
> ccm create -n 1 -v git:cassandra-2.2 uda_upgrade
> ccm updateconf 'enable_user_defined_functions: true'
> ccm start
> cat << EOF | ccm node1 cqlsh
> create keyspace ks WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> USE ks;
> CREATE FUNCTION func_1(current int, candidate int)
> CALLED ON NULL INPUT
> RETURNS int LANGUAGE java AS
> 'if (current == null) return candidate; else return 
> Math.max(current, candidate);';
> CREATE AGGREGATE agg_1(int)
> SFUNC func_1
> STYPE int
> INITCOND null;
> EOF
> sleep 10
> echo "Draining all nodes"
> ccm node1 nodetool drain
> ccm stop
> 

[jira] [Updated] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-10-07 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10360:

Attachment: size_estimates.2.tar.bz2
system.log.2.bz2

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> ---
>
> Key: CASSANDRA-10360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
> Attachments: size_estimates.2.tar.bz2, size_estimates.tar.bz2, 
> system.log.2.bz2, system.log.bz2
>
>
> When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will 
> get stuck in a compaction loop throwing:
> {code}
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
> at 
> org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
> at 
> org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It will only occur when upgrading through 2.2; going directly from 2.1 -> 3.0 
> will not surface the issue.  It can be reproduced with the following script 
> (note: the JDK-switching commands will need to be adjusted if you aren't on OS X):
> {code}
> #!/bin/sh
> echo "using java7 for cassandra-2.1 instance"
> export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)
> ccm stop
> ccm remove upgrade_nodes
> ccm create -n 1 -v git:cassandra-2.1 upgrade_nodes
> ccm start
> ccm node1 stress write n=500K -rate threads=4 -mode native cql3
> sleep 10
> for cver in 3.0
> do
> echo "Draining all nodes"
> ccm node1 nodetool drain
> ccm stop
> echo "switching to java 8"
> export 

[jira] [Commented] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-10-07 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947144#comment-14947144
 ] 

Andrew Hust commented on CASSANDRA-10360:
-

I ran your patch through a 2.1->2.2->3.0 upgrade and it failed with a different 
exception.  I've attached the system.log and the size_estimates sstables from 
this run.

Ran with:
apache/cassandra-2.1 {{1b08cbd817dea379ea84175381d3ef151fe65681}}
apache/cassandra-2.2: {{be89dae3ecfd98b2170732c45d7f95807d5c19af}}
pcmanus/10360: {{8f7d524efcbf0157697e67a3b4ff4b883b441a55}}

{code}
java.lang.AssertionError
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.checkReady(UnfilteredDeserializer.java:346)
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.nextIsRow(UnfilteredDeserializer.java:367)
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:384)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
at 
org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
at 
org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
at 
org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
at 
org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:110)
at 
org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:148)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
at 
org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
at 
org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:112)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> 

[jira] [Assigned] (CASSANDRA-10391) sstableloader fails with client SSL enabled with non-standard keystore/truststore location

2015-10-02 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust reassigned CASSANDRA-10391:
---

Assignee: Andrew Hust

> sstableloader fails with client SSL enabled with non-standard 
> keystore/truststore location
> --
>
> Key: CASSANDRA-10391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10391
> Project: Cassandra
>  Issue Type: Bug
> Environment: [cqlsh 4.1.1 | Cassandra 2.0.14.425 | DSE 4.6.6 | CQL 
> spec 3.1.1 | Thrift protocol 19.39.0]
> [cqlsh 5.0.1 | Cassandra 2.1.8.689 | DSE 4.7.3 | CQL spec 3.2.0 | Native 
> protocol v3]
>Reporter: Jon Moses
>Assignee: Andrew Hust
>
> If client SSL is enabled, sstableloader is unable to access the keystore and 
> truststore when they are not in the expected locations.  I can reproduce this 
> issue both by providing {{-f /path/to/cassandra.yaml}} and by manually passing 
> the {{-ks}} flag with the proper path to the keystore.
> For example:
> {noformat}
> client_encryption_options:
> enabled: true
> keystore: /var/tmp/.keystore
> {noformat}
> {noformat}
> # sstableloader -d 172.31.2.240,172.31.2.241 -f 
> /etc/dse/cassandra/cassandra.yaml Keyspace1/Standard1/
> Could not retrieve endpoint ranges:
> java.io.FileNotFoundException: /usr/share/dse/conf/.keystore
> Run with --debug to get full stack trace or --help to get help.
> #
> # sstableloader -d 172.31.2.240,172.31.2.241 -ks /var/tmp/.keystore 
> Keyspace1/Standard1/
> Could not retrieve endpoint ranges:
> java.io.FileNotFoundException: /usr/share/dse/conf/.keystore
> Run with --debug to get full stack trace or --help to get help.
> #
> {noformat}
> The full stack is:
> {noformat}
> # sstableloader -d 172.31.2.240,172.31.2.241 -f 
> /etc/dse/cassandra/cassandra.yaml --debug Keyspace1/Standard1/
> Could not retrieve endpoint ranges:
> java.io.FileNotFoundException: /usr/share/dse/conf/.keystore
> java.lang.RuntimeException: Could not retrieve endpoint ranges:
>   at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:283)
>   at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:144)
>   at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:95)
> Caused by: java.io.FileNotFoundException: /usr/share/dse/conf/.keystore
>   at 
> com.datastax.bdp.transport.client.TClientSocketFactory.getSSLSocket(TClientSocketFactory.java:128)
>   at 
> com.datastax.bdp.transport.client.TClientSocketFactory.openSocket(TClientSocketFactory.java:114)
>   at 
> com.datastax.bdp.transport.client.TDseClientTransportFactory.openTransport(TDseClientTransportFactory.java:186)
>   at 
> com.datastax.bdp.transport.client.TDseClientTransportFactory.openTransport(TDseClientTransportFactory.java:120)
>   at 
> com.datastax.bdp.transport.client.TDseClientTransportFactory.openTransport(TDseClientTransportFactory.java:111)
>   at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.createThriftClient(BulkLoader.java:302)
>   at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:254)
>   ... 2 more
> root@ip-172-31-2-240:/tmp/foo#
> {noformat}
> If I copy the keystore to the expected location, I get the same error with 
> the truststore.
> {noformat}
> # sstableloader -d 172.31.2.240,172.31.2.241 -f 
> /etc/dse/cassandra/cassandra.yaml --debug Keyspace1/Standard1/
> Could not retrieve endpoint ranges:
> java.io.FileNotFoundException: /usr/share/dse/conf/.truststore
> java.lang.RuntimeException: Could not retrieve endpoint ranges:
>   at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:283)
>   at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:144)
>   at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:95)
> Caused by: java.io.FileNotFoundException: /usr/share/dse/conf/.truststore
>   at 
> com.datastax.bdp.transport.client.TClientSocketFactory.getSSLSocket(TClientSocketFactory.java:130)
>   at 
> com.datastax.bdp.transport.client.TClientSocketFactory.openSocket(TClientSocketFactory.java:114)
>   at 
> com.datastax.bdp.transport.client.TDseClientTransportFactory.openTransport(TDseClientTransportFactory.java:186)
>   at 
> com.datastax.bdp.transport.client.TDseClientTransportFactory.openTransport(TDseClientTransportFactory.java:120)
>   at 
> com.datastax.bdp.transport.client.TDseClientTransportFactory.openTransport(TDseClientTransportFactory.java:111)
>   at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.createThriftClient(BulkLoader.java:302)
>   at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:254)
>   ... 2 more
> #
> {noformat}
> If I copy the truststore, it finds them 

[jira] [Updated] (CASSANDRA-10391) sstableloader fails with client SSL enabled with non-standard keystore/truststore location

2015-10-02 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10391:

Reproduced In: 2.1.8, 2.0.14

> sstableloader fails with client SSL enabled with non-standard 
> keystore/truststore location
> --
>
> Key: CASSANDRA-10391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10391
> Project: Cassandra
>  Issue Type: Bug
> Environment: [cqlsh 4.1.1 | Cassandra 2.0.14.425 | DSE 4.6.6 | CQL 
> spec 3.1.1 | Thrift protocol 19.39.0]
> [cqlsh 5.0.1 | Cassandra 2.1.8.689 | DSE 4.7.3 | CQL spec 3.2.0 | Native 
> protocol v3]
>Reporter: Jon Moses
>Assignee: Andrew Hust
>

[jira] [Resolved] (CASSANDRA-9456) while starting cassandra using cassandra -f ; encountered an errror ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]

2015-10-02 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust resolved CASSANDRA-9456.

Resolution: Cannot Reproduce

> while starting cassandra using cassandra -f ; encountered an errror ERROR 
> 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> ---
>
> Key: CASSANDRA-9456
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9456
> Project: Cassandra
>  Issue Type: Bug
>  Components: Config
> Environment: ubuntu14.10
>Reporter: naresh
> Fix For: 3.x
>
>
> I am using OpenJDK 7 and built Cassandra successfully from the source code at 
> github (https://github.com/apache/cassandra.git).
>  
> After the successful build I set the path for Cassandra and tried to start it 
> with cassandra -f; below is the error encountered during startup:
> {code}
> INFO  11:46:42 Token metadata:
> INFO  11:46:42 Enqueuing flush of local: 653 (0%) on-heap, 0 (0%) off-heap
> INFO  11:46:42 Writing Memtable-local@1257824677(110 serialized bytes, 3 ops, 
> 0%/0% of on/off-heap limit)
> ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:82) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:274)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:288)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:75)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:168) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:74)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:107)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:84)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:396)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:343)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:328) 
> ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1085)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_79]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9456) while starting cassandra using cassandra -f ; encountered an errror ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]

2015-10-02 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941829#comment-14941829
 ] 

Andrew Hust commented on CASSANDRA-9456:


I'm unable to reproduce this.  I was able to successfully build and run using 
OpenJDK on Ubuntu.  If you still see this error, please reopen with the OpenJDK 
version and the git SHA you're building.

> while starting cassandra using cassandra -f ; encountered an errror ERROR 
> 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> ---
>
> Key: CASSANDRA-9456
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9456
> Project: Cassandra
>  Issue Type: Bug
>  Components: Config
> Environment: ubuntu14.10
>Reporter: naresh
> Fix For: 3.x
>
>





[jira] [Commented] (CASSANDRA-10233) IndexOutOfBoundsException in HintedHandOffManager

2015-10-01 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14939903#comment-14939903
 ] 

Andrew Hust commented on CASSANDRA-10233:
-

I'll give the patch a look.  I'm also curious how the broken hint got in 
there in the first place.  I'd like to try to reproduce it; can you give more 
details of your upgrade procedure?
- how many nodes?
- was it a rolling upgrade?
- was there a JDK change?
- roughly how long was each node unavailable?
- gc_grace value of the table with the broken hint
- values of max_hint_window_in_ms, max_hints_delivery_threads, 
hinted_handoff_enabled, hinted_handoff_throttle_in_kb in cassandra.yaml
- what type of mutation was the hint without a target_id?
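
For context on why a hint without a target_id matters: the quoted trace fails in {{UUIDGen.getUUID}}, which reads two 8-byte longs (16 bytes total) out of a ByteBuffer via absolute {{getLong}}.  If the hint's target_id buffer is shorter than 16 bytes, the read runs past the buffer limit and raises IndexOutOfBoundsException.  A minimal illustrative sketch of that failure mode ({{readUuidStyle}} is a hypothetical stand-in, not Cassandra's actual code):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class ShortUuidBuffer {
    // Mimics UUIDGen.getUUID: read the most/least significant longs of a UUID
    // from the buffer's current position. An absolute getLong(index) checks that
    // index + 8 <= limit, so a buffer shorter than 16 bytes throws
    // IndexOutOfBoundsException, matching the stack trace above.
    static String readUuidStyle(ByteBuffer raw) {
        try {
            long msb = raw.getLong(raw.position());
            long lsb = raw.getLong(raw.position() + 8);
            return new UUID(msb, lsb).toString();
        } catch (IndexOutOfBoundsException e) {
            return "IndexOutOfBoundsException";
        }
    }

    public static void main(String[] args) {
        ByteBuffer full = ByteBuffer.allocate(16);      // a well-formed 16-byte UUID
        ByteBuffer truncated = ByteBuffer.allocate(8);  // too short to hold a UUID
        System.out.println(readUuidStyle(full));        // prints the all-zero UUID
        System.out.println(readUuidStyle(truncated));   // prints IndexOutOfBoundsException
    }
}
```

This is why the mutation type of the broken hint is interesting: whatever wrote it left fewer than 16 bytes where the target_id UUID was expected.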

> IndexOutOfBoundsException in HintedHandOffManager
> -
>
> Key: CASSANDRA-10233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10233
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.2.0
>Reporter: Omri Iluz
>Assignee: Andrew Hust
> Attachments: cassandra-2.1.8-10233-v2.txt, cassandra-2.1.8-10233.txt
>
>
> After upgrading our cluster to 2.2.0, the following error started appearing 
> exactly every 10 minutes on every server in the cluster:
> {noformat}
> INFO  [CompactionExecutor:1381] 2015-08-31 18:31:55,506 
> CompactionTask.java:142 - Compacting (8e7e1520-500e-11e5-b1e3-e95897ba4d20) 
> [/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/la-540-big-Data.db:level=0,
>  ]
> INFO  [CompactionExecutor:1381] 2015-08-31 18:31:55,599 
> CompactionTask.java:224 - Compacted (8e7e1520-500e-11e5-b1e3-e95897ba4d20) 1 
> sstables to 
> [/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/la-541-big,] 
> to level=0.  1,544,495 bytes to 1,544,495 (~100% of original) in 93ms = 
> 15.838121MB/s.  0 total partitions merged to 4.  Partition merge counts were 
> {1:4, }
> ERROR [HintedHandoff:1] 2015-08-31 18:31:55,600 CassandraDaemon.java:182 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.IndexOutOfBoundsException: null
>   at java.nio.Buffer.checkIndex(Buffer.java:538) ~[na:1.7.0_79]
>   at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:410) 
> ~[na:1.7.0_79]
>   at org.apache.cassandra.utils.UUIDGen.getUUID(UUIDGen.java:106) 
> ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:515)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:88)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:168)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> {noformat}





[jira] [Commented] (CASSANDRA-10420) Cassandra server should throw meaningfull exception when thrift_framed_transport_size_in_mb reached

2015-10-01 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14939914#comment-14939914
 ] 

Andrew Hust commented on CASSANDRA-10420:
-

Can you supply some additional information?  What version of Cassandra and 
which language and driver version are you using?  Thanks.

> Cassandra server should throw meaningfull exception when 
> thrift_framed_transport_size_in_mb reached
> ---
>
> Key: CASSANDRA-10420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10420
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Yuan Yao
>
> In Cassandra's configuration, we set "thrift_framed_transport_size_in_mb" to 
> 15.
> When sending data larger than some threshold value, a java.net.SocketException: 
> Connection reset is thrown from the server.  This exception doesn't carry a 
> meaningful message, so the client can't detect what's wrong with the request.
> Please throw a meaningful exception to the client in this case.
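
The request in the quoted issue amounts to a bounds check on the incoming frame 
length that raises a descriptive error instead of letting the connection drop.  
A hypothetical sketch of that shape (names and wiring are illustrative, not the 
actual Thrift transport code):

```java
public class FrameCheck {
    // thrift_framed_transport_size_in_mb = 15, expressed in bytes
    static final int MAX_FRAME_BYTES = 15 * 1024 * 1024;

    // Validate a frame length before reading the frame body.
    static void validateFrame(int frameLength) {
        if (frameLength > MAX_FRAME_BYTES) {
            // Raise a descriptive error instead of silently resetting the
            // connection, so the client can tell what went wrong.
            throw new IllegalArgumentException(
                "Frame of " + frameLength + " bytes exceeds "
                + "thrift_framed_transport_size_in_mb (" + MAX_FRAME_BYTES + " bytes)");
        }
    }

    public static void main(String[] args) {
        validateFrame(1024);                    // small frame passes
        try {
            validateFrame(MAX_FRAME_BYTES + 1); // oversized frame is rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```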





[jira] [Commented] (CASSANDRA-10233) IndexOutOfBoundsException in HintedHandOffManager

2015-10-01 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14940540#comment-14940540
 ] 

Andrew Hust commented on CASSANDRA-10233:
-

[~eitikimura] as [~pauloricardomg] mentioned, truncating the hints table will 
get things rolling again; just make sure to repair in the near future to prevent 
possible data loss or zombie records related to the mutations truncated out of 
the hints table.

[~fhsgoncalves] thanks, that helps a ton.  A couple more questions to help 
reproduce it:
- Just want to make sure the order of operations is correct: the error showed up 
when you added additional nodes and then persisted after upgrades?  
- Are all the new nodes in the second rack?  
- Are you running with vnodes?
- Are your keyspaces set with a replication strategy of NetworkTopologyStrategy, 
SimpleStrategy, or something else?
- In cassandra.yaml, what is the value of endpoint_snitch?

> IndexOutOfBoundsException in HintedHandOffManager
> -
>
> Key: CASSANDRA-10233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10233
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.2.0
>Reporter: Omri Iluz
>Assignee: Andrew Hust
> Attachments: cassandra-2.1.8-10233-v2.txt, cassandra-2.1.8-10233.txt
>
>
> After upgrading our cluster to 2.2.0, the following error started showing 
> exactly every 10 minutes on every server in the cluster:
> {noformat}
> INFO  [CompactionExecutor:1381] 2015-08-31 18:31:55,506 
> CompactionTask.java:142 - Compacting (8e7e1520-500e-11e5-b1e3-e95897ba4d20) 
> [/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/la-540-big-Data.db:level=0,
>  ]
> INFO  [CompactionExecutor:1381] 2015-08-31 18:31:55,599 
> CompactionTask.java:224 - Compacted (8e7e1520-500e-11e5-b1e3-e95897ba4d20) 1 
> sstables to 
> [/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/la-541-big,] 
> to level=0.  1,544,495 bytes to 1,544,495 (~100% of original) in 93ms = 
> 15.838121MB/s.  0 total partitions merged to 4.  Partition merge counts were 
> {1:4, }
> ERROR [HintedHandoff:1] 2015-08-31 18:31:55,600 CassandraDaemon.java:182 - 
> Exception in thread Thread[HintedHandoff:1,1,main]
> java.lang.IndexOutOfBoundsException: null
>   at java.nio.Buffer.checkIndex(Buffer.java:538) ~[na:1.7.0_79]
>   at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:410) 
> ~[na:1.7.0_79]
>   at org.apache.cassandra.utils.UUIDGen.getUUID(UUIDGen.java:106) 
> ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:515)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:88)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:168)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> {noformat}





[jira] [Comment Edited] (CASSANDRA-9756) Cleanup UDA code after 6717

2015-10-01 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14940503#comment-14940503
 ] 

Andrew Hust edited comment on CASSANDRA-9756 at 10/1/15 10:09 PM:
--

Not sure if this should be a separate ticket, but upgrading from 2.2 to 3.0 with 
a UDA defined will throw the exception below and the node will fail to start on 
3.0.

Used:
2.2: {{ae9b7e05222b2a25eda5618cf9eb17103e4d6d8b}}
3.0: {{5c2912d1ce95aacdacb59ccc840b12cd9aa0c8f8}}

{code}
org.apache.cassandra.exceptions.UnrecognizedEntityException: Undefined name 
function_name in where clause ('function_name = ?')
at 
org.apache.cassandra.cql3.Relation.toColumnDefinition(Relation.java:259) 
~[main/:na]
at 
org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:160)
 ~[main/:na]
at org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:137) 
~[main/:na]
at 
org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:151)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:817)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:764)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:752)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:504) 
~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:241)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:336)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.query(LegacySchemaMigrator.java:882)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readAggregateMetadata(LegacySchemaMigrator.java:849)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readAggregate(LegacySchemaMigrator.java:830)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readAggregates$216(LegacySchemaMigrator.java:823)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_60]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readAggregates(LegacySchemaMigrator.java:823)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:166)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$207(LegacySchemaMigrator.java:154)
 ~[main/:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_60]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:154)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
 ~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:542) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:668) 
[main/:na]
{code}

Can be reproduced with:
{code}
ccm stop
ccm remove uda_upgrade
ccm create -n 1 -v git:cassandra-2.2 uda_upgrade
ccm updateconf 'enable_user_defined_functions: true'
ccm start

cat << EOF | ccm node1 cqlsh
create keyspace ks WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
USE ks;
CREATE FUNCTION func_1(current int, candidate int)
CALLED ON NULL INPUT
RETURNS int LANGUAGE java AS
'if (current == null) return candidate; else return 
Math.max(current, candidate);';
CREATE AGGREGATE agg_1(int)
SFUNC func_1
STYPE int
INITCOND null;
EOF

sleep 10

echo "Draining all nodes"
ccm node1 nodetool drain
ccm stop

echo "Upgrading to git:cassandra-3.0"
ccm setdir -v git:cassandra-3.0
ccm start
echo "Sleeping for version migrations"
sleep 15
ccm checklogerror

ccm stop
{code}
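
For reference, the aggregate defined in the script above just folds {{func_1}} 
over the input rows starting from the {{INITCOND null}}.  The equivalent 
plain-Java fold (a hypothetical harness, not Cassandra's UDA machinery):

```java
import java.util.Arrays;
import java.util.List;

public class AggSketch {
    // Same body as func_1 in the CQL above: null-safe max.
    static Integer func1(Integer current, Integer candidate) {
        if (current == null) return candidate;
        else return Math.max(current, candidate);
    }

    // agg_1: fold the state function func_1 over the rows, starting from
    // the INITCOND (null).
    static Integer agg1(List<Integer> rows) {
        Integer state = null;
        for (Integer candidate : rows) {
            state = func1(state, candidate);
        }
        return state;
    }

    public static void main(String[] args) {
        System.out.println(agg1(Arrays.asList(3, 9, 4))); // 9
    }
}
```

So the failing migration isn't about the aggregate's semantics; the UDA itself 
is trivial, which points at the legacy schema read path.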

//CC [~enigmacurry]


was (Author: nutbunnies):
Not sure if this should be a separate ticket, but upgrading from 2.2 to 3.0 with 
a UDA defined will throw the exception below and the node will fail to start on 
3.0.

Used:
2.2: {{ae9b7e05222b2a25eda5618cf9eb17103e4d6d8b}}
3.0: {{5c2912d1ce95aacdacb59ccc840b12cd9aa0c8f8}}

{code}
org.apache.cassandra.exceptions.UnrecognizedEntityException: Undefined name 
function_name in where clause ('function_name = ?')
at 
org.apache.cassandra.cql3.Relation.toColumnDefinition(Relation.java:259) 
~[main/:na]
at 
org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:160)
 ~[main/:na]
at 

[jira] [Commented] (CASSANDRA-9756) Cleanup UDA code after 6717

2015-10-01 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14940503#comment-14940503
 ] 

Andrew Hust commented on CASSANDRA-9756:


Not sure if this should be a separate ticket, but upgrading from 2.2 to 3.0 with 
a UDA defined will throw the exception below and the node will fail to start on 
3.0.

Used:
2.2: {{ae9b7e05222b2a25eda5618cf9eb17103e4d6d8b}}
3.0: {{5c2912d1ce95aacdacb59ccc840b12cd9aa0c8f8}}

{code}
org.apache.cassandra.exceptions.UnrecognizedEntityException: Undefined name 
function_name in where clause ('function_name = ?')
at 
org.apache.cassandra.cql3.Relation.toColumnDefinition(Relation.java:259) 
~[main/:na]
at 
org.apache.cassandra.cql3.SingleColumnRelation.newEQRestriction(SingleColumnRelation.java:160)
 ~[main/:na]
at org.apache.cassandra.cql3.Relation.toRestriction(Relation.java:137) 
~[main/:na]
at 
org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:151)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:817)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:764)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:752)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:504) 
~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:241)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:336)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.query(LegacySchemaMigrator.java:882)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readAggregateMetadata(LegacySchemaMigrator.java:849)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readAggregate(LegacySchemaMigrator.java:830)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readAggregates$216(LegacySchemaMigrator.java:823)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_60]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readAggregates(LegacySchemaMigrator.java:823)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:166)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$207(LegacySchemaMigrator.java:154)
 ~[main/:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_60]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:154)
 ~[main/:na]
at 
org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
 ~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:542) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:668) 
[main/:na]
{code}

Can be reproduced with:
{code}
ccm stop
ccm remove uda_upgrade
ccm create -n 1 -v git:cassandra-2.2 uda_upgrade
ccm updateconf 'enable_user_defined_functions: true'
ccm start

cat << EOF | ccm node1 cqlsh
create keyspace ks WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
USE ks;
CREATE FUNCTION func_1(current int, candidate int)
CALLED ON NULL INPUT
RETURNS int LANGUAGE java AS
'if (current == null) return candidate; else return 
Math.max(current, candidate);';
CREATE AGGREGATE agg_1(int)
SFUNC func_1
STYPE int
INITCOND null;
EOF

sleep 10

echo "Draining all nodes"
ccm node1 nodetool drain
ccm stop

echo "Upgrading to git:cassandra-3.0"
ccm setdir -v git:cassandra-3.0
ccm start
echo "Sleeping for version migrations"
sleep 15
ccm checklogerror

ccm stop
{code}

//CC [~enigmacurry]

> Cleanup UDA code after 6717
> ---
>
> Key: CASSANDRA-9756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9756
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> After CASSANDRA-6717 has landed, there should be some cleanup of UDF/UDA code 
> wrt load from schema tables and handling broken functions.
> /cc [~iamaleksey] 





[jira] [Updated] (CASSANDRA-9756) Cleanup UDA code after 6717

2015-10-01 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-9756:
---
Fix Version/s: 3.0.0 rc2

> Cleanup UDA code after 6717
> ---
>
> Key: CASSANDRA-9756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9756
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x, 3.0.0 rc2
>
>
> After CASSANDRA-6717 has landed, there should be some cleanup of UDF/UDA code 
> wrt load from schema tables and handling broken functions.
> /cc [~iamaleksey] 





[jira] [Updated] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-30 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10360:

Description: 
When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will get 
stuck in a compaction loop throwing:
{code}
java.lang.UnsupportedOperationException
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
at 
org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
at 
org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

It will only occur when upgrading through 2.2.  Going from 2.1 -> 3.0 will not 
surface the issue.  It can be reproduced with the following (note: the 
JDK-switching steps will need to be altered if you aren't on OS X):
{code}
#!/bin/sh

echo "using java7 for cassandra-2.1 instance"
export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)

ccm stop
ccm remove upgrade_nodes
ccm create -n 1 -v git:cassandra-2.1 upgrade_nodes
ccm start
ccm node1 stress write n=500K -rate threads=4 -mode native cql3
sleep 10

for cver in 2.2 3.0
do
echo "Draining all nodes"
ccm node1 nodetool drain
ccm stop

echo "switching to java 8"
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)

echo "Upgrading to git:cassandra-$cver"
ccm setdir -v git:cassandra-$cver
ccm start
echo "Sleeping for all version migrations"
sleep 30
echo "Upgrading sstables"
ccm node1 nodetool upgradesstables
ccm node1 nodetool upgradesstables system
ccm node1 nodetool compact system

ccm node1 stress write n=500K -rate threads=4 -mode native cql3
sleep 10
done

echo "***"
echo "instead of creating churn to cause compaction naturally, forcing 
compaction of the system keyspace"
echo "***"
ccm node1 nodetool compact system
ccm stop
{code}

The example uses {{nodetool compact system}} but it will also occur with 
{{nodetool upgradesstables system}}.  I'm puzzled by that since the script runs 
{{upgradesstables}} on each iteration.  Is the system keyspace not 

[jira] [Updated] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-30 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10360:

Description: 
When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will get 
stuck in a compaction loop throwing:
{code}
java.lang.UnsupportedOperationException
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
at 
org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
at 
org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

It will only occur when upgrading through 2.2.  Going from 2.1 -> 3.0 will not 
surface the issue.  It can be reproduced with the following (note: the 
JDK-switching steps will need to be altered if you aren't on OS X):
{code}
#!/bin/sh

echo "using java7 for cassandra-2.1 instance"
export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)

ccm stop
ccm remove upgrade_nodes
ccm create -n 1 -v git:cassandra-2.1 upgrade_nodes
ccm start
ccm node1 stress write n=500K -rate threads=4 -mode native cql3
sleep 10

for cver in 3.0
do
echo "Draining all nodes"
ccm node1 nodetool drain
ccm stop

echo "switching to java 8"
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)

echo "Upgrading to git:cassandra-$cver"
ccm setdir -v git:cassandra-$cver
ccm start
echo "Sleeping for all version migrations"
sleep 30
echo "Upgrading sstables"
ccm node1 nodetool upgradesstables
ccm node1 nodetool upgradesstables system
ccm node1 nodetool compact system

ccm node1 stress write n=500K -rate threads=4 -mode native cql3
sleep 10
done

echo "***"
echo "instead of creating churn to cause compaction naturally, forcing 
compaction of the system keyspace"
echo "***"
ccm node1 nodetool compact system
ccm stop
{code}

The example uses {{nodetool compact system}} but it will also occur with 
{{nodetool upgradesstables system}}.  I'm puzzled by that since the script runs 
{{upgradesstables}} on each iteration.  Is the system keyspace not affected by 

[jira] [Comment Edited] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-30 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14937461#comment-14937461
 ] 

Andrew Hust edited comment on CASSANDRA-10360 at 9/30/15 8:21 PM:
--

I have now reproduced this issue when upgrading from 2.1 to 3.0 while doing a 
rolling upgrade with cstar_perf.  Attached are the size_estimates sstables and 
system log from one of the nodes.  I'll work on getting an isolated script to 
reproduce.

UPDATE:  The above shell script has been updated to show the failure from 2.1 -> 
3.0.  It just needed more activity in the stress operation to trigger the 
issue.  Change the line {{for cver in 3.0}} to {{for cver in 2.2 3.0}} for the 
original upgrade path.  


was (Author: nutbunnies):
I have now reproduced this issue when upgrading from 2.1 to 3.0 while doing a 
rolling upgrade with cstar_perf.  Attached are the size_estimates sstables and 
system log from one of the nodes.  I'll work on getting an isolated script to 
reproduce.

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> ---
>
> Key: CASSANDRA-10360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
> Attachments: size_estimates.tar.bz2, system.log.bz2
>
>
> When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will 
> get stuck in a compaction loop throwing:
> {code}
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
> at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
> at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
> at org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
> at org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
> at org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
> at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
> at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}

[jira] [Updated] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-30 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10360:

Reproduced In: 3.0.0 rc1, 3.0 beta 2  (was: 3.0 beta 2)
 Priority: Blocker  (was: Major)

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> ---
>
> Key: CASSANDRA-10360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
>
> When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will 
> get stuck in a compaction loop throwing:
> {code}
> java.lang.UnsupportedOperationException
> at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
> at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
> at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
> at org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
> at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
> at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
> at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
> at org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
> at org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
> at org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
> at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
> at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It will only occur when upgrading through 2.2.  Going directly from 2.1 -> 3.0 will not 
> surface the issue.  It can be reproduced with the following (note: the JDK-switching 
> commands will need to be altered if you aren't on OS X):
> {code}
> #!/bin/sh
> echo "using java7 for cassandra-2.1 instance"
> export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)
> ccm stop
> ccm remove upgrade_nodes
> ccm create -n 1 -v git:cassandra-2.1 upgrade_nodes
> ccm start
> ccm node1 stress write n=5K -rate threads=4 -mode native cql3
> sleep 10
> for cver in 2.2 3.0
> do
>   echo "Draining all nodes"
>   ccm node1 nodetool drain
>   ccm stop
>   echo "switching to java 8"
>   export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
>   echo "Upgrading to git:cassandra-$cver"
>   ccm setdir 

[jira] [Updated] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-30 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10360:

Attachment: system.log.bz2
size_estimates.tar.bz2

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> ---
>
> Key: CASSANDRA-10360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
> Attachments: size_estimates.tar.bz2, system.log.bz2
>

[jira] [Commented] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-30 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14937461#comment-14937461
 ] 

Andrew Hust commented on CASSANDRA-10360:
-

I have now reproduced this issue when upgrading directly from 2.1 to 3.0 while doing a 
rolling upgrade with cstar_perf.  Attached are the size_estimates sstables and the system 
log from one of the nodes.  I'll work on getting an isolated script to reproduce.

> UnsupportedOperationException when compacting system.size_estimates after 2.1 
> -> 2.2 -> 3.0 upgrade
> ---
>
> Key: CASSANDRA-10360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.0.0 rc2
>
> Attachments: size_estimates.tar.bz2, system.log.bz2
>

[jira] [Updated] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-16 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10360:

Description: 
When upgrading from 2.1 -> 2.2 -> 3.0 the system.size_estimates table will get 
stuck in a compaction loop throwing:
{code}
java.lang.UnsupportedOperationException
at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.readNext(UnfilteredDeserializer.java:382)
at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:147)
at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.computeNext(SSTableSimpleIterator.java:96)
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:460)
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:503)
at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:363)
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at org.apache.cassandra.db.rows.WrappingUnfilteredRowIterator.hasNext(WrappingUnfilteredRowIterator.java:80)
at org.apache.cassandra.db.rows.AlteringUnfilteredRowIterator.hasNext(AlteringUnfilteredRowIterator.java:72)
at org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:100)
at org.apache.cassandra.db.partitions.PurgingPartitionIterator.hasNext(PurgingPartitionIterator.java:72)
at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:223)
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:539)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

It will only occur when upgrading through 2.2.  Going directly from 2.1 -> 3.0 will not 
surface the issue.  It can be reproduced with the following (note: the JDK-switching 
commands will need to be altered if you aren't on OS X):
{code}
#!/bin/sh

echo "using java7 for cassandra-2.1 instance"
export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)

ccm stop
ccm remove upgrade_nodes
ccm create -n 1 -v git:cassandra-2.1 upgrade_nodes
ccm start
ccm node1 stress write n=5K -rate threads=4 -mode native cql3
sleep 10

for cver in 2.2 3.0
do
  echo "Draining all nodes"
  ccm node1 nodetool drain
  ccm stop

  echo "switching to java 8"
  export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)

  echo "Upgrading to git:cassandra-$cver"
  ccm setdir -v git:cassandra-$cver
  ccm start
  echo "Sleeping to allow version migrations"
  sleep 30
  echo "Upgrading sstables"
  ccm node1 nodetool upgradesstables

  ccm node1 stress write n=5K -rate threads=4 -mode native cql3
  sleep 10
done

echo "***"
echo "instead of creating churn to cause compaction naturally, forcing 
compaction of the system keyspace"
echo "***"
ccm node1 nodetool compact system
ccm stop
{code}

The example uses {{nodetool compact system}}, but the issue will also occur with 
{{nodetool upgradesstables system}}.  I'm puzzled by that, since the script runs 
{{upgradesstables}} on each iteration.  Is the system keyspace not affected by 
the command when it is run without arguments?

Ran against:
2.1: {{e889ee408bec5330c312ff6b72a81a0012fdf2a5}}
2.2: 

[jira] [Created] (CASSANDRA-10360) UnsupportedOperationException when compacting system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade

2015-09-16 Thread Andrew Hust (JIRA)
Andrew Hust created CASSANDRA-10360:
---

 Summary: UnsupportedOperationException when compacting 
system.size_estimates after 2.1 -> 2.2 -> 3.0 upgrade
 Key: CASSANDRA-10360
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10360
 Project: Cassandra
  Issue Type: Bug
Reporter: Andrew Hust


[jira] [Created] (CASSANDRA-10350) cqlsh describe keyspace output no longer keeps indexes in sorted order

2015-09-15 Thread Andrew Hust (JIRA)
Andrew Hust created CASSANDRA-10350:
---

 Summary: cqlsh describe keyspace output no longer keeps indexes 
in sorted order
 Key: CASSANDRA-10350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10350
 Project: Cassandra
  Issue Type: Bug
Reporter: Andrew Hust


The cqlsh command {{describe keyspace}} no longer lists indexes in alphabetically 
sorted order.  This was caught by a dtest on 
[cassci|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/].

Tested on: C* {{b4544846def2bdd00ff841c7e3d9f2559620827b}}

Can be reproduced with the following:
{code}
ccm stop
ccm remove describe_order
ccm create -n 1 -v git:cassandra-2.2 describe_order
ccm start
cat << EOF | ccm node1 cqlsh
CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
USE ks1;
CREATE TABLE ks1.test (id int, col int, val text, val2 text, val3 text, PRIMARY 
KEY(id, col));
CREATE INDEX ix0 ON ks1.test (col);
CREATE INDEX ix3 ON ks1.test (val3);
CREATE INDEX ix2 ON ks1.test (val2);
CREATE INDEX ix1 ON ks1.test (val);
DESCRIBE KEYSPACE ks1;
EOF

ccm stop
ccm setdir -v git:cassandra-3.0
ccm start
sleep 15
cat << EOF | ccm node1 cqlsh
DESCRIBE KEYSPACE ks1;
EOF

ccm stop
{code}

Output on <= cassandra-2.2:
{code}
CREATE INDEX ix0 ON ks1.test (col);
CREATE INDEX ix1 ON ks1.test (val);
CREATE INDEX ix2 ON ks1.test (val2);
CREATE INDEX ix3 ON ks1.test (val3);
{code}

Output on cassandra-3.0:
{code}
CREATE INDEX ix2 ON ks1.test (val2);
CREATE INDEX ix3 ON ks1.test (val3);
CREATE INDEX ix0 ON ks1.test (col);
CREATE INDEX ix1 ON ks1.test (val);
{code}
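The difference between the two outputs above is purely one of ordering, so the 2.2 behavior can be recovered client-side by sorting the index DDL statements by index name before printing them. A minimal Python sketch of that ordering (the statement strings are taken from the 3.0 output above; the `index_name` helper is illustrative, not cqlsh code):

```python
import re

# CREATE INDEX statements in the order cassandra-3.0 emits them.
statements_30 = [
    "CREATE INDEX ix2 ON ks1.test (val2);",
    "CREATE INDEX ix3 ON ks1.test (val3);",
    "CREATE INDEX ix0 ON ks1.test (col);",
    "CREATE INDEX ix1 ON ks1.test (val);",
]

def index_name(stmt):
    # Pull the index name out of "CREATE INDEX <name> ON ...".
    return re.match(r"CREATE INDEX (\w+)", stmt).group(1)

# Sorting by index name reproduces the <= 2.2 ordering (ix0, ix1, ix2, ix3).
sorted_stmts = sorted(statements_30, key=index_name)
print("\n".join(sorted_stmts))
```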

//CC [~enigmacurry]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7771) Allow multiple 2ndary index on the same column

2015-09-03 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14729220#comment-14729220
 ] 

Andrew Hust commented on CASSANDRA-7771:


When creating separate indexes on both the keys and the values of a map column, the 
DDL for the table shown in cqlsh only contains the index on the values.  Both indexes 
are functional and queries return expected results.  When querying metadata 
from the python driver (3.0.0a2), both indexes are present, and using the 
function as_cql_query produces the correct DDL.  This might just be an out-of-date 
python lib in cqlsh.

Tested on C*: {{66b0e1d7889d0858753c6e364e77d86fe278eee4}}

Can be reproduced with the following shell commands and ccm:
{code}
ccm remove 2i_test
ccm create -n 1 -v git:cassandra-3.0 -s 2i_test
ccm start

cat << EOF | ccm node1 cqlsh
CREATE KEYSPACE index_test_ks WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
USE index_test_ks;
CREATE TABLE tbl1 (id uuid primary key, ds map<text, int>, c1 int);
INSERT INTO tbl1 (id, ds, c1) values (uuid(), {'foo': 1, 'bar': 2}, 1);
INSERT INTO tbl1 (id, ds, c1) values (uuid(), {'faz': 1, 'baz': 2}, 2);
CREATE INDEX ix_tbl1_map_values ON tbl1(ds);
CREATE INDEX ix_tbl1_map_keys ON tbl1(keys(ds));

SELECT * FROM tbl1 where ds contains 1;
SELECT * FROM tbl1 where ds contains key 'foo';

// ***
// DDL only has ix_tbl1_map_values present
// ***
DESC TABLE tbl1;

// ***
// system_schema.indexes is correct
// ***
SELECT * FROM system_schema.indexes;
EOF
ccm stop
{code}

Example output:
{code}
CREATE TABLE index_test_ks.tbl1 (
id uuid PRIMARY KEY,
c1 int,
ds map<text, int>
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE INDEX ix_tbl1_map_values ON index_test_ks.tbl1 (ds);


 keyspace_name | table_name | index_name         | index_type | options              | target_columns | target_type
---------------+------------+--------------------+------------+----------------------+----------------+-------------
 index_test_ks | tbl1       | ix_tbl1_map_keys   | COMPOSITES | {'index_keys': ''}   | {'ds'}         | COLUMN
 index_test_ks | tbl1       | ix_tbl1_map_values | COMPOSITES | {'index_values': ''} | {'ds'}         | COLUMN
{code}
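The missing DDL line is consistent with the historical one-index-per-column assumption discussed in CASSANDRA-7771: metadata keyed by target column can only hold one entry for `ds`, while system_schema.indexes keys by index name and keeps both. A minimal Python sketch of that difference (not Cassandra code; the dict layout is illustrative only):

```python
# Two secondary indexes targeting the same map column "ds",
# as in the system_schema.indexes output above.
indexes = [
    ("ix_tbl1_map_values", "ds", {"index_values": ""}),
    ("ix_tbl1_map_keys", "ds", {"index_keys": ""}),
]

by_column = {}  # old assumption: at most one index per column
by_name = {}    # schema-table view: one entry per index name

for name, column, options in indexes:
    by_column[column] = name        # second registration overwrites the first
    by_name[name] = (column, options)  # both survive

# Keying by column loses an index; keying by name retains both.
print(len(by_column), len(by_name))
```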

> Allow multiple 2ndary index on the same column
> --
>
> Key: CASSANDRA-7771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7771
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sylvain Lebresne
>Assignee: Sam Tunnicliffe
>  Labels: client-impacting
> Fix For: 3.0 beta 1
>
>
> Currently, the code assumes that we'll only have one 2ndary index per column. 
> This has been reasonable so far but stops being so with CASSANDRA-6382 (you 
> might want to index multiple fields of the same UDT column) and 
> CASSANDRA-7458 (you may want to have one "normal" index and multiple 
> functional indexes for the same column). So we should consider removing that 
> assumption in the code, which mainly lives in 2 places:
> # in the schema: each ColumnDefinition only has infos for one index. This 
> part should probably be tackled in CASSANDRA-6717 so I'm marking this issue 
> as a follow-up of CASSANDRA-6717.
> # in the 2ndary index API: this is the part I'm suggesting we fix in this 
> issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9459) SecondaryIndex API redesign

2015-09-03 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-9459:
---
Comment: was deleted

(was: When creating separate indexes on both the key and value of a map column 
the ddl for the table in cqlsh only contains the index on the value.  Both 
indexes are functional and queries return expected results.  When querying 
metadata from the python driver (3.0.0a2) both indexes are present and using 
the function as_cql_query produces the correct ddl.  This might just be an out 
of date python lib in cqlsh.

Tested on C*: {{66b0e1d7889d0858753c6e364e77d86fe278eee4}}

Can be reproduced with the following shell commands and ccm:
{code}
ccm remove 2i_test
ccm create -n 1 -v git:cassandra-3.0 -s 2i_test
ccm start

cat << EOF | ccm node1 cqlsh
CREATE KEYSPACE index_test_ks WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
USE index_test_ks;
CREATE TABLE tbl1 (id uuid primary key, ds map, c1 int);
INSERT INTO tbl1 (id, ds, c1) values (uuid(), {'foo': 1, 'bar': 2}, 1);
INSERT INTO tbl1 (id, ds, c1) values (uuid(), {'faz': 1, 'baz': 2}, 2);
CREATE INDEX ix_tbl1_map_values ON tbl1(ds);
CREATE INDEX ix_tbl1_map_keys ON tbl1(keys(ds));

SELECT * FROM tbl1 where ds contains 1;
SELECT * FROM tbl1 where ds contains key 'foo';

// ***
// DDL only has ix_tbl1_map_values present
// ***
DESC TABLE tbl1;

// ***
// system_schema.indexes is correct
// ***
SELECT * FROM system_schema.indexes;
EOF
ccm stop
{code}

Example output:
{code}
CREATE TABLE index_test_ks.tbl1 (
id uuid PRIMARY KEY,
c1 int,
ds map
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE INDEX ix_tbl1_map_values ON index_test_ks.tbl1 (ds);


 keyspace_name | table_name | index_name         | index_type | options              | target_columns | target_type
---------------+------------+--------------------+------------+----------------------+----------------+-------------
 index_test_ks | tbl1       | ix_tbl1_map_keys   | COMPOSITES | {'index_keys': ''}   | {'ds'}         | COLUMN
 index_test_ks | tbl1       | ix_tbl1_map_values | COMPOSITES | {'index_values': ''} | {'ds'}         | COLUMN
{code})

> SecondaryIndex API redesign
> ---
>
> Key: CASSANDRA-9459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0 beta 1
>
>
> For some time now the index subsystem has been a pain point and in large part 
> this is due to the way that the APIs and principal classes have grown 
> organically over the years. It would be a good idea to conduct a wholesale 
> review of the area and see if we can come up with something a bit more 
> coherent.
> A few starting points:
> * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
> could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
> is done in CASSANDRA-8099).
> * SecondaryIndexManager is overly complex and several of its functions should 
> be simplified/re-examined. The handling of which columns are indexed and 
> index selection on both the read and write paths are somewhat dense and 
> unintuitive.
> * The SecondaryIndex class hierarchy is rather convoluted and could use some 
> serious rework.
> There are a number of outstanding tickets which we should be able to roll 
> into this higher level one as subtasks (but I'll defer doing that until 
> getting into the details of the redesign):
> * CASSANDRA-7771
> * CASSANDRA-8103
> * CASSANDRA-9041
> * CASSANDRA-4458
> * CASSANDRA-8505
> Whilst they're not hard dependencies, I propose that this be done on top of 
> both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
> storage engine changes may facilitate a friendlier index API, but also 
> because of the changes to SIS mentioned above. As for 6717, the changes to 
> schema tables there will help facilitate CASSANDRA-7771.





[jira] [Commented] (CASSANDRA-9459) SecondaryIndex API redesign

2015-09-02 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727859#comment-14727859
 ] 

Andrew Hust commented on CASSANDRA-9459:


When creating separate indexes on both the key and value of a map column, the 
DDL for the table in cqlsh only contains the index on the value.  Both indexes 
are functional and queries return expected results.  When querying metadata 
from the python driver (3.0.0a2), both indexes are present and using the 
function as_cql_query produces the correct DDL.  This might just be an 
out-of-date python lib in cqlsh.

Tested on C*: {{66b0e1d7889d0858753c6e364e77d86fe278eee4}}

Can be reproduced with the following shell commands and ccm:
{code}
ccm remove 2i_test
ccm create -n 1 -v git:cassandra-3.0 -s 2i_test
ccm start

cat << EOF | ccm node1 cqlsh
CREATE KEYSPACE index_test_ks WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
USE index_test_ks;
CREATE TABLE tbl1 (id uuid primary key, ds map<text, int>, c1 int);
INSERT INTO tbl1 (id, ds, c1) values (uuid(), {'foo': 1, 'bar': 2}, 1);
INSERT INTO tbl1 (id, ds, c1) values (uuid(), {'faz': 1, 'baz': 2}, 2);
CREATE INDEX ix_tbl1_map_values ON tbl1(ds);
CREATE INDEX ix_tbl1_map_keys ON tbl1(keys(ds));

SELECT * FROM tbl1 where ds contains 1;
SELECT * FROM tbl1 where ds contains key 'foo';

// ***
// DDL only has ix_tbl1_map_values present
// ***
DESC TABLE tbl1;

// ***
// system_schema.indexes is correct
// ***
SELECT * FROM system_schema.indexes;
EOF
ccm stop
{code}

Example output:
{code}
CREATE TABLE index_test_ks.tbl1 (
id uuid PRIMARY KEY,
c1 int,
ds map<text, int>
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE INDEX ix_tbl1_map_values ON index_test_ks.tbl1 (ds);


 keyspace_name | table_name | index_name         | index_type | options              | target_columns | target_type
---------------+------------+--------------------+------------+----------------------+----------------+-------------
 index_test_ks | tbl1       | ix_tbl1_map_keys   | COMPOSITES | {'index_keys': ''}   | {'ds'}         | COLUMN
 index_test_ks | tbl1       | ix_tbl1_map_values | COMPOSITES | {'index_values': ''} | {'ds'}         | COLUMN
{code}
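The symptom above is mechanical enough to check in a script: an index name that 
system_schema.indexes (and the driver metadata) reports but the DESC output 
never mentions. A minimal sketch of that diff using plain string matching; the 
helper name and inputs are illustrative, not part of any driver API:

```python
def missing_indexes(desc_output, metadata_index_names):
    """Return index names reported by schema metadata but absent from a DESC dump."""
    return sorted(name for name in metadata_index_names
                  if "CREATE INDEX %s " % name not in desc_output)

# DESC output from the run above only shows the values index ...
desc = "CREATE INDEX ix_tbl1_map_values ON index_test_ks.tbl1 (ds);"
# ... while system_schema.indexes lists both.
meta = {"ix_tbl1_map_keys", "ix_tbl1_map_values"}
print(missing_indexes(desc, meta))  # ['ix_tbl1_map_keys']
```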

> SecondaryIndex API redesign
> ---
>
> Key: CASSANDRA-9459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0 beta 1
>
>
> For some time now the index subsystem has been a pain point and in large part 
> this is due to the way that the APIs and principal classes have grown 
> organically over the years. It would be a good idea to conduct a wholesale 
> review of the area and see if we can come up with something a bit more 
> coherent.
> A few starting points:
> * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
> could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
> is done in CASSANDRA-8099).
> * SecondaryIndexManager is overly complex and several of its functions should 
> be simplified/re-examined. The handling of which columns are indexed and 
> index selection on both the read and write paths are somewhat dense and 
> unintuitive.
> * The SecondaryIndex class hierarchy is rather convoluted and could use some 
> serious rework.
> There are a number of outstanding tickets which we should be able to roll 
> into this higher level one as subtasks (but I'll defer doing that until 
> getting into the details of the redesign):
> * CASSANDRA-7771
> * CASSANDRA-8103
> * CASSANDRA-9041
> * CASSANDRA-4458
> * CASSANDRA-8505
> Whilst they're not hard dependencies, I propose that this be done on top of 
> both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
> storage engine changes may facilitate a friendlier index API, but also 
> because of the changes to SIS mentioned above. As for 6717, the changes to 
> schema tables there will help facilitate CASSANDRA-7771.





[jira] [Updated] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-09-01 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10250:

Description: 
A recently added 
[dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
 has been flapping on cassci and has exposed an issue with running lots of 
schema alterations concurrently.  The failures occur on healthy clusters but 
seem to occur at higher rates when 1 node is down during the alters.

The test executes the following – 440 total commands:
-   Create 20 new tables
-   Drop 7 columns one at a time across 20 tables
-   Add 7 columns one at a time across 20 tables
-   Add one column index on each of the 7 columns on 20 tables
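The 440 total follows directly from the list above; a quick sketch of the 
arithmetic (illustrative only):

```python
# 20 CREATE TABLEs, then per table: 7 single-column drops, 7 single-column
# adds, and one index per added column, each issued as a separate statement.
tables, columns = 20, 7
total = tables + 3 * columns * tables  # creates + drops + adds + indexes
print(total)  # 440
```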

The outcome is random. The majority of failures are dropped columns still being 
present, but new columns and indexes have also been observed to be incorrect.  
The logs don’t contain exceptions, and the columns/indexes that are incorrect 
don’t seem to follow a pattern.  Running a {{nodetool describecluster}} on each 
node shows the same schema id on all nodes.

Attached is a python script extracted from the dtest.  Running against a local 
3 node cluster will reproduce the issue (with enough runs – fails ~20% on my 
machine).

Also attached are the node logs from a run where a dropped column 
(table alter_me_7, column s1) is still present.  Checking the system_schema 
tables for this case shows the s1 column in both the columns and drop_columns 
tables.

This has been flapping on cassci on versions 2+ and doesn’t seem to be related 
to changes in 3.0.  More testing needs to be done though.

//cc [~enigmacurry]

  was:
A recently added 
[dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
 has been flapping on cassci and has exposed an issue with running lots of 
schema alterations concurrently.  The failures occur on healthy clusters but 
seem to occur at higher rates when 1 node is down during the alters.

The test executes the following – 440 total commands:
-   Create 20 new tables
-   Drop 7 columns one at a time across 20 tables
-   Add 7 columns one at a time across 20 tables
-   Add one column index on each of the 7 columns on 20 tables

The outcome is random. The majority of failures are dropped columns still being 
present, but new columns and indexes have also been observed to be incorrect.  
The logs don’t contain exceptions, and the columns/indexes that are incorrect 
don’t seem to follow a pattern.  Running a {{nodetool describecluster}} on each 
node shows the same schema id on all nodes.

Attached is a python script extracted from the dtest.  Running against a local 
3 node cluster will reproduce the issue (with enough runs – fails ~20% on my 
machine).

Also attached are the node logs from a run where a dropped column 
(table alter_me_7, column s1) is still present.  Checking the system_schema 
tables for this case shows the s1 column in both the columns and drop_columns 
tables.

This has been flapping on cassci on versions 2+ and doesn’t seem to be related 
to changes in 3.0.  More testing needs to be done though.



> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at a time across 20 tables
> - Add 7 columns one at a time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> The outcome is random. The majority of failures are dropped columns still 
> being present, but new columns and indexes have also been observed to be 
> incorrect.  The logs don’t contain exceptions, and the columns/indexes that 
> are incorrect don’t seem to follow a pattern.  Running a {{nodetool 
> describecluster}} on each node shows the same schema id on all nodes.
> Attached is a python script extracted from the 

[jira] [Updated] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-09-01 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10250:

Description: 
A recently added 
[dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
 has been flapping on cassci and has exposed an issue with running lots of 
schema alterations concurrently.  The failures occur on healthy clusters but 
seem to occur at higher rates when 1 node is down during the alters.

The test executes the following – 440 total commands:
-   Create 20 new tables
-   Drop 7 columns one at a time across 20 tables
-   Add 7 columns one at a time across 20 tables
-   Add one column index on each of the 7 columns on 20 tables

The outcome is random. The majority of failures are dropped columns still being 
present, but new columns and indexes have also been observed to be incorrect.  
The logs don’t contain exceptions, and the columns/indexes that are incorrect 
don’t seem to follow a pattern.  Running a {{nodetool describecluster}} on each 
node shows the same schema id on all nodes.

Attached is a python script extracted from the dtest.  Running against a local 
3 node cluster will reproduce the issue (with enough runs – fails ~20% on my 
machine).

Also attached are the node logs from a run where a dropped column 
(table alter_me_7, column s1) is still present.  Checking the system_schema 
tables for this case shows the s1 column in both the columns and drop_columns 
tables.

This has been flapping on cassci on versions 2+ and doesn’t seem to be related 
to changes in 3.0.  More testing needs to be done though.

//cc [~enigmacurry]

  was:
A recently added 
[dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
 has been flapping on cassci and has exposed an issue with running lots of 
schema alterations concurrently.  The failures occur on healthy clusters but 
seem to occur at higher rates when 1 node is down during the alters.

The test executes the following – 440 total commands:
-   Create 20 new tables
-   Drop 7 columns one at time across 20 tables
-   Add 7 columns on at time across 20 tables
-   Add one column index on each of the 7 columns on 20 tables

The outcome is random. The majority of failures are dropped columns still being 
present, but new columns and indexes have also been observed to be incorrect.  
The logs don’t contain exceptions, and the columns/indexes that are incorrect 
don’t seem to follow a pattern.  Running a {{nodetool describecluster}} on each 
node shows the same schema id on all nodes.

Attached is a python script extracted from the dtest.  Running against a local 
3 node cluster will reproduce the issue (with enough runs – fails ~20% on my 
machine).

Also attached are the node logs from a run where a dropped column 
(table alter_me_7, column s1) is still present.  Checking the system_schema 
tables for this case shows the s1 column in both the columns and drop_columns 
tables.

This has been flapping on cassci on versions 2+ and doesn’t seem to be related 
to changes in 3.0.  More testing needs to be done though.

//cc [~enigmacurry]


> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at a time across 20 tables
> - Add 7 columns one at a time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> The outcome is random. The majority of failures are dropped columns still 
> being present, but new columns and indexes have also been observed to be 
> incorrect.  The logs don’t contain exceptions, and the columns/indexes that 
> are incorrect don’t seem to follow a pattern.  Running a {{nodetool 
> describecluster}} on each node shows the same schema id on all nodes.
> Attached is a python script 

[jira] [Commented] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-09-01 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726721#comment-14726721
 ] 

Andrew Hust commented on CASSANDRA-10250:
-

example script run -- missing c4 column
{code}
❯ python concurrent_schema_changes.py
creating base tables to be added/altered
executing creation of tables, add/drop column and index creation
sleeping 20 to make sure things are settled
verifing schema status
Errors found:
alter_me_8 expected c1 -> c7, id, got: [u'c1', u'c2', u'c3', u'c5', u'c6', 
u'c7', u'id']
{code}
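The verification step shown above reduces to comparing each table's observed 
columns against the expected set. A rough sketch of that check; the function 
name and error format are modeled on the output above, not taken from the 
attached script:

```python
def check_columns(table, observed, n_cols=7):
    """Return an error string when observed columns differ from c1..cN plus id."""
    expected = sorted(["c%d" % i for i in range(1, n_cols + 1)] + ["id"])
    if sorted(observed) != expected:
        return "%s expected c1 -> c%d, id, got: %s" % (table, n_cols, sorted(observed))
    return None

# Matches the failing run above: c4 is missing on alter_me_8.
print(check_columns("alter_me_8", ["c1", "c2", "c3", "c5", "c6", "c7", "id"]))
```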

> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at a time across 20 tables
> - Add 7 columns one at a time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> The outcome is random. The majority of failures are dropped columns still 
> being present, but new columns and indexes have also been observed to be 
> incorrect.  The logs don’t contain exceptions, and the columns/indexes that 
> are incorrect don’t seem to follow a pattern.  Running a {{nodetool 
> describecluster}} on each node shows the same schema id on all nodes.
> Attached is a python script extracted from the dtest.  Running against a 
> local 3 node cluster will reproduce the issue (with enough runs – fails ~20% 
> on my machine).
> Also attached are the node logs from a run where a dropped column 
> (table alter_me_7, column s1) is still present.  Checking the system_schema 
> tables for this case shows the s1 column in both the columns and drop_columns 
> tables.
> This has been flapping on cassci on versions 2+ and doesn’t seem to be 
> related to changes in 3.0.  More testing needs to be done though.





[jira] [Created] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-09-01 Thread Andrew Hust (JIRA)
Andrew Hust created CASSANDRA-10250:
---

 Summary: Executing lots of schema alters concurrently can lead to 
dropped alters
 Key: CASSANDRA-10250
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
 Project: Cassandra
  Issue Type: Bug
Reporter: Andrew Hust


A recently added 
[dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
 has been flapping on cassci and has exposed an issue with running lots of 
schema alterations concurrently.  The failures occur on healthy clusters but 
seem to occur at higher rates when 1 node is down during the alters.

The test executes the following – 440 total commands:
-   Create 20 new tables
-   Drop 7 columns one at a time across 20 tables
-   Add 7 columns one at a time across 20 tables
-   Add one column index on each of the 7 columns on 20 tables

The outcome is random. The majority of failures are dropped columns still being 
present, but new columns and indexes have also been observed to be incorrect.  
The logs don’t contain exceptions, and the columns/indexes that are incorrect 
don’t seem to follow a pattern.  Running a {{nodetool describecluster}} on each 
node shows the same schema id on all nodes.

Attached is a python script extracted from the dtest.  Running against a local 
3 node cluster will reproduce the issue (with enough runs – fails ~20% on my 
machine).

Also attached are the node logs from a run where a dropped column 
(table alter_me_7, column s1) is still present.  Checking the system_schema 
tables for this case shows the s1 column in both the columns and drop_columns 
tables.

This has been flapping on cassci on versions 2+ and doesn’t seem to be related 
to changes in 3.0.  More testing needs to be done though.






[jira] [Updated] (CASSANDRA-10250) Executing lots of schema alters concurrently can lead to dropped alters

2015-09-01 Thread Andrew Hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Hust updated CASSANDRA-10250:

Attachment: node3.log
node2.log
node1.log
concurrent_schema_changes.py

> Executing lots of schema alters concurrently can lead to dropped alters
> ---
>
> Key: CASSANDRA-10250
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10250
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andrew Hust
> Attachments: concurrent_schema_changes.py, node1.log, node2.log, 
> node3.log
>
>
> A recently added 
> [dtest|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/132/testReport/junit/concurrent_schema_changes_test/TestConcurrentSchemaChanges/create_lots_of_schema_churn_test/]
>  has been flapping on cassci and has exposed an issue with running lots of 
> schema alterations concurrently.  The failures occur on healthy clusters but 
> seem to occur at higher rates when 1 node is down during the alters.
> The test executes the following – 440 total commands:
> - Create 20 new tables
> - Drop 7 columns one at a time across 20 tables
> - Add 7 columns one at a time across 20 tables
> - Add one column index on each of the 7 columns on 20 tables
> The outcome is random. The majority of failures are dropped columns still 
> being present, but new columns and indexes have also been observed to be 
> incorrect.  The logs don’t contain exceptions, and the columns/indexes that 
> are incorrect don’t seem to follow a pattern.  Running a {{nodetool 
> describecluster}} on each node shows the same schema id on all nodes.
> Attached is a python script extracted from the dtest.  Running against a 
> local 3 node cluster will reproduce the issue (with enough runs – fails ~20% 
> on my machine).
> Also attached are the node logs from a run where a dropped column 
> (table alter_me_7, column s1) is still present.  Checking the system_schema 
> tables for this case shows the s1 column in both the columns and drop_columns 
> tables.
> This has been flapping on cassci on versions 2+ and doesn’t seem to be 
> related to changes in 3.0.  More testing needs to be done though.





[jira] [Commented] (CASSANDRA-10156) Creating Materialized views concurrently leads to missing data

2015-08-27 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14717657#comment-14717657
 ] 

Andrew Hust commented on CASSANDRA-10156:
-

Confirmed the patch corrects the problem:

Failed as expected on:
origin/trunk : {{f744b6c05537b84b60f53917fafb32bc1d8b1b03}}

Passes on:
tjake/10156 : {{0edab864030817d1fe3739832a72453f084179b4}}

Once this is merged I'll un-skip the 
[dtest|https://github.com/riptano/cassandra-dtest/blob/master/concurrent_schema_changes_test.py#L243]

> Creating Materialized views concurrently leads to missing data
> --------------------------------------------------------------
>
> Key: CASSANDRA-10156
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10156
> Project: Cassandra
>  Issue Type: Bug
> Reporter: Alan Boudreault
> Assignee: T Jake Luciani
> Fix For: 3.0 beta 2
>
> Attachments: mv_test_bad.sh, mv_test_good.sh
>
>
> [~nutbunnies] was writing dtests that create multiple tables concurrently. He 
> also wrote a test that creates multiple MVs but has not been able to get it 
> to work properly. After some debugging outside of dtest, it seems that there 
> is an issue if we create more than 1 MV at the same time. There are no errors 
> in the log but the MVs are never entirely populated and are missing data.
> I've attached 2 scripts:
> [^mv_test_bad.sh]: the one that reproduces the issue. It creates 4 MVs at 
> the same time. At the end, some data are missing in the MVs and there is 
> nothing in system.hints or system.batchlog.
> [^mv_test_good.sh]: the same script, but it waits 10 seconds between each 
> MV creation, which results in 4 MVs with all the data.
> Some more notes from Andrew:
> {code}
> - lowering the number of rows inserted below ~1000 won't exhibit the 
> inconsistent behavior
> - adding more columns/MVs makes it worse -- more of the MVs' counts are 
> consistently wrong
> - multiple runs will range in disagreement -- usually one of the MVs is 
> correct though
> - the describe cluster and system.mv* queries always look good
> {code}
> Thanks Andrew for finding this bug and providing the test scripts! 
> //cc [~carlyeks] [~tjake] [~enigmacurry]





[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2015-08-18 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14701618#comment-14701618
 ] 

Andrew Hust commented on CASSANDRA-6717:


There appears to be a couple of issues with UDT and UDF migration from 2.2 to 
3.0.  Both seem to be related to the number of nodes in the cluster during the 
upgrade.  More nodes cause more duplication of field data, but it's 
inconsistent and can range from the correct value to the number of nodes.  I 
haven’t been able to reproduce it on single- or 2-node clusters.  I’ve verified 
that the metadata returned from the driver does match the reality of the 
system_schema.functions and system_schema.user_types tables.

I've extracted the failures from the upgrade test to simplify and remove other 
test influences.
UDT:field_names duplication can be reproduced with this 
[gist|https://gist.github.com/nutbunnies/4d1dcfe96556d0218b15] 
UDF:argument_names duplication can be reproduced with this 
[gist|https://gist.github.com/nutbunnies/51b61d245f00106b1641]

Both can also be reproduced by enabling the establish/verify methods in the 
upgrade dtest in this [PR|https://github.com/riptano/cassandra-dtest/pull/447].

This was done with C* at {{06c130e3cb85577041b475084400c08c505d8f9e}} and the 
python-driver at {{caf58cc5e72afe664759b50dbcc92993902beb3e}}
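The duplication described above (field names repeated up to the node count) can 
be caught with a trivial invariant check over the returned metadata rows; the 
helper below is illustrative and not part of the driver:

```python
def duplicated_fields(field_names):
    """Return the names that occur more than once in a UDT's field_names list."""
    seen, dups = set(), []
    for name in field_names:
        if name in seen and name not in dups:
            dups.append(name)
        seen.add(name)
    return dups

# A healthy two-field type reports nothing; a post-upgrade duplicate is flagged.
print(duplicated_fields(["street", "city"]))            # []
print(duplicated_fields(["street", "street", "city"]))  # ['street']
```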


> Modernize schema tables
> -----------------------
>
> Key: CASSANDRA-6717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
> Project: Cassandra
>  Issue Type: Sub-task
> Reporter: Sylvain Lebresne
> Assignee: Aleksey Yeschenko
>  Labels: client-impacting, doc-impacting
> Fix For: 3.0 beta 2
>
>
> There are a few problems/improvements that can be addressed in the way we 
> store schema:
> # CASSANDRA-4988: as explained on the ticket, storing the comparator is now 
> redundant (or almost; we'd also need to store whether the table is COMPACT, 
> which we don't currently, but that is easy and probably a good idea anyway): 
> it can be entirely reconstructed from the info in schema_columns (the same is 
> true of key_validator and subcomparator, and replacing default_validator by a 
> COMPACT_VALUE column in all cases is relatively simple). And storing the 
> comparator as an opaque string broke concurrent updates of sub-parts of said 
> comparator (concurrent collection addition, or altering 2 separate clustering 
> columns, typically) so it's really worth removing it.
> # CASSANDRA-4603: it's time to get rid of those ugly json maps. I'll note 
> that schema_keyspaces is a problem due to its use of COMPACT STORAGE, but I 
> think we should fix it once and for all nonetheless (see below).
> # For CASSANDRA-6382, and to allow indexing both map keys and values at the 
> same time, we'd need to be able to have more than one index definition for a 
> given column.
> # There are a few mismatches in table options between the ones stored in the 
> schema and the ones used when declaring/altering a table which would be nice 
> to fix. The compaction, compression and replication maps are ones already 
> mentioned in CASSANDRA-4603, but also, for some reason, 
> 'dclocal_read_repair_chance' in CQL is called just 'local_read_repair_chance' 
> in the schema table, and 'min/max_compaction_threshold' are column family 
> options in the schema but just compaction options for CQL (which makes more 
> sense).
> None of those issues are major, and we could probably deal with them 
> independently, but it might be simpler to just fix them all in one shot, so I 
> wanted to sum them all up here. In particular, the fact that 
> 'schema_keyspaces' uses COMPACT STORAGE is annoying (for the replication map, 
> but it may limit future stuff too), which suggests we should migrate it to a 
> new, non-COMPACT table. And while that's arguably a detail, it wouldn't hurt 
> to rename schema_columnfamilies to schema_tables for the years to come, since 
> that's the preferred vernacular for CQL.
> Overall, what I would suggest is to move all schema tables to a new keyspace, 
> named 'schema' for instance (or 'system_schema', but I prefer the shorter 
> version), and fix all the issues above at once. Since we currently don't 
> exchange schema between nodes of different versions, all we'd need to do that 
> is a one-shot startup migration, and overall, I think it could be simpler for 
> clients to deal with one clear migration than to have to handle minor 
> individual changes all over the place. I also think it's somewhat cleaner 
> conceptually to have schema tables in their own keyspace, since they are 
> replicated through a different mechanism than other system tables.
> If we do that, we could, for instance, migrate to the following schema tables 
> (details up for discussion of course):
> {noformat}
> CREATE TYPE user_type (
>   name text,
>   column_names list<text>,
>   column_types list<text>
> )
> CREATE TABLE keyspaces (

[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2015-08-07 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662105#comment-14662105
 ] 

Andrew Hust commented on CASSANDRA-6717:


While adding additional table 
[dtests|https://github.com/riptano/cassandra-dtest/pull/444/files] I came 
across an o.a.c.e.UnrecognizedEntityException when migrating aggregates from 
2.2 to 3.0.  This failure can be exercised outside of the upgrade dtest with 
this [gist|https://gist.github.com/nutbunnies/3a544826cc44c92a3131].


 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Sam Tunnicliffe
  Labels: client-impacting, doc-impacting
 Fix For: 3.0 beta 1


 There are a few problems/improvements with the way we store schema:
 # CASSANDRA-4988: as explained on the ticket, storing the comparator is now 
 redundant (or almost: we'd also need to store whether the table is COMPACT or 
 not, which we don't currently, but that is easy and probably a good idea 
 anyway); it can be entirely reconstructed from the info in schema_columns (the 
 same is true of key_validator and subcomparator, and replacing 
 default_validator by a COMPACT_VALUE column in all cases is relatively 
 simple). And storing the comparator as an opaque string broke concurrent 
 updates of sub-parts of said comparator (typically, concurrent collection 
 additions, or altering 2 separate clustering columns), so it's really worth 
 removing it.
 # CASSANDRA-4603: it's time to get rid of those ugly json maps. I'll note 
 that schema_keyspaces is a problem due to its use of COMPACT STORAGE, but I 
 think we should fix it once and for all nonetheless (see below).
 # For CASSANDRA-6382, and to allow indexing both map keys and values at the 
 same time, we'd need to be able to have more than one index definition for a 
 given column.
 # There are a few mismatches between the table options stored in the schema 
 and the ones used when declaring/altering a table which would be nice to fix. 
 The compaction, compression and replication maps are ones already mentioned 
 in CASSANDRA-4603, but also, for some reason, 'dclocal_read_repair_chance' in 
 CQL is called just 'local_read_repair_chance' in the schema table, and 
 'min/max_compaction_threshold' are column family options in the schema but 
 just compaction options in CQL (which makes more sense).
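To illustrate the kind of option renaming and re-nesting such a migration would have to perform, here is a hypothetical Python sketch (the function and table names are illustrative only, not Cassandra's actual code):

```python
# Hypothetical sketch: normalize legacy schema-table option names onto the
# CQL-visible layout described above. Illustrative only, not Cassandra code.

# Options whose name in the schema table differs from the CQL name.
RENAMES = {
    "local_read_repair_chance": "dclocal_read_repair_chance",
}

# Options stored as top-level columns in the schema table but exposed as
# compaction sub-options in CQL.
COMPACTION_OPTS = {"min_compaction_threshold", "max_compaction_threshold"}

def normalize_options(stored):
    """Map legacy schema-table option names onto their CQL equivalents."""
    out, compaction = {}, {}
    for name, value in stored.items():
        if name in COMPACTION_OPTS:
            # In CQL these live inside the compaction map, not at top level:
            # min_compaction_threshold -> min_threshold, etc.
            compaction[name.replace("_compaction", "")] = value
        else:
            out[RENAMES.get(name, name)] = value
    if compaction:
        out.setdefault("compaction", {}).update(compaction)
    return out
```

The point of the sketch is only that the mapping is mechanical, which is one argument for doing it once rather than piecemeal.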
 None of those issues are major, and we could probably deal with them 
 independently, but it might be simpler to just fix them all in one shot, so I 
 wanted to sum them all up here. In particular, the fact that 
 'schema_keyspaces' uses COMPACT STORAGE is annoying (for the replication map, 
 but it may limit future stuff too), which suggests we should migrate it to a 
 new, non-COMPACT table. And while that's arguably a detail, it wouldn't hurt 
 to rename schema_columnfamilies to schema_tables for the years to come, since 
 that's the preferred vernacular for CQL.
 Overall, what I would suggest is to move all schema tables to a new keyspace, 
 named 'schema' for instance (or 'system_schema', but I prefer the shorter 
 version), and fix all the issues above at once. Since we currently don't 
 exchange schema between nodes of different versions, all we'd need for that 
 is a one-shot startup migration, and overall I think it could be simpler for 
 clients to deal with one clear migration than to have to handle minor 
 individual changes all over the place. I also think it's somewhat cleaner 
 conceptually to have schema tables in their own keyspace, since they are 
 replicated through a different mechanism than other system tables.
 If we do that, we could, for instance, migrate to the following schema tables 
 (details up for discussion, of course):
 {noformat}
 CREATE TYPE user_type (
   name text,
   column_names list<text>,
   column_types list<text>
 )
 CREATE TABLE keyspaces (
   name text PRIMARY KEY,
   durable_writes boolean,
   replication map<string, string>,
   user_types map<string, user_type>
 )
 CREATE TYPE trigger_definition (
   name text,
   options map<text, text>
 )
 CREATE TABLE tables (
   keyspace text,
   name text,
   id uuid,
   table_type text, // COMPACT, CQL or SUPER
   dropped_columns map<text, bigint>,
   triggers map<text, trigger_definition>,
   // options
   comment text,
   compaction map<text, text>,
   compression map<text, text>,
   read_repair_chance double,
   dclocal_read_repair_chance double,
   gc_grace_seconds int,
   caching text,
   rows_per_partition_to_cache text,
   default_time_to_live int,
   min_index_interval int,
   max_index_interval int,
   speculative_retry text,
   populate_io_cache_on_flush boolean,
   bloom_filter_fp_chance double,
   memtable_flush_period_in_ms int,
   PRIMARY KEY (keyspace, name)
 )
 CREATE TYPE index_definition (
   name text,
   index_type text,
   options map<text, text>
 {noformat}

[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2015-08-05 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14658951#comment-14658951
 ] 

Andrew Hust commented on CASSANDRA-6717:


I added some 
[dtests|https://github.com/riptano/cassandra-dtest/commit/c346c6b87ec081956330e5b3cab2178e4c5b8a23]
 to check schema metadata -- anything additional that should be added?


 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
  Labels: client-impacting, doc-impacting
 Fix For: 3.0 beta 1



[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2015-07-31 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14649507#comment-14649507
 ] 

Andrew Hust commented on CASSANDRA-6717:


I’ve started adding dtests for this and ran into a small issue.  With the 
schema_* tables being renamed, some of them now conflict with the cqlsh 
DESCRIBE command.  For example, when scoped to the system_schema keyspace, 
running `desc tables` gives a list of tables in the keyspace instead of the 
CQL for the `tables` table.  Using `desc system_schema.tables` works as expected.

Does any documentation (or anything else) need to be updated to explain that behavior?
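The collision can be sketched in a few lines of Python. This is hypothetical resolution logic for illustration only, not cqlsh's actual parser:

```python
# Hypothetical sketch of the DESCRIBE ambiguity: with a table literally named
# "tables", the bare-keyword form of DESC wins over the table-name reading.

DESCRIBE_KEYWORDS = {"tables", "keyspaces", "types", "functions", "aggregates"}

def resolve_describe(arg, current_keyspace=None):
    """Decide what `DESC <arg>` should do in a cqlsh-like shell."""
    if "." in arg:
        # A qualified name is unambiguous: describe that specific table.
        return "table " + arg
    if arg.lower() in DESCRIBE_KEYWORDS:
        # The bare keyword wins even when a same-named table exists in the
        # current keyspace -- exactly the system_schema.tables collision.
        return "list " + arg.lower()
    return "table %s.%s" % (current_keyspace, arg)
```

Under this reading, `desc tables` inside system_schema lists tables, while the qualified `desc system_schema.tables` reaches the table itself, matching the behavior observed above.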


 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
  Labels: client-impacting, doc-impacting
 Fix For: 3.0 beta 1



[jira] [Commented] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-29 Thread Andrew Hust (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14646629#comment-14646629
 ] 

Andrew Hust commented on CASSANDRA-9884:


This appears to be a 2.2-only issue and not related to the process of upgrading 
from 2.1.x to 2.2.  As far as I can tell, no new configuration is needed for 
internode SSL communication on 2.2, and a fresh 2.2 cluster shows this failure.

Applying the above patch does resolve the issue in my testing.  I have added a 
simple internode SSL verification test and updated the upgrade test to run 
with/without internode SSL in cassandra-dtest:
https://github.com/riptano/cassandra-dtest/pull/427
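The WARN/NPE pair in the report below comes from cipher-suite filtering against what the socket supports. A simplified Python sketch of that behavior (assumed semantics for illustration, not the actual o.a.c.security.SSLFactory code):

```python
# Simplified sketch of cipher-suite filtering: keep only the requested suites
# the socket supports. When everything is filtered out, no TLS connection can
# ever be established -- consistent with the connect-time NPE in the report.
# Assumed behavior for illustration, not Cassandra's actual SSLFactory code.

def filter_cipher_suites(requested, supported):
    """Return requested suites the socket supports, preserving order."""
    supported_set = set(supported)
    accepted = [s for s in requested if s in supported_set]
    dropped = [s for s in requested if s not in supported_set]
    if dropped:
        # Mirrors the "Filtering out ... as it isnt supported" WARN line.
        print("WARN filtering out %s as it isn't supported by the socket"
              % ",".join(dropped))
    if not accepted:
        raise ValueError("no requested cipher suites are supported")
    return accepted
```

The key observation is that filtering alone only narrows the list; the real bug was in what the connection code did afterwards with a socket it could never open.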


 Error on encrypted node communication upgrading from 2.1.6 to 2.2.0
 ---

 Key: CASSANDRA-9884
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9884
 Project: Cassandra
  Issue Type: Bug
  Components: Config, Core
 Environment: Ubuntu 14.04.2 LTS 64 bits.
 Java version 1.8.0_45
 Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
 Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
Reporter: Carlos Scheidecker
Priority: Critical
  Labels: security
 Fix For: 2.2.x


 After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues.
 The configuration had not changed from one version to the other, and the JVM is 
 still the same; however, on 2.2.0 it is erroring. I have yet to investigate the 
 source code. For now, this is the information I have to share on 
 it:
 My JVM is java version 1.8.0_45
 Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
 Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
 Ubuntu 14.04.2 LTS is on all nodes, they are the same.
 Below is the encryption settings from cassandra.yaml of all nodes.
 I am using the same keystore and trustore as I had used before on 2.1.6
 # Enable or disable inter-node encryption
 # Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
 # users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
 # suite for authentication, key exchange and encryption of the actual data 
 transfers.
 # Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
 # NOTE: No custom encryption options are enabled at the moment
 # The available internode options are : all, none, dc, rack
 #
 # If set to dc cassandra will encrypt the traffic between the DCs
 # If set to rack cassandra will encrypt the traffic between the racks
 #
 # The passwords used in these options must match the passwords used when 
 generating
 # the keystore and truststore.  For instructions on generating these files, 
 see:
 # 
 http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
 #
 server_encryption_options:
     internode_encryption: all
     keystore: /etc/cassandra/certs/node.keystore
     keystore_password: mypasswd
     truststore: /etc/cassandra/certs/global.truststore
     truststore_password: mypasswd
     # More advanced defaults below:
     # protocol: TLS
     # algorithm: SunX509
     # store_type: JKS
     cipher_suites: 
 [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
     require_client_auth: false
 # enable or disable client/server encryption.
 Nodes cannot talk to each other, as per the SSL errors below.
 WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
 SSLFactory.java:163 - Filtering out 
 TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
  as it isnt supported by the socket
 ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
 OutboundTcpConnection.java:229 - error processing a message intended for 
 /192.168.1.31
 java.lang.NullPointerException: null
   at 
 com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
 ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.init(BufferedDataOutputStreamPlus.java:74)
  ~[apache-cassandra-2.2.0.jar:2.2.0]
   at 
 org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
  ~[apache-cassandra-2.2.0.jar:2.2.0]
   at 
 org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
  ~[apache-cassandra-2.2.0.jar:2.2.0]
 ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
 OutboundTcpConnection.java:316 - error writing to /192.168.1.31
 java.lang.NullPointerException: null
   at 
 org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
  [apache-cassandra-2.2.0.jar:2.2.0]
   at