Indeed.  I did throw a comment on 11990 - not sure if that triggers emails
to those participants, but was hoping someone would take a look.

On Sat, Nov 5, 2016 at 2:26 AM, DuyHai Doan <doanduy...@gmail.com> wrote:

> So from code review, the error message you get from the log is coming from
> CASSANDRA-11990: https://github.com/ifesdjeen/cassandra/commit/dc4ae57f452e19adbe5a6a2c85f8a4b5a24d4103#diff-eae81aa3b81f9b1e07b109c446447a50R357
>
> Now, that's just the consequence of the problem (throwing an assertion
> error); we have to dig further to understand why we fall into this situation.
>
> On Sat, Nov 5, 2016 at 5:15 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:
>
>> Can you file a Jira for this? Would be good to make sure 3.10 doesn't get
>> released with this bug.
>> On Fri, Nov 4, 2016 at 6:11 PM Voytek Jarnot <voytek.jar...@gmail.com>
>> wrote:
>>
>>> Thought I'd follow-up to myself, in case anyone else comes across this
>>> problem.  I found a reasonably easy test case to reproduce the problem:
>>>
>>> This works in 3.9, but doesn't work in 3.10-snapshot:
>>>
>>> CREATE KEYSPACE vjtest WITH replication = {'class': 'SimpleStrategy',
>>> 'replication_factor': '1'};
>>> use vjtest ;
>>> create table tester(id1 text, id2 text, id3 text, val1 text, primary
>>> key((id1, id2), id3));
>>> create custom index tester_idx_val1 on tester(val1) using 'org.apache.cassandra.index.sasi.SASIIndex';
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','1-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','2-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','3-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','4-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','5-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','6-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','7-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','8-3','asdf');
>>> insert into tester(id1,id2,id3, val1) values ('1-1','1-2','9-3','asdf');
>>>
>>> That's it - when Cassandra tries to flush, all hell breaks loose (well,
>>> maybe not, but an unhandled error gets logged). Also, the index doesn't
>>> actually work subsequently.
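[Editorial aside: the repro above is suggestive. All nine inserts share the partition key ('1-1', '1-2'), so every indexed entry maps to the same partition token, and the AssertionError later in this thread ("cannot have more than 8 overflow collisions per leaf") hints at a fixed overflow capacity per TokenTree leaf. A rough Python sketch of that counting argument follows; the capacity constant of 8 and the per-token grouping are assumptions taken from the error message, not from the actual SASI code.]

```python
# Rough model of the suspected failure mode: count index entries per
# partition token. The repro writes 9 rows into ONE partition, so all
# entries collide on a single token. The assertion message suggests a
# leaf can absorb at most 8 colliding entries (hypothetical capacity).
MAX_OVERFLOW_COLLISIONS = 8  # assumed from the log message, not the source

def flush_leaf(entries_per_token):
    """Raise, as the SASI flush thread appears to, if any token collides too often."""
    for token, count in entries_per_token.items():
        if count > MAX_OVERFLOW_COLLISIONS:
            raise AssertionError(
                f"cannot have more than {MAX_OVERFLOW_COLLISIONS} overflow "
                f"collisions per leaf, but had: {count}")

# All nine rows share partition key ('1-1', '1-2') -> one token.
rows = [("1-1", "1-2", f"{i}-3") for i in range(1, 10)]
counts = {}
for id1, id2, _id3 in rows:
    token = hash((id1, id2))  # stand-in for the Murmur3 partition token
    counts[token] = counts.get(token, 0) + 1

try:
    flush_leaf(counts)
except AssertionError as e:
    print(e)  # prints "cannot have more than 8 overflow collisions per leaf, but had: 9"
```

Under these assumptions, any partition holding more than 8 rows with the same indexed value would trip the assertion at flush time, which matches the 9-row repro.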
>>>
>>> On Fri, Nov 4, 2016 at 3:58 PM, Voytek Jarnot <voytek.jar...@gmail.com>
>>> wrote:
>>>
>>> Wondering if anyone has encountered the same...
>>>
>>> Full story and stacktraces below, short version is that creating a SASI
>>> index fails for me when running a 3.10-SNAPSHOT build. One caveat: creating
>>> the index on an empty table doesn't fail; however, soon after I start
>>> pumping data into the table similar problems occur.
>>>
>>> I created CASSANDRA-12877 for this, but am beginning to suspect it might
>>> be related to CASSANDRA-11990.  The thing that's throwing me is that I
>>> can't seem to duplicate this with a simple test table.
>>>
>>> Background:
>>>
>>> Ended up building/loading a 3.10-SNAPSHOT to try to get past
>>> CASSANDRA-11670, CASSANDRA-12223, and CASSANDRA-12689.
>>>
>>> 1) built/installed 3.10-SNAPSHOT from git branch cassandra-3.X
>>> 2) created keyspace (SimpleStrategy, RF 1)
>>> 3) created table: (simplified version below, many more valX columns
>>> present)
>>>
>>> CREATE TABLE test_table (
>>>     id1 text,
>>>     id2 text,
>>>     id3 date,
>>>     id4 timestamp,
>>>     id5 text,
>>>     val1 text,
>>>     val2 text,
>>>     val3 text,
>>>     task_id text,
>>>     val4 text,
>>>     val5 text,
>>>     PRIMARY KEY ((id1, id2), id3, id4, id5)
>>> ) WITH CLUSTERING ORDER BY (id3 DESC, id4 DESC, id5 ASC)
>>>
>>> 4) created materialized view:
>>>
>>> CREATE MATERIALIZED VIEW test_table_by_task_id AS
>>>     SELECT *
>>>     FROM test_table
>>>     WHERE id1 IS NOT NULL AND id2 IS NOT NULL AND id3 IS NOT NULL AND
>>> id4 IS NOT NULL AND id5 IS NOT NULL AND task_id IS NOT NULL
>>>     PRIMARY KEY (task_id, id3, id4, id1, id2, id5)
>>>     WITH CLUSTERING ORDER BY (id3 DESC, id4 DESC, id1 ASC, id2 ASC, id5
>>> ASC)
>>>
>>> 5) inserted 27 million "rows" (i.e., unique values for id5)
>>> 6) create index attempt
>>>
>>> create custom index idx_test_table_val5 on test_table(val5) using 'org.apache.cassandra.index.sasi.SASIIndex';
>>>
>>> 7) no error in cqlsh, but system.log shows many of the following:
>>>
>>> INFO  [SASI-General:1] 2016-11-04 13:46:47,578 PerSSTableIndexWriter.java:277 - Flushed index segment /mydir/cassandra/apache-cassandra-3.10-SNAPSHOT/data/data/mykeyspace/test_table-133dd090a2b411e6b1bf6df2a1af06f0/mc-149-big-SI_idx_test_table_val5.db_0, took 869 ms.
>>> ERROR [SASI-General:1] 2016-11-04 13:46:47,584 CassandraDaemon.java:229 - Exception in thread Thread[SASI-General:1,5,main]
>>> java.lang.AssertionError: cannot have more than 8 overflow collisions per leaf, but had: 12
>>>     at org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createOverflowEntry(AbstractTokenTreeBuilder.java:357) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.createEntry(AbstractTokenTreeBuilder.java:346) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.DynamicTokenTreeBuilder$DynamicLeaf.serializeData(DynamicTokenTreeBuilder.java:180) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder$Leaf.serialize(AbstractTokenTreeBuilder.java:306) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.AbstractTokenTreeBuilder.write(AbstractTokenTreeBuilder.java:90) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableDataBlock.flushAndClear(OnDiskIndexBuilder.java:629) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.flush(OnDiskIndexBuilder.java:446) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder$MutableLevel.add(OnDiskIndexBuilder.java:433) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.addTerm(OnDiskIndexBuilder.java:207) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:293) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:258) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder.finish(OnDiskIndexBuilder.java:241) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.lambda$scheduleSegmentFlush$0(PerSSTableIndexWriter.java:267) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_101]
>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_101]
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
>>>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>>>
>>> As well as some of these:
>>>
>>> ERROR [CompactionExecutor:3] 2016-11-04 13:49:13,142 DataTracker.java:168 - Can't open index file at /mydir/cassandra/apache-cassandra-3.10-SNAPSHOT/data/data/mykeyspace/test_table-133dd090a2b411e6b1bf6df2a1af06f0/mc-300-big-SI_idx_test_table_val5.db, skipping.
>>> java.lang.IllegalArgumentException: position: 3472329188772431788, limit: 8180
>>>     at org.apache.cassandra.index.sasi.utils.MappedBuffer.position(MappedBuffer.java:106) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.disk.OnDiskIndex.<init>(OnDiskIndex.java:147) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.SSTableIndex.<init>(SSTableIndex.java:62) ~[apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.conf.DataTracker.getIndexes(DataTracker.java:150) [apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.conf.DataTracker.update(DataTracker.java:69) [apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.conf.ColumnIndex.update(ColumnIndex.java:147) [apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.SASIIndexBuilder.completeSSTable(SASIIndexBuilder.java:156) [apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.index.sasi.SASIIndexBuilder.build(SASIIndexBuilder.java:125) [apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at org.apache.cassandra.db.compaction.CompactionManager$14.run(CompactionManager.java:1583) [apache-cassandra-3.10-SNAPSHOT.jar:3.10-SNAPSHOT]
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
>>>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
>>>
>>>
>>>
>
