[ https://issues.apache.org/jira/browse/CASSANDRA-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909212#comment-16909212 ]
Jeremy Hanna commented on CASSANDRA-15208:
------------------------------------------

Just curious, why would you list the same data directory multiple times?

> Listing the same data directory multiple times can result in an
> java.lang.AssertionError: null on startup
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-15208
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-15208
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local/Config
>            Reporter: Damien Stevenson
>            Priority: Normal
>
> Listing the same data directory multiple times in the yaml can result in a
> java.lang.AssertionError: null on startup.
> This error will only happen if Cassandra was stopped part way through an
> sstable operation (i.e. a compaction) and is then restarted.
> Error:
> {noformat}
> Exception (java.lang.AssertionError) encountered during startup: null
> java.lang.AssertionError
> 	at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63)
> 	at java.util.ArrayList.forEach(ArrayList.java:1257)
> 	at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57)
> 	at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:147)
> 	at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:95)
> 	at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:476)
> 	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> 	at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1717)
> 	at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
> 	at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
> 	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
> 	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> 	at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
> 	at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
> 	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> 	at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)
> 	at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:471)
> 	at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:438)
> 	at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:430)
> 	at org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:549)
> 	at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:658)
> 	at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275)
> 	at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
> 	at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> ERROR o.a.c.service.CassandraDaemon Exception encountered during startup
> java.lang.AssertionError: null
> 	at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_171]
> 	at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:147) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:95) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:476) ~[apache-cassandra-3.11.4.jar:3.11
> 	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_171]
> 	at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1717) ~[na:1.8.0_171]
> 	at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) ~[na:1.8.0_171]
> 	at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) ~[na:1.8.0_171]
> 	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) ~[na:1.8.0_171]
> 	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_171]
> 	at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230) ~[na:1.8.0_171]
> 	at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196) ~[na:1.8.0_171]
> 	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_171]
> 	at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454) ~[na:1.8.0_171]
> 	at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:471) ~[apache-cassandra-3.11.4.jar:3.11
> 	at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:438) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:430) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:549) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:658) ~[apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275) [apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620) [apache-cassandra-3.11.4.jar:3.11.4]
> 	at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732) [apache-cassandra-3.11.4.jar:3.11.4]{noformat}
> Steps to recreate this behaviour:
> * Write data to Cassandra and wait for a largish compaction to start.
> * Kill Cassandra part way through the compaction.
> * Check for the existence of a transaction log file in the data directory:
> {noformat}
> cd /var/lib/cassandra/data
> find . -type f -iname '*.log'
> ./single/table0-3c161330a68d11e9af45010f0154fff5/md_txn_compaction_b492cd30-a68d-11e9-b727-010f0154fff5.log{noformat}
> * Duplicate the data directory entry in the yaml:
> {noformat}
> data_file_directories:
>     - /var/lib/cassandra/data
>     - /var/lib/cassandra/data{noformat}
> * Restart Cassandra.
>
> I have tested that this behaviour exists in the following versions of Cassandra:
> * 3.11.4
> * 3.0.15
>
> Workaround:
> * Remove the duplicate data directory entry from the Cassandra yaml.
>
> Possible solutions:
> * Validate and filter the data_file_directories input to ensure each data directory is unique.
> * Fail gracefully, with a more meaningful error message.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
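[Editor's note] The first suggested fix above (validate and filter data_file_directories so each entry is unique) could be sketched roughly as follows. This is an illustrative standalone sketch, not Cassandra's actual startup code; the class and method names (`DataDirValidator`, `uniqueDataDirectories`) are hypothetical:

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: collapse the configured data_file_directories
// down to a unique set before any transaction-log scrubbing runs.
public class DataDirValidator
{
    // Returns the configured directories with duplicates removed,
    // preserving the order of first appearance. Canonicalizing first
    // also catches two different spellings of the same location
    // (e.g. "/var/lib/cassandra/data" vs "/var/lib/cassandra/./data").
    static List<String> uniqueDataDirectories(List<String> configured)
    {
        Set<String> seen = new LinkedHashSet<>();
        for (String dir : configured)
        {
            try
            {
                seen.add(new File(dir).getCanonicalPath());
            }
            catch (IOException e)
            {
                // Fall back to the normalized absolute path if
                // filesystem resolution fails.
                seen.add(new File(dir).getAbsolutePath());
            }
        }
        return new ArrayList<>(seen);
    }

    public static void main(String[] args)
    {
        // The duplicated yaml from the repro steps collapses to one entry.
        List<String> configured = List.of("/var/lib/cassandra/data",
                                          "/var/lib/cassandra/data");
        System.out.println(uniqueDataDirectories(configured));
    }
}
```

Filtering this way would avoid replaying the same transaction log replica twice in LogReplicaSet.addReplica; the alternative suggestion (failing fast with a clear message) would simply reject the config instead of silently deduplicating it.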