[jira] [Commented] (CASSANDRA-6596) Split out outgoing stream throughput within a DC and inter-DC
[ https://issues.apache.org/jira/browse/CASSANDRA-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044558#comment-14044558 ]

Thomas Vachon commented on CASSANDRA-6596:
------------------------------------------

I second this. It's a huge win for us, but we can't go to 2.1 yet.

> Split out outgoing stream throughput within a DC and inter-DC
> -------------------------------------------------------------
>
>                 Key: CASSANDRA-6596
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6596
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Jeremy Hanna
>            Assignee: Vijay
>            Priority: Minor
>             Fix For: 2.1 beta1
>
>         Attachments: 0001-CASSANDRA-6596.patch
>
>
> Currently the outgoing stream throughput setting doesn't differentiate
> between when it goes to another node in the same DC and when it goes to
> another DC across a potentially bandwidth-limited link. It would be nice to
> have that split out so that it could be tuned for each type of link.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
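[Editorial note: the split described above shipped in 2.1 as two separate cassandra.yaml throttles. The setting names below are the real 2.1 options; the values are purely illustrative, not recommendations.]

```yaml
# Per-node cap on all outbound streaming (bootstrap, repair, decommission),
# in megabits per second.
stream_throughput_outbound_megabits_per_sec: 200

# Separate, typically lower cap applied only to streams that cross data
# centers over a potentially bandwidth-limited link.
inter_dc_stream_throughput_outbound_megabits_per_sec: 50
```

Tuning the inter-DC value independently is exactly what the ticket asks for: intra-DC streaming can saturate the local network while cross-DC replication stays within WAN capacity.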
[jira] [Commented] (CASSANDRA-4492) HintsColumnFamily compactions hang when using multithreaded compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534973#comment-13534973 ]

Thomas Vachon commented on CASSANDRA-4492:
------------------------------------------

{quote}Looking at the code it seems there are two places where HintedHandoffManager calls a user defined compact() for all sstable{quote}

Well, that would explain why every time I start and I get hints, I get every sstable compacted.

> HintsColumnFamily compactions hang when using multithreaded compaction
> ----------------------------------------------------------------------
>
>                 Key: CASSANDRA-4492
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4492
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.0.11
>            Reporter: Jason Harvey
>            Priority: Minor
>         Attachments: jstack.txt
>
>
> Running into an issue on a 6 node ring running 1.0.11 where HintsColumnFamily
> compactions often hang indefinitely when using multithreaded compaction.
> Nothing of note in the logs. In some cases, the compaction hangs before a tmp
> sstable is even created.
> I've wiped out every hints sstable and restarted several times. The issue
> always comes back rather quickly and predictably. The compactions sometimes
> complete if the hint sstables are very small. Disabling multithreaded
> compaction stops this issue from occurring.
> Compactions of all other CFs seem to work just fine.
> This ring was upgraded from 1.0.7. I didn't keep any hints from the upgrade.
> I should note that the ring gets a huge amount of writes, and as a result the
> HintedHandoff rows can get quite wide. I didn't see any large-row compaction
> notices when the compaction was hanging (perhaps the bug was triggered by
> incremental compaction?). After disabling multithreaded compaction, several
> of the rows that were successfully compacted were over 1GB.
> Here is the output I see from compactionstats where a compaction has hung.
> The 'bytes compacted' column never changes.
> {code}
> pending tasks: 1
>   compaction type   keyspace       column family   bytes compacted   bytes total   progress
>        Compaction     system   HintsColumnFamily            268082     464784758      0.06%
> {code}
> The hung thread stack is as follows: (full jstack attached, as well)
> {code}
> "CompactionExecutor:37" daemon prio=10 tid=0x063df800 nid=0x49d9 waiting on condition [0x7eb8c6ffa000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for <0x00050f2e0e58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
>         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
>         at org.apache.cassandra.db.compaction.ParallelCompactionIterable$Deserializer.computeNext(ParallelCompactionIterable.java:329)
>         at org.apache.cassandra.db.compaction.ParallelCompactionIterable$Deserializer.computeNext(ParallelCompactionIterable.java:281)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at org.apache.cassandra.db.compaction.ParallelCompactionIterable$Unwrapper.computeNext(ParallelCompactionIterable.java:101)
>         at org.apache.cassandra.db.compaction.ParallelCompactionIterable$Unwrapper.computeNext(ParallelCompactionIterable.java:88)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:141)
>         at org.apache.cassandra.db.compacti
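[Editorial note: the stack trace above shows the compaction thread parked in `LinkedBlockingQueue.take()` inside `ParallelCompactionIterable$Deserializer.computeNext()`. The sketch below is a minimal, hypothetical stand-in, not Cassandra's actual code: it illustrates the general failure mode where a consumer blocks in `take()` forever because the producer side stopped feeding the queue and never enqueued an end-of-stream sentinel. `poll()` with a timeout is used so the stall is observable instead of reproducing the hang.]

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueHangSketch {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical stand-in for the deserialized-row queue: nothing is
        // ever produced, and no end-of-stream sentinel is enqueued.
        LinkedBlockingQueue<String> rows = new LinkedBlockingQueue<>();

        // rows.take() would park this thread indefinitely -- the
        // "WAITING (parking)" state in the attached jstack. A timed poll()
        // returns null instead, making the stalled producer visible.
        String row = rows.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(row == null
                ? "no row produced: a plain take() would hang here"
                : row);
    }
}
```

The fix direction implied by the trace is the usual one for this pattern: guarantee the producer always enqueues a sentinel (or propagates its failure to the consumer), or consume with a timeout.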
[jira] [Comment Edited] (CASSANDRA-4492) HintsColumnFamily compactions hang when using multithreaded compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534973#comment-13534973 ]

Thomas Vachon edited comment on CASSANDRA-4492 at 12/18/12 3:35 PM:
--------------------------------------------------------------------

{quote}Looking at the code it seems there are two places where HintedHandoffManager calls a user defined compact() for all sstable{quote}

Well, that would explain why every time I restart and I get hints, I get every sstable compacted.

was (Author: tvachon):
{quote}Looking at the code it seems there are two places where HintedHandoffManager calls a user defined compact() for all sstable{quote}

Well that would explain why everytime I start and I get hints, I get every sstable compacted

> HintsColumnFamily compactions hang when using multithreaded compaction
> ----------------------------------------------------------------------
[jira] [Commented] (CASSANDRA-4492) HintsColumnFamily compactions hang when using multithreaded compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529284#comment-13529284 ]

Thomas Vachon commented on CASSANDRA-4492:
------------------------------------------

No, update heavy though.

> HintsColumnFamily compactions hang when using multithreaded compaction
> ----------------------------------------------------------------------
[jira] [Commented] (CASSANDRA-4492) HintsColumnFamily compactions hang when using multithreaded compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529251#comment-13529251 ]

Thomas Vachon commented on CASSANDRA-4492:
------------------------------------------

[~jbellis] no, I can't. We turned off MT compaction and they have been compacted away.

> HintsColumnFamily compactions hang when using multithreaded compaction
> ----------------------------------------------------------------------
[jira] [Commented] (CASSANDRA-4492) HintsColumnFamily compactions hang when using multithreaded compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506806#comment-13506806 ]

Thomas Vachon commented on CASSANDRA-4492:
------------------------------------------

This actually is severe. Since the compactions hang, all schema changes are blocked.

> HintsColumnFamily compactions hang when using multithreaded compaction
> ----------------------------------------------------------------------
[jira] [Created] (CASSANDRA-3103) Slashes added before Node Names
Slashes added before Node Names
-------------------------------

                 Key: CASSANDRA-3103
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3103
             Project: Cassandra
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.8.4
         Environment: Debian 6.0.2
            Reporter: Thomas Vachon

Since 0.8.4, node names are preceded by a '/'. This did not occur in 0.8.3, and I see no reference to the change in the ChangeLog. It is breaking the Ruby Cassandra gem (we had to patch it), and I would imagine it breaks other systems as well. A sample from output.log:

 INFO 13:20:58,105 Node /10.2.115.166 has restarted, now UP again
 INFO 13:20:58,106 InetAddress /10.2.115.166 is now UP
 INFO 13:20:58,107 Node /10.2.115.166 state jump to normal
 INFO 13:20:58,113 Node /10.34.141.11 has restarted, now UP again
 INFO 13:20:58,113 InetAddress /10.34.141.11 is now UP
 INFO 13:20:58,113 Node /10.34.141.11 state jump to normal
 INFO 13:20:58,114 Node /10.100.219.107 has restarted, now UP again
 INFO 13:20:58,115 InetAddress /10.100.219.107 is now UP
 INFO 13:20:58,115 Node /10.100.219.107 state jump to normal

Previously only the bare IP was listed, without the leading '/'.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
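[Editorial note: the leading slash in these log lines is the default rendering of `java.net.InetAddress`: `toString()` produces "hostname/literal-IP", and when no hostname has been resolved the result is just "/10.2.115.166". A plausible explanation for the 0.8.4 change, though not confirmed by the ticket, is that the log statements began formatting the `InetAddress` object directly instead of calling `getHostAddress()`. A small demonstration:]

```java
import java.net.InetAddress;

public class SlashDemo {
    public static void main(String[] args) throws Exception {
        InetAddress node = InetAddress.getByName("10.2.115.166");

        // Logging the InetAddress itself invokes toString(), which renders
        // "hostname/literal-IP"; with no resolved hostname, only "/IP".
        System.out.println(node);                  // prints: /10.2.115.166

        // getHostAddress() returns the bare literal, with no slash.
        System.out.println(node.getHostAddress()); // prints: 10.2.115.166
    }
}
```

Clients parsing node names out of logs or JMX (such as the patched Ruby gem mentioned above) can strip the prefix, but the robust fix is to emit `getHostAddress()` at the logging site.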