Re: any way to get nodetool proxyhistograms data for an entire cluster?

2014-11-19 Thread Clint Kelly
Thanks for the reply.

We have DSE so I can use OpsCenter.  I was just looking for something more
precise than the graphs that I get from OpsCenter.

On Wed, Nov 19, 2014 at 5:53 PM, Rahul Neelakantan  wrote:

> So what do you use as a good alternative to it?
>
> Rahul Neelakantan
>
> On Nov 19, 2014, at 8:48 PM, Robert Coli  wrote:
>
> On Wed, Nov 19, 2014 at 3:22 PM, Clint Kelly 
> wrote:
>
>> Is there any way (other than me cooking up a little script) to
>> automatically get the proxyhistogram stats for my entire cluster?
>>
>
> OpsCenter might expose this as an aggregate, and can be used with free
> Apache Cassandra. Notice that I say "might" because I have no idea, because
> I don't use OpsCenter. :D
>
> =Rob
>
>
>


Question: How to monitor the QPS in Cassandra local node or cluster

2014-11-19 Thread luolee.me
Hi, everyone,
I want to monitor the Cassandra cluster using Zabbix, but I have no idea how
to monitor the QPS on a local Cassandra node.
I searched the internet but haven't found anything about how to get the QPS.
Does anyone have any idea?

Thanks!
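
(Not an answer from the original thread, just one possible starting point.) One way to feed a monitoring system like Zabbix is to sample Cassandra's JMX metrics directly. Below is a minimal sketch in Java, assuming the coordinator-level ClientRequest metric beans that 2.x exposes on the default JMX port 7199; the exact bean and attribute names should be verified with a JMX browser such as jconsole before relying on them.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QpsSampler {
    public static void main(String[] args) throws Exception {
        // Hypothetical node address; 7199 is Cassandra's default JMX port.
        String url = "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi";
        JMXConnector jmxc = JMXConnectorFactory.connect(new JMXServiceURL(url));
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // The coordinator read/write latency timers also expose request rates.
            ObjectName reads = new ObjectName(
                    "org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency");
            ObjectName writes = new ObjectName(
                    "org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency");
            // OneMinuteRate is a moving-average request rate (requests/second), i.e. roughly QPS.
            System.out.println("read qps:  " + mbsc.getAttribute(reads, "OneMinuteRate"));
            System.out.println("write qps: " + mbsc.getAttribute(writes, "OneMinuteRate"));
        } finally {
            jmxc.close();
        }
    }
}

Zabbix's JMX support (the Java gateway) should be able to poll the same attributes directly instead of running a custom sampler.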

querying data from Cassandra through the Spark SQL Thrift JDBC server

2014-11-19 Thread Mohammed Guller
Hi - I was curious if anyone is using the Spark SQL Thrift JDBC server with 
Cassandra. If so, it would be great if you could share how you got it working. For 
example, what config changes have to be made in hive-site.xml, what additional 
jars are required, etc.?

I have a Spark app that can programmatically query data from Cassandra using 
Spark SQL and Spark-Cassandra-Connector. No problem there, but I couldn't find 
any documentation for using the Thrift JDBC server for querying data from 
Cassandra.

Thanks,
Mohammed



Re: any way to get nodetool proxyhistograms data for an entire cluster?

2014-11-19 Thread Rahul Neelakantan
So what do you use as a good alternative to it?

Rahul Neelakantan

> On Nov 19, 2014, at 8:48 PM, Robert Coli  wrote:
> 
>> On Wed, Nov 19, 2014 at 3:22 PM, Clint Kelly  wrote:
>> Is there any way (other than me cooking up a little script) to automatically 
>> get the proxyhistogram stats for my entire cluster?
> 
> OpsCenter might expose this as an aggregate, and can be used with free Apache 
> Cassandra. Notice that I say "might" because I have no idea, because I don't 
> use OpsCenter. :D
> 
> =Rob
>  


Re: Trying to build Cassandra for FreeBSD 10.1

2014-11-19 Thread Michael Shuler

On 11/18/2014 04:58 PM, William Arbaugh wrote:

Happy to do so - but the ticket indicates that FreeBSD is unsupported and thus 
this is unlikely to get fixed.


I'm the person that said that in the JIRA ticket  :)  I also quoted it 
to indicate that it's really not officially "unsupported" - it's unix 
and it should "Just Work".  If there's a way to track down the problem 
and fix it, let's do that!  I'm simply asking for help in identifying 
the problem, since I don't spend much^Hany time in FreeBSD these days, 
so I'm going to rely on folks that are using it.  That would be y'all.  ;)


--
Warm regards,
Michael


https://issues.apache.org/jira/browse/CASSANDRA-8325




Re: any way to get nodetool proxyhistograms data for an entire cluster?

2014-11-19 Thread Robert Coli
On Wed, Nov 19, 2014 at 3:22 PM, Clint Kelly  wrote:

> Is there any way (other than me cooking up a little script) to
> automatically get the proxyhistogram stats for my entire cluster?
>

OpsCenter might expose this as an aggregate, and can be used with free
Apache Cassandra. Notice that I say "might" because I have no idea, because
I don't use OpsCenter. :D

=Rob


any way to get nodetool proxyhistograms data for an entire cluster?

2014-11-19 Thread Clint Kelly
If I run this tool on a given host, it shows me stats for only the cases
where that host was the coordinator node, correct?

Is there any way (other than me cooking up a little script) to
automatically get the proxyhistogram stats for my entire cluster?

-Clint
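
(Not from the thread, just an illustration.) Absent a built-in aggregate, the "little script" can be quite small; here is a minimal sketch in Java, assuming nodetool is on the PATH and the node addresses are supplied by hand, that simply collects each node's proxyhistograms output in one place.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Arrays;
import java.util.List;

public class ClusterProxyHistograms {
    public static void main(String[] args) throws Exception {
        // Placeholder node addresses -- replace with your own cluster's hosts.
        List<String> hosts = Arrays.asList("10.0.0.1", "10.0.0.2", "10.0.0.3");
        for (String host : hosts) {
            System.out.println("=== " + host + " ===");
            // Runs the same command you would run by hand against each node.
            Process p = new ProcessBuilder("nodetool", "-h", host, "proxyhistograms")
                    .redirectErrorStream(true)
                    .start();
            BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
            p.waitFor();
        }
    }
}

Note this only concatenates the per-node output; merging into true cluster-wide percentiles would need the raw histogram buckets from JMX rather than nodetool's formatted text.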


Re: read repair across DC and latency

2014-11-19 Thread Jimmy Lin
Tyler,
thanks for the detail explanation.
Still have few questions in my mind

#
When you said send a "read digest request" to the rest of the replicas, do you
mean all replicas in the current and the other DC, or just the one last replica
in my current DC and one coordinator node in the other DC?

(our reads and writes all use "local_quorum" with a replication factor of 3,
dclocal_read_repair_chance=0)

#
Sending "read digest request" to other DC, happen sequently correct? If
network latency between DC is bad during time, will that affect overall
read latency?

#
We observe that one of our CQL queries performs okay under normal load, but
degrades greatly when a batch of the same CQL query (looking up the exact same
columns and key) is sent to the server in a short period of time (say 100 of
them within a second).
Our other tables and keyspaces don't see any latency drop during that time, so
I am not sure we are hitting capacity yet. So we suspect read_repair_chance
may have something to do with it.
Is there anything we can look into to see what may cause the latency spike when
a large number of the same CQL query hits the server?

Thanks






On Wed, Nov 19, 2014 at 7:49 AM, Tyler Hobbs  wrote:

>
> On Sun, Nov 16, 2014 at 5:13 PM, Jimmy Lin  wrote:
>
>> I have read that read repair is supposed to run in the background, but
>> does the co-ordinator node need to wait for the response (along with other
>> normal read tasks) before returning the entire result back to the caller?
>>
>
> For the 10% of requests where read repair is triggered, the coordinator
> will send a request to every replica.  (A data request to two replicas,
> digest requests to the rest.)  Once enough replicas have replied to satisfy
> the consistency level, the result will be returned to the client; if
> there's a mismatch in the responses from the replicas, a blocking repair
> will be performed before responding to the client.  Later, in the
> background, the coordinator will check the remaining responses from
> replicas to see if they match up.  If any of them do not, they will be
> repaired in the background.
>
>
>>
>> #
>> how a high rate of read repair impact performance? I read something that
>> it will impact through put but not latency, how so?
>>
>
> That's correct, it should impact throughput but not necessarily latency.
> Throughput is lower because more replicas have to do work, but latency is
> unaffected (unless you're hitting capacity) because blocking repair only
> happens under the same conditions that it normally does.
>
>
>>
>> #
>> is it safe to even just  make read_repair_chance = 0?
>> (since we are mostly talking to one DC, the other DC most of the time
>> serve as backup/emergency )
>>
>
> Sure, it's safe enough.  People use read repair for different reasons.
> Some would say that RR keeps their other datacenter's caches warm. Others
> rely on it in place of normal repairs (which is not particularly safe, but
> if your consistency requirements allow for it, it's fine).  If you're
> running regular repairs anyway, it's safe to turn off read repair.
>
>
> --
> Tyler Hobbs
> DataStax 
>
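
For reference, a minimal sketch of the change discussed above (turning off read repair for a table), using the DataStax Java driver; the contact point, keyspace and table names are placeholders. The same ALTER TABLE statement can of course be run from cqlsh instead.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DisableReadRepair {
    public static void main(String[] args) {
        // Placeholder contact point; use one of your own nodes.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect();
            // Turn off both cross-DC and local-DC read repair for one table.
            // Regular anti-entropy repairs still need to run separately.
            session.execute("ALTER TABLE my_ks.my_table WITH read_repair_chance = 0 "
                    + "AND dclocal_read_repair_chance = 0");
        } finally {
            cluster.close();
        }
    }
}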


Re: Force purging of tombstones

2014-11-19 Thread Robert Coli
On Tue, Nov 18, 2014 at 12:41 AM, Rahul Neelakantan  wrote:

> Is this page incorrect then and needs to be updated or am I interpreting
> it incorrectly ?
>
>
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html
>
> Particularly this sentence
>
> "After data is marked with a tombstone, the data is automatically removed
> during the normal compaction and repair processes"
>

Yes, it is incorrect. Repair does not remove anything. I have bcc:ed docs
AT datastax for their information.

=Rob


Re: Cassandra backup via snapshots in production

2014-11-19 Thread Robert Coli
On Tue, Nov 18, 2014 at 6:50 AM, Ngoc Minh VO 
wrote:

>   We are looking for a solution to back up data in our C* cluster (v2.0.x,
> 16 nodes, 4 x 500GB SSD, RF = 6 over 2 datacenters).
>
> The main purpose is to protect us from human errors (e.g. unexpected
> manipulations: delete, drop tables, …).
>

https://github.com/JeremyGrosser/tablesnap

=Rob


Re: Deduplicating data on a node (RF=1)

2014-11-19 Thread Robert Coli
On Tue, Nov 18, 2014 at 10:04 AM, Alain Vandendorpe 
wrote:

> Rob - thanks for that, I was wondering whether either of those would
> successfully deduplicate the data. We were hypothesizing that a
> decommission would merely stream the duplicates out as well, as though they
> were valid data - is this not the case?
>

That's a good question and actually your hypothesis is correct, so that is
not in fact a solution. D'oh! :D

> After some discussion just now a colleague is suggesting we force them to
> L0[1] - would you agree this should be equivalent to option 2, albeit with
> downtime?
>

 Yes, I think your colleague's suggestion is the only possible LCS
equivalent of option 2.

Hopefully you are in a version in which it is easy to set the LCS level of
SSTables, and have plenty of spare iops..

=Rob


Re: Removing commit log files

2014-11-19 Thread Robert Coli
On Tue, Nov 18, 2014 at 6:30 PM, Jacob Rhoden  wrote:

> Is it correct to assume that if you do a “nodetool drain” on a node and
> then shut down that node, you can safely remove all commit logs on that node as
> long as all nodes are up?
>

Assuming you are in a version where nodetool drain actually works, yes.
Most nodetool drain failures actually result in over-replay, not
under-flushing, so probably you are even ok in those versions.


> I have some VPS’s with low amounts of disk space that could do with that space
> being recovered. I also assume this means startup time for that node will
> be drastically faster.
>

Yes, though you're trading shutdown time for startup time.


> In the past I have also experienced cases where, if my tables had been
> altered, a restart would fail due to commit logs not being able to be
> replayed. I assume draining and removing the commit logs would be a good
> thing to do if you are worried about that bug occurring again (I don’t know
> if that bug was fixed or not).
>

Yes.

=Rob
http://twitter.com/rcolidba


Re: Repair completes successfully but data is still inconsistent

2014-11-19 Thread Robert Coli
On Wed, Nov 19, 2014 at 5:18 AM, André Cruz  wrote:

> Each node has 4-9 of these exceptions as it is going down after being
> drained. It seems Cassandra was trying to delete an sstable. Can this be
> related?
>

That seems plausible, though the fact that the files you indicate now hold the
versions of the data suggests that the sstable was eventually
successfully deleted.

My hunch is that you originally triggered this by picking up some obsolete
SSTables during the 1.2 era. Probably if you clean up the existing zombies
you will not encounter them again, unless you encounter another "obsolete
sstables marked live" bug. I agree that your compaction exceptions in the
1.2 era are likely implicated.

=Rob


Re: sstables keep growing on cassandra 2.1

2014-11-19 Thread Colin Kuo
Hi,

Can you please first check "nodetool compactionstats" during the repair?
I'm afraid that minor compaction may be blocked by whatever task is
causing the number of SSTables to keep growing.
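
(As an aside, not part of the thread:) a minimal sketch of watching the same thing over JMX instead of repeatedly running nodetool, assuming the Compaction PendingTasks gauge exposed on the default JMX port 7199; the bean and attribute names should be double-checked with jconsole for your version.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PendingCompactionsWatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical node address; 7199 is Cassandra's default JMX port.
        JMXConnector jmxc = JMXConnectorFactory.connect(new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi"));
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ObjectName pending = new ObjectName(
                    "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
            // Sample every 10 seconds for ~10 minutes while the repair runs.
            for (int i = 0; i < 60; i++) {
                System.out.println("pending compactions: " + mbsc.getAttribute(pending, "Value"));
                Thread.sleep(10000);
            }
        } finally {
            jmxc.close();
        }
    }
}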

On Sat, Nov 15, 2014 at 7:47 AM, James Derieg 
wrote:

> Hi everyone,
> I'm hoping someone can help me with a weird issue on Cassandra 2.1.
> The sstables on my cluster keep growing to a huge number when I run a
> nodetool repair.  On the attached graph, I ran a manual 'nodetool compact'
> on each node in the cluster, which brought them back down to a low number
> of sstables.  Then I immediately ran a nodetool repair, and the sstables
> jumped back up.  Has anyone seen this behavior?  Is this expected? I have
> some 2.0 clusters in the same environment, and they don't do this.
> Thanks in advance for your help.
>


Re: A tale of a node that never joins...

2014-11-19 Thread Stan Lemon
We are currently using 2.0.11

Thanks,
Stan


> Hello Stan
>
>  Which version of Cassandra are you using ? There are some known issues of
> streaming failure that prevent a node from finishing joining
>
>  Regards
>
> On Wed, Nov 19, 2014 at 3:57 PM, Stan Lemon  wrote:
>
> > Hello,
> > I'm working on a two data center cluster with 12 nodes in each data
> > center. I recently wanted to add a thirteenth node to one of the data
> > centers to try and validate some load improvements to our hardware
> > configuration. I added the node following DataStax directions (
> > http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_add_node_to_cluster_t.html)
> > and the node appeared to bootstrap correctly and start joining.
> >
> > I monitored the load and watched it increase, periodically checking iotop
> > to make sure there was still a pulse. Eventually the load topped out at
> > roughly 85% of the average of the other nodes, iotop showed lots of
> > activity.  After a few hours iotop stopped showing activity and the node's
> > load had gone down a small amount, ~50-100mb.  Average load on the other
> > nodes is about ~550gb
> >
> > The first time I tried this I let the process run through the weekend,
> > periodically checking on it.  Something happened Monday morning which
> > caused Cassandra to die, so I restarted the process. The load immediately
> > began growing, eventually doubling that 85% marker and settling in around
> > ~935gb, way more than any other node. When it reached this point it did the
> > same thing though, basically stalled out.
> >
> > The whole time nodetool status just showed "UJ".
> >
> > Finally I aborted and cleared the node's data directory and started over,
> > but again experienced the same stall out at the 85% mark. The node took no
> > time at all to get to that point; it was only a few hours. It's now been
> > sitting at 85% for roughly 20 hours and iotop shows no activity.
> >
> > I am wondering a few things...
> > 1. What's going on?
> > 2. How do I get more information about what is happening with the join
> > process?
> > 3. Has anyone seen this before?
> >
> > Thanks for your help,
> > Stan
> >
> >


Re: A tale of a node that never joins...

2014-11-19 Thread DuyHai Doan
Hello Stan

 Which version of Cassandra are you using ? There are some known issues of
streaming failure that prevent a node from finishing joining

 Regards

On Wed, Nov 19, 2014 at 3:57 PM, Stan Lemon  wrote:

> Hello,
> I'm working on a two data center cluster with 12 nodes in each data
> center. I recently wanted to add a thirteenth node to one of the data
> centers to try and validate some load improvements to our hardware
> configuration. I added the node following DataStax directions (
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_add_node_to_cluster_t.html)
> and the node appeared to bootstrap correctly and start joining.
>
> I monitored the load and watched it increase, periodically checking iotop
> to make sure there was still a pulse. Eventually the load topped out at
> roughly 85% of the average of the other nodes, iotop showed lots of
> activity.  After a few hours iotop stopped showing activity and the node's
> load had gone down a small amount, ~50-100mb.  Average load on the other
> nodes is about ~550gb
>
> The first time I tried this I let the process run through the weekend,
> periodically checking on it.  Something happened Monday morning which
> caused Cassandra to die, so I restarted the process. The load immediately
> began growing, eventually doubling that 85% marker and settling in around
> ~935gb, way more than any other node. When it reached this point it did the
> same thing though, basically stalled out.
>
> The whole time nodetool status just showed "UJ".
>
> Finally I aborted and cleared the node's data directory and started over,
> but again experienced the same stall out at the 85% mark. The node took no
> time at all to get to that point; it was only a few hours. It's now been
> sitting at 85% for roughly 20 hours and iotop shows no activity.
>
> I am wondering a few things...
> 1. What's going on?
> 2. How do I get more information about what is happening with the join
> process?
> 3. Has anyone seen this before?
>
> Thanks for your help,
> Stan
>
>


A tale of a node that never joins...

2014-11-19 Thread Stan Lemon
Hello,
I'm working on a two data center cluster with 12 nodes in each data center.
I recently wanted to add a thirteenth node to one of the data centers to
try and validate some load improvements to our hardware configuration. I
added the node following DataStax directions (
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_add_node_to_cluster_t.html)
and the node appeared to bootstrap correctly and start joining.

I monitored the load and watched it increase, periodically checking iotop
to make sure there was still a pulse. Eventually the load topped out at
roughly 85% of the average of the other nodes, iotop showed lots of
activity.  After a few hours iotop stopped showing activity and the node's
load had gone down a small amount, ~50-100mb.  Average load on the other
nodes is about ~550gb

The first time I tried this I let the process run through the weekend,
periodically checking on it.  Something happened Monday morning which
caused Cassandra to die, so I restarted the process. The load immediately
began growing, eventually doubling that 85% marker and settling in around
~935gb, way more than any other node. When it reached this point it did the
same thing though, basically stalled out.

The whole time nodetool status just showed "UJ".

Finally I aborted and cleared the node's data directory and started over,
but again experienced the same stall out at the 85% mark. The node took no
time at all to get to that point; it was only a few hours. It's now been
sitting at 85% for roughly 20 hours and iotop shows no activity.

I am wondering a few things...
1. What's going on?
2. How do I get more information about what is happening with the join
process?
3. Has anyone seen this before?

Thanks for your help,
Stan


cassandra-stress: Clarification on yaml profile needed

2014-11-19 Thread Preussner, Jens
Hi all,
can someone point me to the latest documentation on how a yaml profile has to 
look for the latest cassandra-stress?
There seem to be some differences between the format described in the blog 
(http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema)
 and the format used in the examples in git 
(https://github.com/apache/cassandra/blob/cassandra-2.1/tools/cqlstress-example.yaml),
 especially in the insert block.

Maybe I'm just missing something.

Is anyone experienced enough to help me understand how cassandra-stress generates 
batches from the profile? How many batches are generated, and how does 
cassandra-stress determine the number of rows per batch and the total number of 
rows in a partition? How do the population and cluster definitions in the 
columnspec interact with the insert block?

Thanks a lot!

Best regards,
Jens Preußner

Max Planck Institute for Heart and Lung Research
ECCPS Bioinformatics Service
Ludwigstraße 43 - FGI
61231 Bad Nauheim



Re: Repair completes successfully but data is still inconsistent

2014-11-19 Thread André Cruz
On 19 Nov 2014, at 11:37, André Cruz  wrote:
> 
> All the nodes were restarted on 21-23 October, for the upgrade (1.2.16 -> 
> 1.2.19) I mentioned. The delete happened after. I should also point out that 
> we were experiencing problems related to CASSANDRA-4206 and CASSANDRA-7808.

Another possible cause are these exceptions I found in the log as the nodes 
were shutdown and brought up with the new version:

INFO [RMI TCP Connection(270364)-10.134.101.18] 2014-10-21 15:04:00,867 
StorageService.java (line 939) DRAINED
ERROR [CompactionExecutor:15173] 2014-10-21 15:04:01,923 CassandraDaemon.java 
(line 191) Exception in thread Thread[CompactionExecutor:15173,1,main]
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@1dad30c0 
rejected from 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@555b9c78[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 14052]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown 
Source)
at java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
at 
java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(Unknown 
Source)
at java.util.concurrent.ScheduledThreadPoolExecutor.submit(Unknown 
Source)
at 
org.apache.cassandra.io.sstable.SSTableDeletingTask.schedule(SSTableDeletingTask.java:65)
at 
org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:976)
at 
org.apache.cassandra.db.DataTracker.removeOldSSTablesSize(DataTracker.java:370)
at org.apache.cassandra.db.DataTracker.postReplace(DataTracker.java:335)
at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:329)
at 
org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:232)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:995)
at 
org.apache.cassandra.db.compaction.CompactionTask.replaceCompactedSSTables(CompactionTask.java:270)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:208)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)


Each node has 4-9 of these exceptions as it is going down after being drained. 
It seems Cassandra was trying to delete an sstable. Can this be related?

Best regards,
André Cruz

Re: Repair completes successfully but data is still inconsistent

2014-11-19 Thread André Cruz
On 19 Nov 2014, at 00:43, Robert Coli  wrote:
> 
> @OP : can you repro if you run a major compaction between the deletion and 
> the tombstone collection?

This happened in production and, AFAIK, for the first time in a system that has 
been running for 2 years. We upgraded the Cassandra version last month, so 
there’s that difference, but the upgrade happened before the original delete of 
this column.

I have found more examples of zombie columns like this (approx. 30k columns out 
of a 1.2M total) and they are all in this same row of this CF. I should point out 
that we have a sister CF where we do similar inserts/deletes, but it uses STCS, 
and it doesn’t exhibit this problem. 

I don’t think I can reproduce this easily in a test environment.

> 
> Basically, I am conjecturing that a compaction bug or one of the handful of 
> "unmask previously deleted data" bugs are resulting in the unmasking of a 
> non-tombstone row which is sitting in a SStable.
> 
> OP could also support this conjecture by running sstablekeys on other 
> SSTables on "3rd replica" and determining what masked values there are for 
> the row prior to deletion. If the data is sitting in an old SStable, this is 
> suggestive.

There are 3 sstables that have this row on the 3rd replica:

Disco-NamespaceFile2-ic-5337-Data.db.json - Has the column tombstone
Disco-NamespaceFile2-ic-5719-Data.db.json - Has no value for this column
Disco-NamespaceFile2-ic-5748-Data.db.json - Has the original value

> 
> One last question for OP would be whether the nodes were restarted during the 
> time period this bug was observed. An assortment of the "unmask previously 
> deleted data" bugs come from "dead" sstables in the data directory being 
> marked "live" on a restart.

All the nodes were restarted on 21-23 October, for the upgrade (1.2.16 -> 
1.2.19) I mentioned. The delete happened after. I should also point out that we 
were experiencing problems related to CASSANDRA-4206 and CASSANDRA-7808.

ERROR 15:01:51,885 Exception in thread Thread[CompactionExecutor:15172,1,main]
java.lang.AssertionError: originally calculated column size of 78041151 but now 
it is 78041303
at 
org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:208)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

We saw that line multiple times in the logs, always for the same row (the same 
78041151 and 78041303 sizes every time), even though the data seemed fine. Could 
that row be the one experiencing problems now? Maybe with the upgrade the new 
Cassandra correctly compacted this row and all hell broke loose?

If so, is there an easy way to fix this? Shouldn’t repair also propagate this 
zombie column to the other nodes?

Thank you and best regards,
André Cruz




Re: Working with legacy data via CQL

2014-11-19 Thread Erik Forsberg
On 2014-11-19 01:37, Robert Coli wrote:
> 
> Thanks, I can reproduce the issue with that, and I should be able to
> look into it tomorrow.  FWIW, I believe the issue is server-side,
> not in the driver.  I may be able to suggest a workaround once I
> figure out what's going on.
> 
> 
> Is there a JIRA tracking this issue? I like being aware of potential
> issues with "legacy tables" ... :D

I created one, just for you! :-)

https://issues.apache.org/jira/browse/CASSANDRA-8339

\EF