have some wide partitions that contain many of your rows.
>
> Chris Lohfink
>
> On Wed, Jul 27, 2016 at 1:44 PM, Luke Jolly <l...@getadmiral.com> wrote:
>
>> I have a table that I'm storing ad impression data in with every row
>> being an impression. I want to get a count of total rows / impressions. I
>> know that there is in the ballpark of 200-400 million rows in this table
>> and from my reading "Number of keys" in the output of cfstats should be a
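As Chris points out above, "Number of keys" in cfstats estimates the number of *partitions*, not rows, so wide partitions make it undercount impressions badly. A minimal Python sketch of the distinction (the partition names and per-partition row counts here are made up for illustration):

```python
# Toy illustration: cfstats' "Number of keys" roughly tracks partition
# count, while SELECT COUNT(*) counts rows. Wide partitions hold many
# rows each, so the two numbers can differ by orders of magnitude.
partitions = {
    "ad_1": 50_000,   # impressions stored under this partition key
    "ad_2": 120_000,
    "ad_3": 30_000,
}

number_of_keys = len(partitions)        # what cfstats estimates
total_rows = sum(partitions.values())   # what COUNT(*) would return

print(number_of_keys)  # 3
print(total_rows)      # 200000
```

With 200-400 million rows spread over far fewer partition keys, a "Number of keys" figure well below the row count is expected, not a sign of data loss.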
25, 2016 at 3:11 PM Luke Jolly <l...@getadmiral.com> wrote:
> So I figured out the main cause of the problem. The seed node was set to
> itself. That's what got it in a weird state. The second part was that I
> didn't know the default repair is incremental, as I was accidentally looking
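The incremental-versus-full distinction mentioned above can be sketched as a toy model (this is an illustration, not Cassandra's actual repair code): incremental repair only considers SSTables not yet marked repaired, so a second incremental run skips everything the first covered, whereas a full repair (`nodetool repair -full`) always compares all data.

```python
# Toy model of incremental vs. full repair (not Cassandra's real code):
# incremental repair skips SSTables already flagged as repaired.
sstables = [
    {"name": "ma-1-big", "repaired": True},   # repaired in an earlier session
    {"name": "ma-2-big", "repaired": False},
    {"name": "ma-3-big", "repaired": False},
]

def repair(tables, full=False):
    """Return the SSTables this repair run would actually compare."""
    candidates = tables if full else [t for t in tables if not t["repaired"]]
    for t in candidates:
        t["repaired"] = True              # incremental marks them repaired
    return [t["name"] for t in candidates]

print(repair(sstables))             # incremental: ['ma-2-big', 'ma-3-big']
print(repair(sstables))             # second incremental run: []
print(repair(sstables, full=True))  # full: all three SSTables
```

This is why running the default (incremental) repair after a botched earlier session can appear to do nothing: the data in question may already be marked repaired.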
ing RF is 2 so both nodes should have all
>>> partitions. Luke, are you sure the repair is succeeding? You don't have
>>> other keyspaces/duplicate data/extra data in your cassandra data directory?
>>> Also, you could try querying on the node with less data to confirm if it
>
ce if tombstones are moved
> around during repair, but I didn't find evidence of it. However, I see no
> reason it would, because if the node didn't have data then streaming
> tombstones does not make a lot of sense.
>
> Regards,
> Bhuvan
>
> On Tue, May 24, 2016 at 11:06 PM, Luke
, kurt Greaves <k...@instaclustr.com> wrote:
> Do you have 1 node in each DC or 2? If you're saying you have 1 node in
> each DC then an RF of 2 doesn't make sense. Can you clarify what your
> setup is?
>
> On 23 May 2016 at 19:31, Luke Jolly <l...@getadmiral.com> wrote:
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should, but the Load showed that it didn't
actually
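The "Owns" vs. "Load" discrepancy described above can be sketched with a toy model (the function name and formula here are mine, for illustration only, not Cassandra's code): under NetworkTopologyStrategy, raising the per-DC replication factor immediately changes which nodes are *replicas* for each partition, which is what "Owns" reports, but a node only physically holds that data ("Load") after repair streams it over.

```python
# Toy model: fraction of partitions each node in a DC is a replica for,
# given the DC's node count and replication factor. "Owns" reflects this
# immediately after ALTER KEYSPACE; "Load" catches up only after repair.
def ownership(nodes_in_dc, rf):
    return min(rf / nodes_in_dc, 1.0)

print(ownership(2, 1))  # 0.5 -> "Owns 50%" before raising RF
print(ownership(2, 2))  # 1.0 -> "Owns 100%" right after, before any repair
```

So seeing "Owns" jump to 100% while "Load" stays flat is the expected symptom of a repair that never actually streamed the data.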