[…] Faraz Mateen <fmat...@an10.io> wrote:
> Thanks for the response guys.
>
> Let me try setting token ranges manually and move the data again to
> correct nodes. Will update with the outcome soon.
>
>
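Setting token ranges manually, as mentioned above, is typically done by pinning each node's token via initial_token in cassandra.yaml before the node starts. A sketch for one node of a hypothetical 3-node Murmur3Partitioner cluster (the token values are illustrative, splitting the full Murmur3 range into three even parts):

```yaml
# cassandra.yaml on node 1; nodes 2 and 3 would get -3074457345618258603
# and 3074457345618258602 respectively
num_tokens: 1
initial_token: -9223372036854775808
```

After copying SSTables into place, a nodetool refresh (or a repair) is typically needed so each node picks up the files and only serves data it actually owns.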
> On Tue, Apr 17, 2018 at 5:42 AM, kurt greaves <k...@instaclustr.com> wrote:
> […] (the standard recommendation for full repairs). Inconsistency can result
> from a whole range of conditions, from nodes being down, to the cluster being
> overloaded, to network issues.
>
> Cheers
> Ben
>
> On Tue, 6 Mar 2018 at 22:18 Faraz Mateen <fmat...@an10.io> wrote:
>
>> Thanks […]
Hi everyone,

I seem to have hit a problem in which writing to Cassandra through a Python
script fails and also occasionally causes a Cassandra node to crash. Here are
the details of my problem.

I have a Python-based streaming application that reads data from Kafka at a
high rate and pushes it to Cassandra.
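For reference, the consume-from-Kafka-and-write-to-Cassandra loop being described usually looks something like the sketch below. The consumer and session here are minimal stand-in stubs (the real ones would come from kafka-python and cassandra-driver), and the table and column names are made up; only the buffered-write structure is the point, since firing one execute() per message at a very high rate is a common way to overload a node.

```python
# Sketch of the consume-and-write loop described above. StubConsumer and
# StubSession stand in for kafka.KafkaConsumer and a cassandra-driver
# Session; the INSERT statement and its columns are illustrative.

class StubConsumer:
    """Stand-in for kafka.KafkaConsumer: yields a fixed set of messages."""
    def __init__(self, messages):
        self._messages = messages

    def __iter__(self):
        return iter(self._messages)


class StubSession:
    """Stand-in for cassandra.cluster.Session: records executed statements."""
    def __init__(self):
        self.executed = []

    def execute(self, statement, params):
        self.executed.append((statement, params))


INSERT_CQL = "INSERT INTO metrics (device_id, ts, value) VALUES (%s, %s, %s)"


def _flush(session, rows):
    for device_id, ts, value in rows:
        session.execute(INSERT_CQL, (device_id, ts, value))
    return len(rows)


def stream_to_cassandra(consumer, session, batch_size=100):
    """Consume messages and write them in bounded chunks.

    Buffering and flushing in chunks keeps write pressure on the
    coordinator bounded. Returns the number of rows written.
    """
    written = 0
    buffer = []
    for msg in consumer:
        buffer.append(msg)
        if len(buffer) >= batch_size:
            written += _flush(session, buffer)
            buffer = []
    written += _flush(session, buffer)  # flush the tail
    return written


if __name__ == "__main__":
    consumer = StubConsumer([("dev-1", i, float(i)) for i in range(250)])
    session = StubSession()
    print(stream_to_cassandra(consumer, session))  # prints 250
```

With the real driver, the flush step would also be the natural place to apply backpressure (e.g. bounding the number of in-flight async writes) rather than pushing at the full Kafka read rate.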
> On Tue, Mar 13, 2018 at 5:17 PM, Goutham reddy <goutham.chiru...@gmail.com> wrote:
>
>> Faraz,
>> Can you share your code snippet showing how you are trying to save the
>> entity objects into Cassandra?
>>
>> Thanks and Regards,
[…] unable to hold data in memory for 128 ms, considering that I have 30 GB
of RAM on each node.
On Wed, Mar 14, 2018 at 2:24 PM, Faraz Mateen <fmat...@an10.io> wrote:
> Thanks for the response.
>
> Here is the output of "DESCRIBE" on my table:
>
> http[…]
>> What is strange is why the tables don't show up if the keyspaces are
>> visible. Shouldn't that be metadata that can be edited once and then be
>> visible?
>>
>> Affan
>>
>> On Thu, Apr 5, 2018 at 7:55 PM, Michael Shule[…] wrote:

[…] sstableloader or remote seeding are also a couple of options, but they
will take a lot of time. Does anyone know an easier way to shift all my data
to a new setup on DC/OS?
--
Faraz Mateen
On Tue, Apr 10, 2018 at 4:28 PM, Faraz Mateen <fmat...@an10.io> wrote:
> Sorry for the late reply. I was trying to figure out some other approach
> to it.
>
> @Kurt - My previous cluster has 3 nodes but the replication factor is 2. I
> am not exactly sure how I would han[…]

> […] it won't be a replica for all the data in those SSTables and
> consequently you'll lose data (or it simply won't be available).
--
Faraz Mateen
Hi everyone,

I am trying to use Spark to process a large Cassandra table (~402 million
entries and 84 columns), but I am getting inconsistent results. Initially the
requirement was to copy some columns from this table to another table. After
copying the data, I noticed that some entries in the […]
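For what it's worth, the copy-some-columns step being described reduces to a projection on each row. A minimal stand-alone sketch of that logic, with plain Python lists of dicts standing in for the Spark DataFrame and the Cassandra tables, and hypothetical column names (with Spark this would be a select() followed by a write to the target table):

```python
# Sketch of the "copy some columns to another table" step. The rows and
# column names are made up; plain Python stands in for Spark/Cassandra.

def copy_columns(source_rows, columns):
    """Project each source row down to the requested columns."""
    return [{col: row[col] for col in columns} for row in source_rows]


source = [
    {"id": 1, "ts": 100, "value": 3.5, "extra": "a"},
    {"id": 2, "ts": 101, "value": 4.0, "extra": "b"},
]

target = copy_columns(source, ["id", "ts", "value"])
print(target[0])  # {'id': 1, 'ts': 100, 'value': 3.5}

# A cheap sanity check after the copy: the row counts must match. If they
# don't, something (concurrent writes, failed/retried tasks, read
# consistency level) dropped or duplicated rows along the way.
assert len(target) == len(source)
```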
>> […] Spark can be taken out of the equation.
>>
>> Now, while you are running these queries, is there another process or
>> thread that is also writing at the same time? If yes, then your results
>> are fine, but if not, you may want to try nodetool flush first and then […]