operations. However, we
are usually able to generate the list of keys outside of Cassandra…
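For illustration, a minimal sketch of that approach with the DataStax Python driver (the keyspace, table, and column names, the keys.txt file, and the transform() helper are all hypothetical placeholders):

from cassandra.cluster import Cluster

def transform(key):
    # placeholder for whatever per-row transformation is needed
    return key.upper()

cluster = Cluster(['10.0.0.1'])
session = cluster.connect('mykeyspace')

# Prepared once, reused for every key
update = session.prepare("UPDATE mytable SET value = ? WHERE id = ?")

# keys.txt holds the key list generated outside of Cassandra
with open('keys.txt') as f:
    for line in f:
        key = line.strip()
        session.execute(update, (transform(key), key))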
Sean Durity – Cassandra Admin, Home Depot
From: Pavel Velikhov [mailto:pavel.velik...@gmail.com]
Sent: Thursday, February 12, 2015 4:23 AM
To: user@cassandra.apache.org
Subject: Re: Two problems
On Feb 12, 2015, at 12:37 AM, Robert Coli <rc...@eventbrite.com> wrote:
On Wed, Feb 11, 2015 at 2:22 AM, Pavel Velikhov <pavel.velik...@gmail.com> wrote:
2. While trying to update the full dataset with a simple transformation
(again via the Python driver), both single-node and clustered Cassandra run
out of memory no matter what settings I try, even if I put in a lot of sleeps
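One way to keep memory bounded during a full-table transform like this, sketched with the DataStax Python driver against a hypothetical table: page the scan with a small fetch_size and rewrite rows one at a time instead of in large batches. (Transparent result paging needs Cassandra 2.0+ and native protocol v2, which lines up with the suggestion below to try a 2.0.x release.)

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['10.0.0.1'])
session = cluster.connect('mykeyspace')

# Small pages keep the client and the coordinator from buffering huge result sets
scan = SimpleStatement("SELECT id, value FROM mytable", fetch_size=500)
update = session.prepare("UPDATE mytable SET value = ? WHERE id = ?")

for row in session.execute(scan):        # the driver fetches page by page
    # stand-in for whatever the actual per-row transformation is
    session.execute(update, (row.value.upper(), row.id))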
Hi Carlos,
I tried on a single node and on a 4-node cluster. On the 4-node cluster I set
up the tables with replication factor = 2.
I usually iterate over a subset, but it can be ~40% right now. Some of my
column values can be quite big… I remember I was exporting to CSV and I had to
Hello Pavel,
What is the size of the cluster (# of nodes)? And do you need to iterate over
the full 1TB every time you do the update, or just parts of it?
IMO there is too little information to make any kind of assessment of the
problem you are having.
I can suggest trying a 2.0.x (or 2.1.1) release to see…
The update itself should not be a problem, because no read is done, so there
is no need to pull the data out.
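A tiny sketch of what "no read is done" means here, against a hypothetical table: a CQL UPDATE is a blind write, so the existing row is never fetched first, however large it is.

from cassandra.cluster import Cluster

session = Cluster(['10.0.0.1']).connect('mykeyspace')

# This writes the new cell values directly; Cassandra does not read
# the old row before applying the update.
session.execute("UPDATE mytable SET value = %s WHERE id = %s",
                ('new-value', 42))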
Is that row bigger than your memory capacity (or heap size)? For dealing
with large heaps you can refer to this ticket: CASSANDRA-8150. It provides
some nice tips.
If someone else can share their experience, that would