Thanks, Rob, for clarifying!
- Takenori
(2013/09/18 10:01), Robert Coli wrote:
On Tue, Sep 17, 2013 at 5:46 PM, Takenori Sato <ts...@cloudian.com> wrote:
So in fact, incremental backup in Cassandra just hard-links
all the new SSTable files as they are generated
Hi,
1) I will expect same row key could show up in both sstable2json
output, as this one row exists in both SSTable files, right?
Yes.
2) If so, what is the boundary? Will Cassandra guarantee the column
level as the boundary? What I mean is that for one column's data, it
will be
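To illustrate question 1, here is a hedged sketch (not Cassandra's actual code) of how the read path reconciles the same row key found in two SSTables: the merge is per column, keeping whichever cell has the higher timestamp. The row layout below mimics a simplified sstable2json dump.

```python
# Illustrative only: when the same row key appears in two SSTables,
# merging happens at column granularity by write timestamp.

def merge_rows(row_a, row_b):
    """Each row maps column_name -> (value, timestamp); newest timestamp wins."""
    merged = dict(row_a)
    for name, (value, ts) in row_b.items():
        if name not in merged or ts > merged[name][1]:
            merged[name] = (value, ts)
    return merged

# The same (hypothetical) row key dumped from two SSTable files:
sstable1 = {"name": ("alice", 100), "city": ("tokyo", 100)}
sstable2 = {"city": ("osaka", 200)}  # a newer write touching one column only

print(merge_rows(sstable1, sstable2))
# → {'name': ('alice', 100), 'city': ('osaka', 200)}
```

This is why a row key showing up in both sstable2json outputs is normal: each file holds the columns written while it was the live memtable, and the column is the unit of reconciliation.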
From the Jira,
One possibility is that OPP's getToken could return a hex value when it
fails to decode bytes as UTF-8, instead of throwing an error. With this,
system tables seem to work fine with OPP.
This looks like an option for me to try.
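The fallback described in the Jira comment can be sketched like this (hypothetical Python, not the actual OrderPreservingPartitioner Java code; the function name is made up):

```python
# Sketch of the proposed workaround: decode the row key as UTF-8 if
# possible, otherwise fall back to a hex string rather than raising.

def token_for(key_bytes: bytes) -> str:
    try:
        return key_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return key_bytes.hex()

print(token_for(b"alice"))         # valid UTF-8 → "alice"
print(token_for(b"\xff\xfe\x01"))  # not valid UTF-8 → hex fallback "fffe01"
```

The trade-off is that hex tokens no longer sort consistently with the UTF-8 ones, so this keeps things from crashing rather than preserving a meaningful order for binary keys.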
Thanks!
(2013/08/23 20:44), Vara Kumar wrote:
Hi Victor,
As Andrey said, running cleanup doesn't work as you expect.
The reason I need to clean things up is that I won't need most of my
inserted data the next day.
Deleted objects (columns/records) become purgeable from the SSTable files once
they expire (after gc_grace_seconds).
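The rule above reduces to a simple time comparison. A minimal sketch of the assumed semantics (not Cassandra source; compaction applies this check per tombstone):

```python
# A deleted column's tombstone is only droppable during compaction once
# gc_grace_seconds have elapsed since its deletion time.

GC_GRACE_SECONDS = 864000  # Cassandra's default: 10 days

def is_purgeable(deletion_time: int, now: int,
                 gc_grace: int = GC_GRACE_SECONDS) -> bool:
    return now - deletion_time > gc_grace

deleted_at = 1_000_000
print(is_purgeable(deleted_at, deleted_at + 3600))                  # False: still in grace
print(is_purgeable(deleted_at, deleted_at + GC_GRACE_SECONDS + 1))  # True: grace elapsed
```

The grace period exists so that a node that was down during the delete can still learn about the tombstone before it is purged; dropping it early would let the deleted data "resurrect" via repair.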
Hi,
We found this issue is specific to 1.0.1 through 1.0.8, and was fixed
in 1.0.9.
https://issues.apache.org/jira/browse/CASSANDRA-4023
So by upgrading, we will see reasonable performance no matter how
large our rows are!
Thanks,
Takenori
(2013/02/05 2:29), aaron morton wrote:
Yes,