Hmm... Cassandra's fundamental features include fault tolerance, durability, and
replication. Just out of curiosity, why would you want to do a backup?
/Jason
On Sat, Dec 7, 2013 at 3:31 AM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Dec 6, 2013 at 6:41 AM, Amalrik Maia
One typical reason is to protect against human error.
On 7.12.2013, at 11.09, Jason Wee peich...@gmail.com wrote:
Hmm... Cassandra's fundamental features include fault tolerance, durability, and
replication. Just out of curiosity, why would you want to do a backup?
/Jason
On Sat, Dec 7,
If you lose RF + 1 nodes, the data that is replicated only to those nodes is
gone, so it's a good idea to have a recent backup then. Another situation is when
you deploy a bug in the software and start writing crap data to Cassandra.
Replication does not help, and depending on the situation you need to
I have not used tablesnap, but it appears that it does not necessarily depend
upon taking a cassandra snapshot. The example given in their documentation
shows the source folder as /var/lib/cassandra/data/GiantKeyspace, which is
the root of the GiantKeyspace keyspace. But, snapshots operate at the
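For comparison, a keyspace-level snapshot is taken with nodetool; the keyspace name below matches the example from tablesnap's documentation, and the tag is just illustrative:

```
nodetool snapshot -t pre-upgrade GiantKeyspace
```

Note that the resulting hardlinks land under each table's own snapshots/ directory (e.g. .../GiantKeyspace/<table>/snapshots/pre-upgrade/), not at the keyspace root that tablesnap's example watches.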
I finally got the math right for the partition index after tracing through
SSTableWriter.IndexWriter.append(DecoratedKey key, RowIndexEntry
indexEntry). I should also note that I am working off of the source for
1.2.9. Here is the breakdown of what gets written to disk in the append()
call (my
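To make the arithmetic concrete, here is a rough sketch of the per-partition cost in the -Index.db file as I read the 1.2-era serializer: a 2-byte key length, the key bytes, an 8-byte data-file position, a 4-byte promoted-index size, plus any promoted column index. The class and method names here are mine, not Cassandra's.

```java
// Hedged sketch: rough size of one entry in the partition index (-Index.db)
// for Cassandra 1.2, assuming the fields written by
// SSTableWriter.IndexWriter.append(). Illustrative helper, not a Cassandra API.
public class IndexEntrySize {
    // 2-byte length prefix + key bytes, then the serialized RowIndexEntry:
    // 8-byte data-file position + 4-byte promoted-index size + promoted bytes
    // (promotedIndexBytes is 0 when no column index is promoted)
    static long estimate(int keyLengthBytes, int promotedIndexBytes) {
        return 2L + keyLengthBytes + 8L + 4L + promotedIndexBytes;
    }

    public static void main(String[] args) {
        // A 16-byte partition key with no promoted index: 2 + 16 + 8 + 4
        System.out.println(IndexEntrySize.estimate(16, 0)); // 30
    }
}
```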
Nice work John. If you learn any more, please share.
S
On Sat, Dec 7, 2013 at 11:50 AM, John Sanda john.sa...@gmail.com wrote:
I finally got the math right for the partition index after tracing through
SSTableWriter.IndexWriter.append(DecoratedKey key, RowIndexEntry
indexEntry). I should
I have found in (limited) practice that it's fairly hard to estimate
due to compression and compaction behaviour. I think measuring and
extrapolating (with an understanding of the data structures) is the most
effective.
Tim
Sent from my phone
On 6 Dec 2013 20:54, John Sanda
Thanks Nate. I hadn't noticed that and it definitely explains it.
It'd be nice to see that called out much more clearly. As we found out, the
implications can be severe!
-Josh
On Thursday, December 5, 2013 at 11:30 AM, Nate McCall wrote:
Per the 256MB to 5MB change, check the very last
If you are really set on using Cassandra as a cache, I would recommend
disabling durable writes for the keyspace(s) [0]. This will bypass the
commitlog (the flushing/rotation of which may be a good-sized portion of
your performance problems given the number of tables).
[0]
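For reference, durable writes can be toggled per keyspace in CQL; the keyspace name here is a placeholder:

```
ALTER KEYSPACE my_cache WITH durable_writes = false;
```

The trade-off is that memtable data not yet flushed to SSTables is lost if a node crashes, which is usually acceptable for a pure cache.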
I am trying to insert into a Cassandra database using the Datastax Java driver.
But every time I get the below exception at the `prBatchInsert.bind` line-
com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type
for value 1 of CQL type text, expecting class java.lang.String but class
As the comment in your code suggests, you need to cast the array passed to the
bind method as Object[]. This is true anytime you pass an array to a varargs
method.
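A minimal sketch of the language-level behavior behind this, outside the Datastax driver: how Java treats an array passed to a varargs method depends on the cast. The `count` method here is a stand-in for something like `BoundStatement.bind(Object...)`, not the driver's actual API.

```java
// Demonstrates Java's varargs/array interaction, the root of the bind() issue.
public class VarargsDemo {
    // Stand-in for a varargs method such as BoundStatement.bind(Object...)
    static int count(Object... values) {
        return values.length; // how many bound values the callee sees
    }

    public static void main(String[] args) {
        String[] attrs = {"a", "b", "c"};
        System.out.println(count(attrs));          // 3: a lone array is spread into varargs
        System.out.println(count((Object) attrs)); // 1: the cast makes it a single argument
        System.out.println(count("user1", attrs)); // 2: with multiple args the array stays whole
    }
}
```

The last case matches the original code: `bind(userId, theArray)` passes the whole array as value 1, which is why the driver sees an array where it expects a String.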
On Dec 7, 2013 4:01 PM, Techy Teck comptechge...@gmail.com wrote:
I am trying to insert into Cassandra database using Datastax
BoundStatement query = prBatchInsert.bind(userId,
attributes.values().toArray(new String[attributes.size()]))
On 12/07/2013 03:59 PM, Techy Teck wrote:
I am trying to insert into a Cassandra database using the Datastax Java
driver. But every time I get the below exception at
It is definitely unexpected, and resetting such important settings can be
very impactful.
On Saturday, December 7, 2013, Josh Dzielak j...@keen.io wrote:
Thanks Nate. I hadn't noticed that and it definitely explains it.
It'd be nice to see that called out much more clearly. As we found out
the