We're considering switching from Redis to Cassandra to store short-lived
(~1 hour) session tokens, in order to reduce the number of data storage
engines we have to manage.
Can anyone foresee any problems with the following approach:
1) Use the TTL functionality in Cassandra to remove old
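For concreteness, a minimal CQL sketch of the kind of write-once, TTL-expired
table this approach implies (the table and column names are illustrative
assumptions, not from the original message):

    CREATE TABLE session_tokens (
        token_id text PRIMARY KEY,
        user_id text,
        data blob
    );

    -- Written once at session creation; the row expires automatically
    -- one hour later, so no explicit delete is ever issued.
    INSERT INTO session_tokens (token_id, user_id, data)
    VALUES ('3f9a2c', 'user42', 0x00)
    USING TTL 3600;

A read is then a single-partition lookup by token.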
Personally, I believe you do not have to do these steps just to perform
the restart. I know the node will start faster if it is drained before
shutdown, but in my experience these steps make the overall restart (the
stop plus start phases combined) slightly longer. So if it is really
about
This sounds like a good use case for
http://www.datastax.com/dev/blog/datetieredcompactionstrategy
On Dec 1, 2014, at 3:07 AM, Phil Wise p...@advancedtelematic.com wrote:
We're considering switching from using Redis to Cassandra
I don't think DateTiered will help here, since there's no clustering key
defined. This is a pretty straightforward workload; I've done something
similar.
Are you overwriting the session on every request? Or just writing it once?
On Mon Dec 01 2014 at 6:45:14 AM Matt Brown m...@mattnworb.com
The session will be written once at create time, and never modified
after that. Will that affect things?
Thank you
-Phil
On 01.12.2014 15:58, Jonathan Haddad wrote:
I don't think DateTiered will help here, since there's no clustering key
Since the session tokens are random, perhaps computing a shard from each
one and using it as the partition key would be a good idea.
I would also use v1 UUIDs (timeuuids) to get time ordering.
With such a small amount of data, only a few shards would be needed.
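A sketch of that idea, assuming the client derives the shard by hashing the
token modulo a small fixed count (e.g. 16) and that the token itself is a
v1 UUID; all names here are hypothetical:

    CREATE TABLE session_tokens_sharded (
        shard int,           -- e.g. hash(token) % 16, computed client-side
        token_id timeuuid,   -- v1 UUID, so rows cluster in time order
        user_id text,
        data blob,
        PRIMARY KEY (shard, token_id)
    );

    -- Lookup as a prepared statement: the client recomputes the shard
    -- from the token, then reads a single row. TTL still applies on write.
    SELECT user_id, data
    FROM session_tokens_sharded
    WHERE shard = ? AND token_id = ?;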
On Mon, Dec 1, 2014 at 10:08 AM, Phil Wise
On Sun, Nov 30, 2014 at 9:02 PM, Kevin Burton bur...@spinn3r.com wrote:
It has the following commands, which make sense:
root@cssa01:~# nodetool -h cssa01.michalski.im disablegossip
root@cssa01:~# nodetool -h cssa01.michalski.im disablethrift
root@cssa01:~# nodetool -h cssa01.michalski.im
On Fri, Nov 28, 2014 at 6:46 AM, Reynald BOURTEMBOURG
reynald.bourtembo...@esrf.fr wrote:
We have a three-node cluster running Cassandra 2.1.2 on Debian 7.
More than 2 hours later, I executed nodetool repair on one of the nodes
(cass2).
It started to repair the keyspace we created
On Thu, Nov 27, 2014 at 1:24 AM, Spico Florin spicoflo...@gmail.com wrote:
I have another question. What about the following scenario: two
Cassandra instances installed on different cloud providers (EC2, Flexiant)?
How do you synchronize them? Can I use some internal tool, or do I have
to
On Sun, Nov 30, 2014 at 10:15 PM, Neha Trivedi nehajtriv...@gmail.com
wrote:
I need to add a new node and remove an existing node.
What is the purpose of this action? Is the old node defective, and being
replaced 1:1 with the new node?
=Rob
Here's a snitch we use for this situation: it uses a property file if one
exists, but falls back to EC2 autodiscovery if it is missing.
https://github.com/barchart/cassandra-plugins/blob/master/src/main/java/com/barchart/cassandra/plugins/snitch/GossipingPropertyFileWithEC2FallbackSnitch.java
On
I don't know what the advantage would be of using this sharding system. I
would recommend just going with a simple k-v table as the OP suggested.
On Mon Dec 01 2014 at 7:18:51 AM Laing, Michael michael.la...@nytimes.com
wrote:
Since the session tokens are random, perhaps computing a shard from
Sharding lets you use the row cache effectively in 2.1.
But like everything, one should test :)
On Mon, Dec 1, 2014 at 1:56 PM, Jonathan Haddad j...@jonhaddad.com wrote:
I don't know what the advantage would be of using this sharding system. I
would recommend just going with a simple k-v
On Thu, Nov 27, 2014 at 2:38 AM, André Cruz andre.c...@co.sapo.pt wrote:
On 26 Nov 2014, at 19:07, Robert Coli rc...@eventbrite.com wrote:
Yes. Do you know if 5748 was created as a result of compaction or via a
flush from a memtable?
It was the result of a compaction:
Ok, so in theory
On Thu, Nov 27, 2014 at 2:34 AM, Jens Rantil jens.ran...@tink.se wrote:
Late answer; You can find my backup script here:
https://gist.github.com/JensRantil/a8150e998250edfcd1a3
Why not use the much more robustly designed and maintained community-based
project, tablesnap?
On Sun, Nov 30, 2014 at 8:44 PM, Dong Dai daidon...@gmail.com wrote:
The question is: can I expect better performance using the BulkLoader
this way, compared with using batch inserts?
You just asked if writing once (via streaming) is likely to be
significantly more efficient than writing
On Fri, Nov 28, 2014 at 12:55 PM, Paulo Ricardo Motta Gomes
paulo.mo...@chaordicsystems.com wrote:
We restart the whole cluster every 1 or 2 months, to avoid machines
getting into this crazy state. We tried tuning GC size and parameters, and
different Cassandra versions (1.1, 1.2, 2.0), but this
Thanks Rob,
I guess you mean that BulkLoader works by streaming whole SSTables to remote
servers, so it is faster?
The documentation says that all the rows in the SSTable will be inserted into
the new cluster, conforming to the replication strategy of that cluster. This
gives me a feeling
Is there a page explaining what happens on the server side when using
SSTableLoader?
I'm seeking answers to the following questions:
1. What about the existing data in the table? From my test, the data
in the sstable files is applied on top of the existing data. Am I right?
- The new
On Mon, Dec 1, 2014 at 12:10 PM, Dong Dai daidon...@gmail.com wrote:
I guess you mean that BulkLoader is done by streaming whole SSTable to
remote servers, so it is faster?
Well, it's not exactly the whole SSTable, but yes, that's the sort of
statement I'm making. [1]
The documentation says
I'm running Cassandra 2.1.0.
I was attempting to drop two keyspaces via cqlsh and encountered an error
in the CLI, as well as what appeared to be the loss of all my keyspaces.
Below is the output from my cqlsh session:
$
No, the old node is not defective. We just want to separate out that server
for testing, and add a new node. (The present cluster has two nodes and
RF=2.)
Thanks
On Tue, Dec 2, 2014 at 12:04 AM, Robert Coli rc...@eventbrite.com wrote:
On Sun, Nov 30, 2014 at 10:15 PM, Neha Trivedi
Hi Tim,
We have happy users of SPM for Cassandra who use it for monitoring,
alerting, and anomaly detection on their Cassandra clusters. SPM agents
phone home by definition if you are using the Cloud version; the
On-Premise / AWS AMI versions do not phone home.
See:
Hi Rob, do you have any recommended documentation describing the
configuration of the JVM heap and permanent generation? We are stuck in
this same situation too. :(
Jason
On Tue, Dec 2, 2014 at 3:42 AM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Nov 28, 2014 at 12:55 PM, Paulo Ricardo Motta
On Mon, Dec 1, 2014 at 8:39 PM, Robert Coli rc...@eventbrite.com wrote:
Why not use the much more robustly designed and maintained community based
project, tablesnap?
For two reasons:
- Because I am tired of the deployment model of Python apps which
require me to set up virtual