What you have described below should work just fine.
When I was replacing nodes in my ring, I ended up creating a new datacenter
with the new nodes, but I was upgrading to vnodes too at the time.
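For readers who want the general shape of that approach, here is a rough sketch of the usual add-a-new-datacenter flow (keyspace and DC names are placeholders, and this is not necessarily exactly what was done here):

    -- extend replication to the new DC (NetworkTopologyStrategy assumed)
    ALTER KEYSPACE my_ks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

    # on each new node in DC2, stream existing data from the old DC
    nodetool rebuild -- DC1

    # once clients point at DC2, drop DC1 from replication and
    # decommission the old nodes one at a time
    nodetool decommission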
-Arindam
From: nash [mailto:nas...@gmail.com]
Sent: Monday, April 28, 2014 10:52 PM
To:
Thank you Ben for the links
On Tue, Apr 29, 2014 at 3:40 AM, Ben Bromhead b...@instaclustr.com wrote:
Some imbalance is expected and considered normal:
See http://wiki.apache.org/cassandra/VirtualNodes/Balance
As well as
https://issues.apache.org/jira/browse/CASSANDRA-7032
Ben
Hello,
I am running mostly Cassandra 1.2 on my clusters, and wanted to migrate my
current Snappy-compressed CFs to LZ4.
Changing the schema is easy, my questions are:
1. Will the previous Snappy-compressed tables still be readable?
2. Will upgradesstables convert my current CFs from Snappy to LZ4?
Looks like it will be the same as with Java 7... Cassandra has been
compatible with that version for a long time, but there was no official
validation, and Datastax recommended for a long time (still now?) to
use Java 6.
The best thing would be to use older versions. If for some reason you
Hi, I would say:
1 - Yes
2 - Yes (No major compaction needed, upgradesstables should do the job)
As always, in case of doubt, test it. In this case you can even do it on a
local machine.
Alain
2014-04-29 9:57 GMT+02:00 Katriel Traum katr...@google.com:
Hello,
I am running
Hi Boying,
From Datastax documentation:
http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architectureGossipAbout_c.html
The seed node designation has no purpose other than bootstrapping the
gossip process for new nodes joining the cluster. Seed nodes are not a
single
Datastax recommended for a long time (still now?) to use Java 6
Java 6 is recommended for version 1.2
Java 7 is required for version 2.0
Mark
On Tue, Apr 29, 2014 at 10:19 AM, Alain RODRIGUEZ arodr...@gmail.comwrote:
Looks like it will be the same as with Java 7... Cassandra has been
Thanks for the update, Mark.
2014-04-29 11:35 GMT+02:00 Mark Reddy mark.re...@boxever.com:
Datastax recommended for a long time (still now?) to use Java 6
Java 6 is recommended for version 1.2
Java 7 is required for version 2.0
Mark
On Tue, Apr 29, 2014 at 10:19 AM, Alain
Hello,
IIRC, writing a new value to a row will invalidate the row cache for that
value. The row cache is only populated after a read operation.
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_configuring_caches_c.html?scroll=concept_ds_n35_nnr_ck
Cassandra provides
Hi,
When we look at the wiki, it says:
Cassandra requires the most stable version of Java 7 you can deploy, preferably
the Oracle/Sun JVM.
And in chapter 4 we see that they are using Cassandra 1.2
Connected to Test Cluster at localhost:9160.
[cqlsh 2.3.0 | Cassandra 1.2.2 | CQL spec 3.0.0 |
Thanks for the answer.
I've tested it myself now, and indeed it works.
The only note I have is that you have to run nodetool upgradesstables -a, so
that all sstables are updated.
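Put together, the migration amounts to roughly the following (keyspace/table names are placeholders):

    ALTER TABLE my_ks.my_cf
      WITH compression = {'sstable_compression': 'LZ4Compressor'};

    # rewrite the existing sstables with the new compressor; -a forces a
    # rewrite even for sstables already on the current sstable version
    nodetool upgradesstables -a my_ks my_cf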
Katriel
On Tue, Apr 29, 2014 at 12:22 PM, Alain RODRIGUEZ arodr...@gmail.comwrote:
Hi, I would say:
1 - Yes
2 - Yes
Hi Rob,
I know it has been a while but we managed to perform a point-in-time recovery.
I am not really sure what the problem was, but I guess it had to do with not
reading the instructions exactly (using GMT and not the local time zone, copying
archive logs to the wrong place, etc.).
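For anyone retrying this, the settings involved live in conf/commitlog_archiving.properties, and restore_point_in_time is interpreted as GMT; a sketch with placeholder paths and timestamp:

    archive_command=cp %path /backup/commitlog_archive/%name
    restore_command=cp -f %from %to
    restore_directories=/backup/commitlog_archive
    # yyyy:MM:dd HH:mm:ss, in GMT
    restore_point_in_time=2014:04:28 20:00:00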
So everything should work as
Thanks everyone
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Tuesday, April 29, 2014 3:47 AM
To: user@cassandra.apache.org
Subject: Re: JDK 8
Thanks for the update, Mark.
2014-04-29 11:35 GMT+02:00 Mark Reddy
mark.re...@boxever.commailto:mark.re...@boxever.com:
Datastax recommended
hi,
writing a new value to a row will invalidate the row cache for that
value
Do you mean the entire row will be invalidated, or just the column that was
being updated?
I was reading through
http://planetcassandra.org/blog/post/cassandra-11-tuning-for-frequent-column-updates/
that seems to
If Cassandra invalidates the row cache upon a single column update to that
row, that seems very inefficient.
Yes. For the most recent direction, take a look at:
https://issues.apache.org/jira/browse/CASSANDRA-5357
--
-
Nate McCall
Austin, TX
@zznate
Co-Founder Sr.
On Tue, Apr 29, 2014 at 9:30 AM, Jimmy Lin y2klyf+w...@gmail.com wrote:
If Cassandra invalidates the row cache upon a single column update to that
row, that seems very inefficient.
On Mon, Apr 28, 2014 at 10:52 PM, nash nas...@gmail.com wrote:
I have a new set of nodes and I'd like to migrate my entire cluster onto
them without any downtime. I believe that I can launch the new cluster and
have them join the ring and then use nodetool to decommission the old nodes
one at
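The per-node mechanics of that approach are roughly as follows (assuming the new nodes share the cluster name, snitch and seed list of the existing ring):

    # on each new node: set cluster_name, seeds and listen_address in
    # cassandra.yaml, then start it so it bootstraps into the ring
    nodetool status      # wait for the new node to show UN (Up/Normal)

    # then, on each old node in turn, stream its data away and leave the ring
    nodetool decommission
    nodetool netstats    # watch the streaming progress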
On Tue, Apr 29, 2014 at 7:46 AM, Dennis Schwan dennis.sch...@1und1.dewrote:
I know it has been a while but we managed to perform a point-in-time
recovery.
I am not really sure what the problem was, but I guess it had to do with
not reading the instructions exactly (using GMT and not the local time zone, copying
On Mon, Apr 28, 2014 at 6:57 PM, Lu, Boying boying...@emc.com wrote:
I wonder if I can change the seeds list at runtime, i.e. without changing
the yaml file and restarting the DB service?
There are dynamic seed providers, Priam for example uses one.
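For context, the seed list normally comes from the seed_provider block in cassandra.yaml, and a dynamic provider is simply another class implementing org.apache.cassandra.locator.SeedProvider plugged in there (the addresses below are only illustrative):

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1,10.0.0.2"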
Just a heads up--this is only available in the latest version of Cassandra
2.0.6, and is not available in Cassandra 1.2.
On Mon, Apr 28, 2014 at 12:57 PM, Donald Smith
donald.sm...@audiencescience.com wrote:
CQL lets you specify a default TTL per column family/table: and
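The example is cut off above; a default table TTL in 2.0 CQL looks roughly like this (table and column names are placeholders):

    CREATE TABLE events (
        id timeuuid PRIMARY KEY,
        payload text
    ) WITH default_time_to_live = 86400;   -- seconds; 0 disables the default

    -- or on an existing table
    ALTER TABLE events WITH default_time_to_live = 86400;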
I was able to solve the issue. There was another layer of compression
happening in the DAO that was using java.util.zip.Deflater/Inflater, along
with the snappy compression defined on the CF. The solution was to extend
CassandraStorage and override the getNext() method. The new implementation
Are these issues 'resolved' only in 2.0 or later releases?
What about the 1.2 version?
On Apr 29, 2014, at 9:40 AM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Apr 29, 2014 at 9:30 AM, Jimmy Lin y2klyf+w...@gmail.com wrote:
If Cassandra invalidates the row cache upon a single column update
On Tue, Apr 29, 2014 at 1:53 PM, Brian Lam y2k...@gmail.com wrote:
Are these issues 'resolved' only in 2.0 or later releases?
What about the 1.2 version?
As I understand it:
1.2 version has the on-heap row cache and off-heap row cache. It does not
have the new partition cache.
2.0 version has
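For the 1.2-style row cache specifically, it has to be enabled both globally and per column family; a minimal sketch (the size and names are arbitrary):

    # cassandra.yaml: global row cache capacity (0 disables the row cache)
    row_cache_size_in_mb: 200

    -- per table, in 1.2 CQL3
    ALTER TABLE my_ks.my_cf WITH caching = 'rows_only';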
Hi,
We're planning to deploy 3 Cassandra rings: one in our datacenter (with
more nodes/power) and two others in EC2. We don't have enough public IPs to
assign to each individual node in our datacenter, so I wonder how we could
connect the cluster together?
Has anyone tried this before, and if
Hi there,
We are working on an API service that receives arbitrary JSON data; the data
can be nested JSON or just normal JSON. We started using Astyanax, but we
noticed we couldn't use CQL3 to target the arbitrary columns; in CQL3 those
arbitrary columns aren't available. Ad-hoc query
Hi Elder.
Welcome.
We hope to help you.
On Tue, Apr 29, 2014 at 9:28 PM, Ebot Tabi ebot.t...@gmail.com wrote:
Hi there,
We are working on an API service that receives arbitrary JSON data; the data
can be nested JSON or just normal JSON. We started using Astyanax, but we
noticed we
I am also hoping to get help on how to handle such a scenario; the reason
we chose Cassandra was its performance for heavy writes.
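One common way to keep the arbitrary fields addressable from CQL3 is to model them as rows under a clustering column rather than as dynamic columns; a rough sketch with placeholder names:

    CREATE TABLE api_data (
        doc_id  text,
        field   text,   -- arbitrary JSON key (a flattened path for nested data)
        value   text,   -- the JSON value, stored as text
        PRIMARY KEY (doc_id, field)
    );

    -- every arbitrary field is now reachable from CQL3
    SELECT field, value FROM api_data WHERE doc_id = 'abc123';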
On Wed, Apr 30, 2014 at 12:38 AM, Otávio Gonçalves de Santana
otaviopolianasant...@gmail.com wrote:
Hi Elder.
Welcome.
We hope to help you.
On Tue, Apr 29,
You will need to have the nodes running on AWS in a VPC.
You can then configure a VPN to work with your VPC, see
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html. Also, as you
will have multiple VPN connections (from your private DC and the other AWS
region), AWS CloudHub will
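Once the VPN/VPC routing is in place, the node-side configuration is mostly about advertising reachable addresses and using a DC-aware snitch; a sketch with placeholder addresses and names:

    # cassandra.yaml, on each node
    listen_address: 10.0.1.15        # private IP, reachable over the VPN
    broadcast_address: 10.0.1.15     # same as listen_address if there is no NAT
    endpoint_snitch: GossipingPropertyFileSnitch

    # cassandra-rackdc.properties
    dc=EC2_EAST
    rack=RAC1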
Thanks all for the pointers.
Let me see if I can put the sequence of events together:
1.2
People misunderstand/misuse the row cache, in that Cassandra caches the entire
row of data even if you are only looking for a small subset of the row data,
e.g.
select single_column from a_wide_row_table
will
Hi,
We have enabled Cassandra client authentication and have set a new user/pass
per keyspace. As I understand it, the user/pass is stored in the system table;
do we need to change the replication factor of the system table so this data
is replicated? The cluster is going to be multi-DC.
Thanks
Anand
Correction: credentials are stored in the system_auth keyspace, so is it
ok/recommended to change the replication factor of that keyspace?
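For what it's worth, the change itself would look roughly like this for a multi-DC cluster (DC names and counts are placeholders), followed by a repair of the keyspace on each node:

    ALTER KEYSPACE system_auth WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

    nodetool repair system_auth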
On Tue, Apr 29, 2014 at 10:41 PM, Anand Somani meatfor...@gmail.com wrote:
Hi
We have enabled Cassandra client authentication and have set a new user/pass
per