Hi All,
I am using Cassandra 2.1.3, and we have an application written in .NET that uses the DataStax .NET driver. We are deploying this (the .NET application) to our production server, and we have created a static route from our .NET web server to the Cassandra cluster on port 9042.
Please look at the primary key which you've defined. The second mutation has
exactly the same primary key - it overwrote the row that you previously
had.
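The upsert behaviour described here is easy to reproduce in cqlsh. A minimal sketch (the table and values are illustrative, not from the original mail):

```cql
CREATE TABLE demo.t (key int, cluster int, col int, PRIMARY KEY (key, cluster));
INSERT INTO demo.t (key, cluster, col) VALUES (1, 1, 10);
-- same primary key (key=1, cluster=1): this overwrites the first row
INSERT INTO demo.t (key, cluster, col) VALUES (1, 1, 20);
SELECT * FROM demo.t;  -- one row, with col = 20
```

In CQL, INSERT is an upsert: there is no uniqueness violation, the later write simply wins per column.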
On Fri, Aug 28, 2015 at 1:14 PM, Tommy Stendahl
tommy.stend...@ericsson.com wrote:
Hi,
I did a small test using TTL but I didn't get the result I expected.
Yes, I understand that, but I think this gives a strange behaviour.
Having values only in the primary key columns is perfectly valid, so why
should the primary key be deleted by the TTL on the non-key column?
/Tommy
On 2015-08-28 13:19, Marcin Pietraszek wrote:
Please look at primary key
Hi,
I did a small test using TTL but I didn't get the result I expected.
I did this in cqlsh:
cqlsh> CREATE TABLE foo.bar ( key int, cluster int, col int, PRIMARY KEY (key, cluster) );
cqlsh> INSERT INTO foo.bar (key, cluster) VALUES ( 1, 1 );
cqlsh> SELECT * FROM foo.bar ;
 key | cluster | col
What if you use an update statement in the second query?
--
Jacques-Henri Berthemet
-----Original Message-----
From: Tommy Stendahl [mailto:tommy.stend...@ericsson.com]
Sent: Friday, 28 August 2015 13:34
To: user@cassandra.apache.org
Subject: Re: TTL question
Yes, I understand that but I think
Thx, that was the problem. When I think about it, it makes sense that I
should use UPDATE in this scenario and not INSERT.
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.1.9.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
Hi guys,
I'm running into some issues with ccm and unit tests in java-driver. Here is what I see:
tail -f /tmp/1440780247703-0/test/node5/logs/system.log
INFO [STREAM-IN-/127.0.1.3] 2015-08-28 16:45:06,009 StreamResultFuture.java
(line 220) [Stream #22d9e9f0-4da4-11e5-9409-5d8a0f12fefd] All sessions
On Fri, Aug 28, 2015 at 6:27 AM, Tommy Stendahl tommy.stend...@ericsson.com
wrote:
Thx, that was the problem. When I think about it, it makes sense that I
should use UPDATE in this scenario and not INSERT.
Per Sylvain on an old thread:
INSERT and UPDATE are not totally orthogonal in CQL
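The distinction can be sketched in CQL, using the table from earlier in the thread (a minimal illustration of the behaviour discussed above; the TTL and column values are made up):

```cql
-- INSERT also writes a row marker for the primary key; with USING TTL
-- that marker expires too, so the whole row disappears when the TTL runs out:
INSERT INTO foo.bar (key, cluster, col) VALUES (1, 1, 42) USING TTL 60;

-- UPDATE writes no row marker; only the non-key column expires, so a row
-- previously created without a TTL survives after col has expired:
INSERT INTO foo.bar (key, cluster) VALUES (1, 1);
UPDATE foo.bar USING TTL 60 SET col = 42 WHERE key = 1 AND cluster = 1;
```

This is why switching from INSERT to UPDATE fixed the problem reported above.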
Unfortunately, the addresses/DC of the replicas are not available on the
exception hierarchy within Cassandra.
Fwiw, the DS Java Driver (most native protocol drivers actually) manages
membership dynamically by acting on cluster health events sent back over
the channel by the native transport.
Hi,
We have a Cassandra cluster with vnodes spanning three datacenters.
We take backups of the snapshots from one datacenter.
In a doomsday scenario, we want to restore a downed datacenter with
snapshots from another datacenter. We have the same number of nodes in each
datacenter.
1: We
Do they show up in `nodetool gossipinfo`?
Either way, you probably need to invoke Gossiper.unsafeAssassinateEndpoints
via JMX as described in step 1 here:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html
On Fri, Aug 28, 2015 at 1:32 PM, sai krishnam raju potturi
Thanks Nate. But regarding our situation: of the three datacenters we have
(DC1, DC2 and DC3), we take backups of snapshots on DC1.
If DC3 were to go down, would we not be able to bring up a new DC4 with
snapshots and token ranges from DC1?
On Fri, Aug 28, 2015 at 3:19 PM, Nate McCall
Hi All,
It's been a while since I upgraded and I wanted to know what the steps are to
upgrade from 2.1.0 to 2.1.9. I also want to know if I need to upgrade my Java
database driver.
Thanks,
-Tony
Hi,
We decommissioned nodes in a datacenter a while back. Those nodes keep
showing up in the logs, and are also sometimes marked as UNREACHABLE when
`nodetool describecluster` is run.
However, these nodes do not show up in `nodetool status` or
`nodetool ring`.
Below are a couple of lines
You cannot reuse the identical token ranges. You have to capture membership
information somewhere for each datacenter, and use that token information
when bringing up the replacement DC.
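One way to capture each node's token information is through the system tables (a sketch, assuming CQL access to the nodes of the datacenter being backed up):

```cql
-- system.local holds the tokens owned by the node you are connected to:
SELECT tokens FROM system.local;
-- system.peers lists the tokens of the other nodes in the cluster:
SELECT peer, tokens FROM system.peers;
```

The collected tokens can then be set as initial_token in cassandra.yaml on the corresponding replacement nodes.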
You can find details on this process here:
On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi
pskraj...@gmail.com wrote:
we decommissioned nodes in a datacenter a while back. Those nodes keep
showing up in the logs, and also sometimes marked as UNREACHABLE when
`nodetool describecluster` is run.
What version of