Hi Amlan,
On 30/03/15 22:12, Amlan Roy wrote:
Hi,
I have added new nodes to an existing cluster and ran the “nodetool cleanup”. I
am getting the following error. Wanted to know if there is any solution to it.
Regards,
Amlan
Error occurred during cleanup
Hi Joss,
On 24/03/15 12:58, joss Earl wrote:
I run into trouble after a while if I delete rows, this happens in both 2.1.3
and 2.0.13, and I encountered the same problem when using either the datastax
java driver or the stock python driver.
The problem is reproducible using the attached python
gc_grace thing will eventually
go away, eg by modifying C* to only drop known repaired tombstones.
Ciao, Duncan.
Thanks.
On Tue, Mar 24, 2015 at 9:38 AM, Duncan Sands <duncan.sa...@gmail.com> wrote:
Hi Roman,
On 24/03/15 17:32, Roman Tkachenko wrote:
Hey guys,
Has anyone seen anything like this behavior or has an explanation for it? If
not, I think I'm gonna file a bug report.
this can happen if repair is run after the tombstone gc_grace_period has
expired. I suggest you increase
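The interaction Duncan describes can be sketched in a few lines. This is a toy model of the rule, not Cassandra's actual compaction code: a tombstone becomes purgeable once gc_grace_seconds have elapsed since the delete, and a repair run only after that point can no longer propagate the delete to a replica that missed it, so the old row comes back.

```python
import time

GC_GRACE_SECONDS = 864000  # Cassandra's default gc_grace_seconds: 10 days


def tombstone_droppable(deletion_time, now, gc_grace_seconds=GC_GRACE_SECONDS):
    """A tombstone may be purged at compaction once gc_grace has elapsed."""
    return now - deletion_time > gc_grace_seconds


# A delete written 11 days ago is already purgeable...
now = time.time()
print(tombstone_droppable(now - 11 * 86400, now))  # True

# ...so a repair run now cannot propagate it any more: a replica that
# missed the delete still holds the live row and will "resurrect" it.
```

Running repair at least once every gc_grace_seconds (or raising gc_grace_seconds) keeps the tombstone alive long enough to reach every replica first.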
On 20/03/15 19:34, Pranay Agarwal wrote:
The cluster is processing something like 12k reads and 2k writes/seconds. The
disks are locally attached and latency is just fine. It's the number of disk
iops that's too high.
Maybe each read is accessing many sstables.
Ciao, Duncan.
Hi Gudmundur, each write and overwrite has a timestamp associated with it (you
can see these timestamps using the WRITETIME function). This timestamp is
provided by the Cassandra server if you don't explicitly supply it yourself
(which, judging by your queries, you are not). If the timestamp
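The reconciliation rule behind those timestamps can be sketched as follows. This is a simplified model of last-write-wins, not the server's implementation; the tuple layout here is made up for illustration.

```python
def reconcile(versions):
    """Cassandra-style last-write-wins: among conflicting versions of a
    cell, the one with the highest write timestamp (microseconds since
    epoch) wins, regardless of arrival order at each replica."""
    return max(versions, key=lambda v: v[0])


# Two writes to the same cell; the later timestamp wins even if a
# replica happened to receive the writes in the opposite order.
a = (1424844362530000, "A")
b = (1424844365062000, "D")
print(reconcile([b, a]))  # b wins: higher timestamp
```

This is why a client whose clock is ahead of the server can unintentionally "shadow" later server-timestamped writes.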
Hi,
On 26/02/15 01:24, java8964 wrote:
...
select * from myTable;
59 | 336 | 1100390163336 | A | [{updated_at:1424844362530, ids:668e5520-bb71-11e4-aecd-00163e56be7c}]
59 | 336 | 1100390163336 | D | [{updated_at:1424844365062, ids:668e5520-bb71-11e4-aecd-00163e56be7c}]
Obviously, the
Hi Anand,
On 08/01/15 02:02, Anand Somani wrote:
Hi,
We have a 3 node cluster (on VMs), e.g. host1, host2, host3. One of the VMs
(host1) rebooted, and when host1 came up it would see the others as down while
the others (host2 and host3) saw it as down. So we restarted host2 and now the
ring seems
Hi Paulo,
On 10/11/14 15:18, Paulo Ricardo Motta Gomes wrote:
Hey,
We've seen a considerable increase in the number of dropped mutations after a
major upgrade from 1.2.18 to 2.0.10. I initially thought it was due to the extra
load incurred by upgradesstables, but the dropped mutations continue
Hi Peter, are you using the hsha RPC server type on this node? If you are, then
it looks like rpc_max_threads threads will be allocated on startup in 2.0.11
while this wasn't the case before. This can exhaust your heap if the value of
rpc_max_threads is too large (eg if you use the default).
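A back-of-envelope calculation shows why eager thread allocation hurts here. The per-thread overhead below is an assumed figure for illustration, not a number from the Cassandra source:

```python
def startup_thread_memory_mb(rpc_max_threads, per_thread_kb=64):
    """Rough estimate of memory claimed if all RPC threads are created
    eagerly at startup. per_thread_kb is an assumption, not a measured
    Cassandra constant."""
    return rpc_max_threads * per_thread_kb / 1024


# With an effectively unbounded rpc_max_threads, even a nominal
# 100,000 threads costs gigabytes at 64 KB apiece:
print(startup_thread_memory_mb(100_000))  # 6250.0 MB
```

Setting rpc_max_threads to a deliberate, modest value in cassandra.yaml avoids the problem when running hsha.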
Hi Tim,
On 28/10/14 15:42, Tim Dunphy wrote:
Hey all,
I'd like to setup datastax opscenter to monitor my cassandra ring. However I'm
using the open source version of 2.1.1. And before I expend any time and effort
in setting this up, I'm wondering if it will work with the open source version?
Hi Kevin, if you are using the latest version of opscenter, then even the
community (= free) edition can do a rolling restart of your cluster. It's
pretty convenient.
Ciao, Duncan.
On 16/09/14 19:44, Kevin Burton wrote:
Say I want to do a rolling restart of Cassandra…
I can’t just restart
Hi Clint,
INFO [StorageServiceShutdownHook] 2014-08-05 19:14:51,903
ThriftServer.java (line 141) Stop listening to thrift clients
INFO [StorageServiceShutdownHook] 2014-08-05 19:14:51,920 Server.java
(line 182) Stop listening for CQL clients
INFO [StorageServiceShutdownHook] 2014-08-05
Hi Clint, is time correctly synchronized between your nodes?
Ciao, Duncan.
On 02/08/14 02:12, Clint Kelly wrote:
BTW a few other details, sorry for omitting these:
* We are using version 2.0.4 of the Java driver
* We are running against Cassandra 2.0.9
* I tried messing around with the
Hi Akshay,
On 29/07/14 09:14, Akshay Ballarpure wrote:
Yes,
I have created keyspaces, but still I am getting the error.
cqlsh:sample_new> DESCRIBE KEYSPACES;
system sample mykeyspace test *sample_new* system_traces
[root@CSL-simulation conf]# ../bin/sstableloader
Hi Keith,
On 25/07/14 14:43, Keith Wright wrote:
Answers to your questions below but in the end I believe the root issue here is
that LCS is clearly not compacting away as it should resulting in reads across
many SSTables which as you noted is “fishy”. I’m considering filing a JIRA for
this,
Hi Diane,
On 17/07/14 06:19, Diane Griffith wrote:
We have been struggling to prove out linear read performance with our Cassandra
configuration, i.e. that it scales horizontally. Wondering if anyone has any
suggestions for what minimal configuration and approach to use to demonstrate
this.
We
Hi Simon,
On 20/06/14 10:18, Simon Chemouil wrote:
Hi,
When I am sending BLOBs _below_ the max query size (blob size=0.6MB), on
Cassandra 2.0, it works fine, but on 2.1-rc1 I get the following error
within the Cassandra server (from the logs) and the query just dies:
WARN
Hi Gaurav, a schema versioning bug was fixed in 2.0.7.
Best wishes, Duncan.
On 12/05/14 21:31, Gaurav Sehgal wrote:
We have recently started seeing a lot of Schema Disagreement errors. We are
using Cassandra 2.0.6 with Oracle Java 1.7. I went through the Cassandra FAQ and
followed the below
Hi Jan,
On 02/05/14 09:29, Jan Kesten wrote:
Hello together,
I'm running a Cassandra cluster with 2.0.6 and 6 nodes. As far as I know,
routine repairs are still mandatory for handling tombstones, even though I
noticed that the cluster now does a snapshot repair by default.
Now my cluster is running
Hi,
On 25/03/14 19:30, Robert Coli wrote:
On Tue, Mar 25, 2014 at 5:36 AM, Batranut Bogdan <batra...@yahoo.com> wrote:
I am running 2.0.6 and I use /etc/init.d/cassandra start / stop . Also
before stopping I do :
nodetool disablegossip
nodetool
Hi user 01, in older versions of the datastax Debian packages startup
information was written to output.log but that is no longer the case (and hasn't
been for a while): it is normal that you have no output.log.
Ciao, Duncan.
On 24/03/14 13:26, user 01 wrote:
Hints please, anyone ?
On Mon,
Hi Aleksander, this may be related to CASSANDRA-6799 and CASSANDRA-6700 (if it
is caused by CASSANDRA-6700 then you are in luck: it is fixed in 2.0.6).
Best wishes, Duncan.
On 11/03/14 13:30, olek.stas...@gmail.com wrote:
Hi All,
I've faced an issue with cassandra 2.0.5.
I've 6 node cluster
On 11/03/14 14:00, olek.stas...@gmail.com wrote:
I plan to install 2.0.6 as soon as it will be available in datastax rpm repo.
But how to deal with schema inconsistency on such scale?
Does it get better if you restart all the nodes? In my case restarting just
some of the nodes didn't help,
Hi user 01,
On 10/03/14 13:11, user 01 wrote:
I installed DSC 2.0.5 on ubuntu 12.04 with Oracle JRE 7 but dsc 2.0.5 does not
start after installation. When I check the running status..
$ sudo service cassandra status
it says
* could not access pidfile for Cassandra
no other
Hi Joel,
On 07/03/14 15:22, Joel Samuelsson wrote:
I try to fetch all the row keys from a column family (there should only be a
couple of hundred in that CF) in several different ways but I get timeouts
whichever way I try:
did you check the node logs for exceptions? You can get this kind of
Hi Graham,
On 21/02/14 07:54, graham sanderson wrote:
Note also; that reading at ONE there will be no read repair, since the
coordinator does not know that another replica has stale data (remember at ONE,
basically only one node is asked for the answer).
I don't think this is right. My
Hi Hari,
On 04/02/14 10:38, Hari Rajendhran wrote:
Dear Team,
I have a 3 node Cassandra 1.1.12 open-source version installed in our lab. The
db files for column families are getting created on 2 machines, while on one of
the machines the data directory
is empty. I have tried with the following
Hi Ilya,
On 03/02/14 10:49, Ilya Sviridov wrote:
Hello Sundeep
It seems that in both configs of your nodes you are using the same hostname as
the seeds value.
You have to enumerate all nodes in your cluster.
not so! If all nodes N1, N2, ... use the same node N0 as a seed, then by
gossiping
Hi Donald, which driver are you using? With the datastax python driver you need
to use the DCAwareRoundRobinPolicy for the load balancing policy if you want the
driver to distinguish between your data centres; otherwise, by default it
round-robins requests amongst all nodes regardless of
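The idea behind DC-aware balancing can be sketched without the driver itself. This is a toy model of the policy's behaviour, not the datastax driver's implementation; the host and data-centre names are made up:

```python
from itertools import cycle


class DCAwareRoundRobinSketch:
    """Toy sketch: restrict queries to hosts in the local data centre
    and round-robin amongst them, which is the core of what the real
    DCAwareRoundRobinPolicy does for you."""

    def __init__(self, hosts, local_dc):
        # hosts: list of (name, dc) pairs
        local = [name for name, dc in hosts if dc == local_dc]
        self._ring = cycle(local)

    def next_host(self):
        return next(self._ring)


policy = DCAwareRoundRobinSketch(
    [("n1", "dc1"), ("n2", "dc1"), ("n3", "dc2")], local_dc="dc1")
print([policy.next_host() for _ in range(4)])  # ['n1', 'n2', 'n1', 'n2']
```

With the plain round-robin default, "n3" in the remote data centre would receive a share of the requests too, which is usually what causes the surprise cross-DC latency.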