Hi all,
I have a problem with my client on Cassandra 1.2.18 that I did not have on Cassandra 1.0.11.
I create a big row with a lot of super-columns.
When writing that row using batch_mutate, I receive the following error in
my client:
A connection attempt failed because the connected party did
After a little testing, I see that the client times out because the server takes longer to respond.
This large batch_mutate took 5 seconds in Cassandra 1.0.11
The same batch_mutate takes three minutes in Cassandra 1.2.18
Is that normal!?
Thanks,
Rene
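For the timeout itself, one client-side workaround is to raise the socket timeout above the new call duration. A minimal sketch using a raw stdlib socket; the 300-second value and port 9160 are assumptions, and a real Thrift client would wrap this transport:

```python
import socket

# Hypothetical sketch: raise the client-side socket timeout so a
# batch_mutate that now takes minutes is not aborted by the client.
THRIFT_PORT = 9160       # default Thrift RPC port
TIMEOUT_SECONDS = 300.0  # assumed value, above the observed three minutes

def make_client_socket():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(TIMEOUT_SECONDS)  # applies to connect/send/recv
    return sock

sock = make_client_socket()
print(sock.gettimeout())  # 300.0
```

This only hides the slowdown, of course; it does not explain it.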
2015-02-04 18:29 GMT+01:00 Rene Kochen
Hi all,
I have a question about communication between two data-centers, both with
replication-factor three.
If I read data using local_quorum from datacenter1, I see that digest
requests are sent to datacenter2. This is for read-repair I guess. How can
I prevent this from happening? Setting
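One knob for this (a sketch, not a confirmed fix: cassandra-cli syntax, with the Customers column family as a placeholder; dclocal_read_repair_chance assumes Cassandra 1.1 or later) is to disable global read repair and keep only the data-center-local variant:

```
update column family Customers
    with read_repair_chance = 0.0
    and dclocal_read_repair_chance = 0.1;
```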
Hi all,
I want to add a data-center to an existing single data-center cluster.
First I have to make the existing cluster multi data-center compatible.
The existing cluster is a 12 node cluster with:
- Replication factor = 3
- Placement strategy = SimpleStrategy
- Endpoint snitch = SimpleSnitch
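The keyspace side of that change might look like this in cassandra-cli (a sketch; the Traxis keyspace name is taken from logs elsewhere in these threads, and the datacenter1 name must match whatever the new snitch reports):

```
update keyspace Traxis
    with placement_strategy = 'NetworkTopologyStrategy'
    and strategy_options = {datacenter1 : 3};
```

The snitch itself is changed in cassandra.yaml on each node, and the logical topology has to stay the same or repairs are needed afterwards.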
data-center and rack.
Thanks again!
Rene
2014-08-05 20:05 GMT+02:00 Robert Coli rc...@eventbrite.com:
On Tue, Aug 5, 2014 at 3:52 AM, Rene Kochen rene.koc...@schange.com
wrote:
Do I have to run full repairs after this change? Because the yaml file
states: IF YOU CHANGE THE SNITCH AFTER
Quick question.
I am using Cassandra 1.0.11
When is nodetool cfhistograms output reset? I know that data is collected
during read requests. But I am wondering if it is data since the beginning
(start of Cassandra) or if it is reset periodically?
Thanks!
Rene
If I look at Read Latency, I see that the values are indeed reset between two runs of cfhistograms. However, Row Size and Column Count keep their values.
When are they re-evaluated?
Thanks!
Rene
2013/10/1 Richard Low rich...@wentnet.com
On 1 October 2013 16:21, Rene Kochen rene.koc...@schange.com
Thanks!
Does that mean that cfhistograms scans all Statistics.db files in order to
populate the Row Size and Column Count values?
2013/10/1 Tyler Hobbs ty...@datastax.com
On Tue, Oct 1, 2013 at 2:34 PM, Rene Kochen
rene.koc...@emea.schange.comwrote:
However, Row Size and Column Count
Nice! That explains it.
2013/9/19 Robert Coli rc...@eventbrite.com
On Thu, Sep 19, 2013 at 3:08 AM, Rene Kochen rene.koc...@schange.comwrote:
And how does cfstats track the maximum size? What does Compacted mean
in Compacted row maximum size?
That maximum size is the largest row that I
Hi all,
I use Cassandra 1.0.11
If I do cfstats for a particular column family, I see a Compacted row
maximum size of 43388628
However, when I do a cfhistograms I do not see such a big row in the Row
Size column. The biggest row there is 126934.
Can someone explain this?
Thanks!
Rene
to the read request, while
cfstats tracks the largest row stored on given node.
M.
On 19.09.2013 11:31, Rene Kochen wrote:
Hi all,
I use Cassandra 1.0.11
If I do cfstats for a particular column family, I see a Compacted row
maximum size of 43388628
However, when I do a cfhistograms I do
That is indeed how I read it. The maximum size is 3 rows at an offset of 126934, while cfstats reports 43388628.
Thanks,
Rene
2013/9/19 Richard Low rich...@wentnet.com
On 19 September 2013 10:31, Rene Kochen rene.koc...@schange.com wrote:
I use Cassandra 1.0.11
If I do cfstats
Hi All,
I have the following situation:
- Cassandra 1.0.11
- A 6 node cluster
- Random partitioner
- Tokens are balanced (according to node-tool)
- Data-load is balanced (according to node-tool)
I have a customers column-family with 100 customers. I also have a test
client which requests
reside on the first node (actually
the first three nodes because of the replication factor of three).
Thanks,
Rene
) will take too long. I expect
the data to grow significantly.
It makes more sense to use the second cluster as a hot standby (and make
snapshots on both clusters).
Rene
2013/3/16 Aaron Turner synfina...@gmail.com
On Fri, Mar 15, 2013 at 10:35 AM, Rene Kochen
rene.koc...@emea.schange.com wrote
if the data is going from one data centre to
another, unless you have a high bandwidth connection between data centres
or you have a small amount of data.
Jabbar Azam
On 14 Mar 2013 14:31, Rene Kochen rene.koc...@schange.com wrote:
Hi all,
Is the following a good backup solution?
Create two
I have a four node EC2 cluster.
Three machines show via nodetool ring that all machines are UP.
One machine shows via nodetool ring that one machine is DOWN.
If I take a closer look at the machine reporting the other machine as down, I
see the following:
- StorageService.UnreachableNodes =
shows as down, can you post the output from nodetool gossipinfo
from 9.109 and the node that sees 9.109 as down?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 18/10/2012, at 8:45 PM, Rene Kochen rene.koc...@schange.com wrote:
I
Is this a bug? I'm using Cassandra 1.0.11:
INFO 13:45:43,750 Compacting
[SSTableReader(path='d:\data\Traxis\Parameters-hd-47-Data.db'),
SSTableReader(path='d:\data\Traxis\Parameters-hd-44-Data.db'),
SSTableReader(path='d:\data\Traxis\Parameters-hd-46-Data.db'),
/2012, at 6:32 AM, Rene Kochen rene.koc...@schange.com wrote:
Hi all,
Does minor compaction delete expired column-tombstones when the row is
also present in another table which is not subject to the minor
compaction?
Example:
Say there are 5 SStables:
- Customers_0 (10 MB)
- Customers_1
down
compaction. The harder thing is to tune the JVM memory settings (the
defaults often do a good job).
Hope that helps.
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 14/09/2012, at 10:41 PM, Rene Kochen rene.koc...@schange.com wrote
AM, Rene Kochen rene.koc...@schange.com wrote:
Hi
I have a cluster of 7 nodes:
- Windows Server 2008
- Cassandra 0.7.10
- The nodes are identical (hardware, configuration and client request load)
- Standard batch file with 8GB heap
- I use disk_access_mode = standard
- Random partitioner
Hi all,
Does minor compaction delete expired column-tombstones when the row is
also present in another table which is not subject to the minor
compaction?
Example:
Say there are 5 SStables:
- Customers_0 (10 MB)
- Customers_1 (10 MB)
- Customers_2 (10 MB)
- Customers_3 (10 MB)
- Customers_4
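A toy sketch of the purge rule in play here (my reading of the behavior being asked about, not Cassandra's actual code: a column tombstone past gc_grace can only be dropped if the row key does not also appear in an sstable outside the compaction set):

```python
def can_purge(row_key, compacting, all_sstables):
    """True if no sstable outside the compaction set contains row_key."""
    outside = [s for s in all_sstables if s not in compacting]
    return all(row_key not in s for s in outside)

# Toy sstables as sets of row keys (hypothetical data).
sstables = [{"alice", "bob"}, {"alice"}, {"alice", "carol"}]
compacting = sstables[:2]   # minor compaction over the first two only

print(can_purge("alice", compacting, sstables))  # False: the row also lives elsewhere
print(can_purge("bob", compacting, sstables))    # True: key only in the compacted set
```

On that reading, a tombstone for a row that also exists in Customers_4 would survive a minor compaction of the other tables.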
support Windows as our production platform.
Regards,
Oleg
On 2012-09-10 09:00:02 +, Rene Kochen said:
Hi all,
On my test cluster I have three Windows Server 2008 R2 machines running
Cassandra 1.0.11
If I use memory-mapped I/O (the default), then the nodes freeze after a
while. Paging
*From:* Rene Kochen [mailto:rene.koc...@emea.schange.com]
*Sent:* Monday, September 10, 2012 14:47
*To:* user@cassandra.apache.org
*Subject:* Re: High commit
) Restart Cassandra
Thanks
Rene
2012/9/7 Rob Coli rc...@palominodb.com
On Fri, Sep 7, 2012 at 6:38 AM, Rene Kochen rene.koc...@schange.com
wrote:
If I use node-tool drain, it does stop accepting writes and flushes the
tables. However, is it normal that the commit log files are not deleted
Hi All,
I have a question about node-tool drain on a single Cassandra 1.0.11 node:
If I use node-tool drain, it does stop accepting writes and flushes the
tables. However, is it normal that the commit log files are not deleted and
that it gets replayed?
Because if I do the following:
1) Write
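The sequence being tested reads roughly like this (a sketch; the stop/start steps are placeholders for however Cassandra is managed on the node):

```
nodetool -h localhost drain    # stop accepting writes, flush all tables
<stop the Cassandra process>
<start the Cassandra process>  # the question: should the commit log still replay?
```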
Okay, thanks for the info! I was just trying to understand what I saw.
2012/8/20 Tyler Hobbs ty...@datastax.com:
On Sun, Aug 19, 2012 at 6:27 AM, Rene Kochen rene.koc...@schange.com
wrote:
Why does it not increase when servicing a range operation?
It doesn't because, basically
Hi All,
I have a question about the ColumnFamilies.ReadCount counter.
I use:
- One node.
- Cassandra 1.0.10.
- No row cache.
- One table Products containing a few rows
If I use the cli command list Products, the ColumnFamilies.ReadCount
counter does not increase (also via nodetool cfstats).
if
Hi
I have a cluster of 7 nodes:
- Windows Server 2008
- Cassandra 0.7.10
- The nodes are identical (hardware, configuration and client request load)
- Standard batch file with 8GB heap
- I use disk_access_mode = standard
- Random partitioner
- TP stats shows no problems
- Ring command shows no
is telling it. Add
it as a new line at the bottom of cassandra-env.sh.
If it's still failing, watch the logs and see what it says when it marks the
other as being down.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 1/02/2012, at 11:12 PM, Rene
I have a cluster with seven nodes.
If I run the node-tool ring command on all nodes, I see the following:
Node1 says that node2 is down.
Node 2 says that node1 is down.
All other nodes say that everyone is up.
Is this normal behavior?
I see no network related problems. Also no problems between
Thanks for this very helpful info. It is indeed a production site which I
cannot easily upgrade. I will try the various gc knobs and post any positive
results.
-Original Message-
From: sc...@scode.org [mailto:sc...@scode.org] On Behalf Of Peter Schuller
Sent: Friday, January 20, 2012
Thanks for your comments. The application is indeed suffering from a freezing
Cassandra node. Queries are taking longer than 10 seconds at the moment of a
full garbage collect.
Here is an example from the logs. I have a three node cluster. At some point I
see on a node the following log:
Thanks for your quick response!
I am currently running the performance tests with extended gc logging. I will
post the gc logging if clients time out at the same moment that the full
garbage collect runs.
Thanks
Rene
-Original Message-
From: sc...@scode.org [mailto:sc...@scode.org]
Assume the following default settings: min_compaction_threshold = 4,
max_compaction_threshold = 32.
When I start a bulk insert in Cassandra, I see minor compactions work: all
similar-sized files are compacted when there are four of them. However, when
files get larger, Cassandra waits with
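The grouping described above can be sketched as a toy bucketing function (an illustration of size-tiered behavior with assumed 50% similarity bounds, not Cassandra's actual code):

```python
MIN_THRESHOLD = 4  # min_compaction_threshold

def buckets(sizes):
    """Group file sizes so members stay within 50% of their bucket's average."""
    result = []
    for size in sorted(sizes):
        for bucket in result:
            avg = sum(bucket) / len(bucket)
            if 0.5 * avg <= size <= 1.5 * avg:
                bucket.append(size)
                break
        else:
            result.append([size])
    return result

def ready_to_compact(sizes):
    return [b for b in buckets(sizes) if len(b) >= MIN_THRESHOLD]

# Four ~1 GB files (sizes in MB) qualify; the lone 10 GB file keeps waiting.
print(ready_to_compact([1000, 1100, 950, 1050, 10000]))  # [[950, 1000, 1050, 1100]]
```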
I'm using Cassandra 0.7.9.
OK, so in this version, Cassandra waits before compacting. But when (in my
original example) are the four 1 GB files compacted?
Thanks!
-Original Message-
From: Radim Kolar [mailto:h...@sendmail.cz]
Sent: Friday, November 4, 2011 15:55
To:
there are four similar sized files), but waits (up to 32)?
Thanks!
-Original Message-
From: Radim Kolar [mailto:h...@sendmail.cz]
Sent: Friday, November 4, 2011 16:48
To: user@cassandra.apache.org
Subject: Re: Question about minor compaction
On 4.11.2011 16:16, Rene Kochen wrote:
I'm
What is the difference between these JMX column family attributes:
TotalDiskSpaceUsed and LiveDiskSpaceUsed
Thanks!
Rene
Given the following log line:
DEBUG [ReadStage:20] 11:39:07,028 collecting 0 of 2147483647:
SuperColumn(2150726f70657274696573 [64617461:false:4@1319189945952058,])
What does false:4 in the column mean?
Thanks!
Rene
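For what it's worth, the names in that line are hex-encoded bytes and decode readily (stdlib sketch; the false:4 fields read to me as the column's deletion flag and value length, but that interpretation is an assumption):

```python
# Names from the quoted log line:
# SuperColumn(2150726f70657274696573 [64617461:false:4@1319189945952058,])
super_column_name = bytes.fromhex("2150726f70657274696573").decode("ascii")
column_name = bytes.fromhex("64617461").decode("ascii")

print(super_column_name)  # !Properties
print(column_name)        # data
```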
-
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Friday, October 21, 2011 13:16
To: user@cassandra.apache.org
Subject: Re: log line question
On Fri, Oct 21, 2011 at 12:48 PM, Rene Kochen
rene.koc...@emea.schange.com wrote:
Given the following log line:
DEBUG [ReadStage:20] 11:39
).
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6/10/2011, at 11:40 PM, Rene Kochen wrote:
Node 3 is up (using ring on node 1).
There is no HH task (active = 0, pending = 0, completed = 0, blocked = 0).
This is the log from node 1
idea to upgrade to the latest stable release
before spending a lot of time debugging :)
On Fri, Oct 7, 2011 at 8:33 AM, Rene Kochen
rene.koc...@emea.schange.com wrote:
If I trigger hint delivery using JMX, it works. I see in the log:
2011-10-07 15:17:51,216 INFO 15:17:51,216 Started hinted
I'm using Cassandra 0.7.7 and have a question about hinted handoff.
I have a cluster of three nodes.
I stop node 3.
I see that the hint count for node 3 increases on node 1 (countPendingHints =
28709).
However, when I start node 3 again, I cannot see anything in the log regarding
hinted
/2011, at 10:35 PM, Rene Kochen wrote:
I'm using Cassandra 0.7.7 and have a question about hinted handoff.
I have a cluster of three nodes.
I stop node 3.
I see that the hint count for node 3 increases on node 1 (countPendingHints =
28709).
However, when I start node 3 again, I cannot see
I try to understand the flushing behavior in Cassandra 0.8
When I create rows, after a few seconds, I see the following line in the log:
INFO 11:18:46,470 flushing high-traffic column family
ColumnFamilyStore(table='Traxis', columnFamily='Customers')
INFO 11:18:46,471 Enqueuing flush of
The args to the java process should include -javaagent:bin/../lib/jamm-0.2.2.jar
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 17 Jun 2011, at 22:18, Rene Kochen wrote:
Since using cassandra 0.8, I see the following warning
Since using cassandra 0.8, I see the following warning:
WARN 12:05:59,807 MemoryMeter uninitialized (jamm not specified as java agent);
assuming liveRatio of 10.0. Usually this means cassandra-env.sh disabled jamm
because you are using a buggy JRE; upgrade to the Sun JRE instead
I'm using
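The usual fix is the agent flag mentioned earlier in these threads, added as a new line at the bottom of cassandra-env.sh (a sketch; the jar path is relative to the install root and the version must match the bundled jamm):

```
JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.2.jar"
```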
: java.io.EOFException with version 0.7.6
Would you have a simple script to reproduce the issue ?
And could you open a JIRA ticket.
Sylvain
On Thu, May 19, 2011 at 4:22 PM, Rene Kochen
rene.koc...@emea.schange.com wrote:
I have some severe problems on our production site.
I created the following test
I have some severe problems on our production site.
I created the following test program to reproduce the issue with Cassandra
0.7.6 (with empty data set).
I use the following data-model
column_metadata: []
name: Customers
column_type: Super
gc_grace_seconds: 60
I have a super-column-family