PS: everything above is in bytes, not bits.
On Fri, Aug 31, 2012 at 11:03 AM, rohit bhatia rohit2...@gmail.com wrote:
I was wondering how much would be the memory usage of an established
connection in cassandra's heap space.
We are noticing extremely frequent young generation garbage
On Fri, Aug 31, 2012 at 11:27 AM, Peter Schuller
peter.schul...@infidyne.com wrote:
Could these 500 connections/second be causing (on average) 2600MB of memory
usage per 2 seconds, i.e. ~1300MB/second,
or around 2-3MB per connection?
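As a sanity check on those numbers, here is the back-of-the-envelope arithmetic (a sketch only; the per-connection figure is derived from the observed rates above, not measured directly):

```python
# Rough arithmetic behind the per-connection estimate above.
conns_per_sec = 500        # observed connection rate
garbage_mb_per_2s = 2600   # young-gen garbage observed over ~2 seconds

garbage_mb_per_sec = garbage_mb_per_2s / 2                # 1300 MB/s
garbage_per_conn_mb = garbage_mb_per_sec / conns_per_sec  # 2.6 MB per connection

print(garbage_mb_per_sec, garbage_per_conn_mb)
```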
In terms of garbage generated it's much less about number of
@dong, any reason to do so??
On Sun, Sep 9, 2012 at 4:43 PM, dong.yajun dongt...@gmail.com wrote:
running for a while. You should set -Xss to more than 160k when you are
using JDK 1.7.
On Sun, Sep 9, 2012 at 3:39 AM, Peter Schuller
peter.schul...@infidyne.com wrote:
Has anyone tried
We use counters in an 8-node cluster with RF 2 on Cassandra 1.0.5.
We use phpcassa and execute CQL queries through Thrift to work with
composite types.
We do not have any problem of overcounts, as we tally against an RDBMS daily.
It works fine, but we are having some GC pressure in the young generation.
2012/9/18 rohit bhatia rohit2
@Edward,
We use counters in production with Cassandra 1.0.5, though our
application is sensitive to write latency and we are seeing problems with
frequent young-generation garbage collections. Also, we only do increments
(decrements have caused problems for some people).
We don't see inconsistencies.
@Sylvain
In a relatively untroubled cluster, even timed-out writes go through,
provided no messages are dropped, which you can monitor on the Cassandra
nodes. We have 100% consistency on our production servers, as we don't
see messages being dropped on our servers.
Though as you mention, there would
I guess 7000 is only for the gossip protocol; Cassandra still uses 9160
for RPC even among nodes.
Also, I see connections over port 9160 among various Cassandra nodes
in my cluster.
Please correct me if I am wrong.
PS: mentioned here: http://wiki.apache.org/cassandra/CloudConfig
On Tue, Oct 2,
See
If you attempt to retrieve an entire row and it returns a result with
no columns, it effectively means that row does not exist.
Essentially, a row without columns doesn't exist (except those with tombstones).
from here
Reads during a write still occur during a counter increment with CL ONE,
but that latency is not counted in the request latency for the write. Your
local node write latency of 45 microseconds is pretty quick. What is your
timeout and the write request latency you see? In our deployment we had some
be the reason for an 8-second timeout.
On Sat, Dec 29, 2012 at 11:37 PM, André Cruz andre.c...@co.sapo.pt wrote:
On 29/12/2012, at 16:59, rohit bhatia rohit2...@gmail.com wrote:
Reads during a write still occur during a counter increment with CL ONE,
but that latency is not counted
MeteredFlusher uses to trigger memtable flushes.
Also, how accurate is the estimated size in the above logfile entry?
Regards
Rohit Bhatia
Software Engineer, Media.net
...@thelastpickle.com wrote:
See the section on memtable_total_space_in_mb here
http://thelastpickle.com/2011/05/04/How-are-Memtables-measured/
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/06/2012, at 2:27 AM, rohit bhatia wrote:
I am trying
in memory the JVM at once..
So it implies that, for flushing, Cassandra copies the memtable's contents.
Does this mean that writes to column families are not stopped even
while they are being flushed?
Thanks
Rohit
On Wed, Jun 6, 2012 at 9:42 AM, rohit bhatia rohit2...@gmail.com wrote:
Hi Aaron
no secondary indexes. The CF will not be allowed to get above one seventh
of 100MB, or ~14MB, as if the CF filled the flush pipeline with 7
memtables of this size it would take 98MB.
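The factor of seven here presumably counts one live memtable plus the flush pipeline (a memtable_flush_queue_size of 4 plus memtable_flush_writers of 2, the values discussed in this thread). A small sketch of that accounting; the formula is my reading of the passage above, not the actual MeteredFlusher code:

```python
# Assumed accounting: 1 live memtable + flush queue slots + flush writers.
memtable_total_space_mb = 100
memtable_flush_queue_size = 4
memtable_flush_writers = 2

pipeline_slots = 1 + memtable_flush_queue_size + memtable_flush_writers  # 7
per_cf_limit_mb = memtable_total_space_mb / pipeline_slots               # ~14.3 MB
worst_case_mb = 7 * 14  # seven memtables near the cap -> ~98 MB

print(pipeline_slots, round(per_cf_limit_mb, 1), worst_case_mb)
```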
On Wed, Jun 6, 2012 at 6:22 PM, rohit bhatia rohit2...@gmail.com wrote:
Hi..
the link http://thelastpickle.com/2011/05/04
Restart cassandra on new node with autobootstrap as true, seed node as
the existing node in the cluster and an appropriate token...
You should not need to run nodetool repair as autobootstrap would take
care of it.
On Thu, Jun 7, 2012 at 12:22 PM, Adeel Akbar
adeel.ak...@panasiangroup.com wrote:
16  100  Up  Normal  15.21 KB  61.52%  147906224866113468886003862620136792702
Thanks Regards
Adeel Akbar
-Original Message-
From: rohit bhatia [mailto:rohit2...@gmail.com]
Sent: Thursday, June 07, 2012 12:28 PM
To: user@cassandra.apache.org
Subject: Re
for 0.8
http://www.datastax.com/docs/0.8/operations/cluster_management#replacing-a-dead-node
On Thu, Jun 7, 2012 at 1:22 PM, rohit bhatia rohit2...@gmail.com wrote:
Pardon me for assuming that your new node was the same as the failed node.
Please see
http://www.datastax.com/docs/1.0
Hi
I can't find this in any documentation online, so just wanted to ask
Do all flush writers share the same flush queue or do they maintain
their separate queues..
Thanks
Rohit
Run nodetool -h localhost cfstats on the nodes; this gives node-specific,
column-family-based data.
Just run this for both nodes.
On Fri, Jun 8, 2012 at 12:46 PM, Prakrati Agrawal
prakrati.agra...@mu-sigma.com wrote:
Yes the code is the same for both 1 and 2 node cluster. It's a Hector
Is your client code making asynchronous requests? And what's your
replication factor and read consistency level?
In any case, 2 nodes might take as much time as one, but should not be
slower (unless you also doubled the data).
On Fri, Jun 8, 2012 at 2:41 PM, Prakrati Agrawal
Hi
My Cassandra node ran out of heap memory with this message:
GCInspector.java (line 88): Heap is .9934 full. Is this expected, or
should I adjust my flush_largest_memtable_at variable?
Also one change I did in my cluster was add 5 Column Families which are empty
Should empty ColumnFamilies
,
On Wed, Jun 13, 2012 at 6:30 PM, rohit bhatia rohit2...@gmail.com wrote:
Hi
My Cassandra node ran out of heap memory with this message:
GCInspector.java (line 88): Heap is .9934 full. Is this expected, or
should I adjust my flush_largest_memtable_at variable?
Also one change I did in my cluster
Looking at http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html
and server logs, I think my situation is this
The default Cassandra settings have the highest peak heap usage. The
problem with this is that it raises the possibility that, during the
CMS cycle, a collection of the young
with a
compaction or repair operation.
I would also consider experimenting on one node with 8GB / 800MB heap sizes.
More is not always better.
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 14/06/2012, at 8:05 PM, rohit bhatia wrote:
Looking
Hi
We have 8 cassandra 1.0.5 nodes with 16 cores and 32G ram, Heap size
is 12G, memtable_total_space_in_mb is one third = 4G, There are 12 Hot
CFs (write-read ratio of 10).
memtable_flush_queue_size = 4 and memtable_flush_writers = 2..
I got this log-entry MeteredFlusher.java (line 74)
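Applying the same one-live-memtable-plus-flush-pipeline accounting (the factor-of-7 reasoning quoted elsewhere in this thread; the formula is an assumption on my part, not taken from the MeteredFlusher source) to this cluster's settings gives the rough per-CF ceiling that would trigger a flush:

```python
# Per-CF ceiling for the cluster above, assuming 1 + queue + writers slots.
memtable_total_space_mb = 4096   # one third of the 12G heap
memtable_flush_queue_size = 4
memtable_flush_writers = 2

pipeline_slots = 1 + memtable_flush_queue_size + memtable_flush_writers  # 7
per_cf_limit_mb = memtable_total_space_mb // pipeline_slots              # 585 MB

print(pipeline_slots, per_cf_limit_mb)
```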
:41 AM, rohit bhatia wrote:
Hi
We have 8 cassandra 1.0.5 nodes with 16 cores and 32G ram, Heap size
is 12G, memtable_total_space_in_mb is one third = 4G, There are 12 Hot
CFs (write-read ratio of 10).
memtable_flush_queue_size = 4 and memtable_flush_writers = 2..
I got this log-entry
Our Cassandra cluster consists of 8 nodes(16 core, 32G ram, 12G Heap,
1600Mb Young gen, cassandra1.0.5, JDK 1.7, 128 Concurrent writer
threads). The replication factor is 2 with 10 column families and we
service Counter incrementing write intensive tasks(CL=ONE).
I am trying to figure out the
...
messages are caused by the PrintGCApplicationStoppedTime parameter, which
is supposed to be logged whenever threads reach a safepoint. Is there
any way I can figure out what caused the Java threads to pause?
Thanks
Rohit
On Thu, Jul 5, 2012 at 12:19 PM, rohit bhatia rohit2...@gmail.com wrote:
Our
http://cassandra.apache.org/ says 1.1.2
On Thu, Jul 5, 2012 at 7:46 PM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
I am planning to upgrade from 0.8.4 to 1.x. What's the latest stable
version?
Thanks
-Rajesh
for the help.
Hope that helps
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/07/2012, at 6:49 PM, rohit bhatia wrote:
Our Cassandra cluster consists of 8 nodes(16 core, 32G ram, 12G Heap,
1600Mb Young gen, cassandra1.0.5, JDK 1.7, 128 Concurrent
On Fri, Jul 6, 2012 at 9:44 AM, rohit bhatia rohit2...@gmail.com wrote:
On Fri, Jul 6, 2012 at 4:47 AM, aaron morton aa...@thelastpickle.com wrote:
12G Heap,
1600Mb Young gen,
Is a bit higher than the normal recommendation. 1600MB young gen can cause
some extra ParNew pauses.
Thanks
@ravi, you can increase the young gen size, keep a high tenuring
threshold, or increase the survivor ratio.
On Fri, Jul 6, 2012 at 4:03 AM, aaron morton aa...@thelastpickle.com wrote:
Ideally we would like to collect maximum garbage from ParNew itself, during
compactions. What are the steps to take towards
are in the queue, 1 is being flushed.
Is this correct?
On Wed, Jun 6, 2012 at 9:08 PM, rohit bhatia rohit2...@gmail.com wrote:
Also, could someone please explain how the factor of 7 comes into the
picture in this sentence:
For example if memtable_total_space_in_mb is 100MB, and
memtable_flush_writers
Hi
I want to take 2 nodes out of an 8-node cluster and use them in another
cluster, but I can't afford the overhead of streaming the data and
rebalancing the cluster. Since the replication factor is 2 in the first
cluster, I won't lose any data.
I'm planning to save my commit_log and data directories and
Hi
As I understand it, writes in Cassandra are pushed directly to memory,
and using counters with CL.ONE shouldn't take the read latency for
counters into account. So writes incrementing counters with CL.ONE
should basically be really fast.
But in my 8-node cluster (16 core/32G
.
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 11/07/2012, at 6:35 AM, rohit bhatia wrote:
Hi
I want to take 2 nodes out of an 8-node cluster and use them in another
cluster, but I can't afford the overhead of streaming the data and
rebalancing the cluster. Since
Hi,
I don't think that composite columns have parent columns. Your point
might be true for supercolumns,
but each composite column is probably independent.
On Wed, Jul 18, 2012 at 9:14 PM, Thomas Van de Velde
thomase...@gmail.com wrote:
Hi there,
I am trying to understand the expiration
You should probably try to break the one-row scheme into a
2*number_of_nodes-row scheme. This should ensure proper distribution
of rows and still allow you to query from a small, fixed number of rows.
How you do it depends on how you are going to choose your 200-500 columns
during reading (try having them in the
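A minimal sketch of that sharding idea (the names and the CRC-based shard choice are illustrative assumptions, not from the thread): with 8 nodes, columns are spread across 2*8 = 16 row keys, and a reader only has to fan out over those 16 rows and merge the columns client-side.

```python
import zlib

NUM_NODES = 8
NUM_SHARDS = 2 * NUM_NODES  # 16 shard rows instead of 1 hot row

def shard_row_key(base_key: str, column_name: str) -> str:
    """Pick a deterministic shard row for a column, spreading load evenly."""
    shard = zlib.crc32(column_name.encode("utf-8")) % NUM_SHARDS
    return f"{base_key}:{shard}"

# All 16 candidate rows a reader would need to scan:
candidate_rows = [f"events:{i}" for i in range(NUM_SHARDS)]
```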