You are saying I am doing 36,000 inserts per second when I am inserting 600
rows. I thought that every row goes to one node, so the work is done for a
row, not a column. So my assumption is NOT true, and the work is done at the
column level? And if I reduce the number of columns I will get a lower rate?
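For what it's worth, the column-level accounting works out like this. A quick sketch, using the 600 rows/sec and 60 columns/row figures from this thread:

```python
# Cassandra does work per column mutation, not per row, so the effective
# insert rate is rows per second multiplied by columns per row.
rows_per_sec = 600      # from the thread: 600 rows inserted per second
columns_per_row = 60    # each row has 60 sensor columns

column_writes_per_sec = rows_per_sec * columns_per_row
print(column_writes_per_sec)  # 36000
```

This is where the 36,000/sec figure comes from, and it also shows why reducing the number of columns per row reduces the write load proportionally.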
How reproducible is this stack overflow?
If you can reproduce it at will then I would like to see if you can
also reproduce against
(a) a single node Windows machine
(b) a single node Linux machine
On Fri, Sep 24, 2010 at 3:03 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
Nothing is working,
[note: i put user@ back on CC but I'm not quoting the source code]
Here is the code I am using (this is only for testing Cassandra; it is not
going to be used in production). I am new to Java, but I tested this and it
seems to work fine when running for a short amount of time:
If you mean to ask
On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
Thanks for the help.
We have 2 drives using basic configurations: commitlog on one drive and data
on another.
And yes, the CL for writes is 3; however, the CL for reads is 1.
It is simply not possible that you are
On Mon, Sep 27, 2010 at 2:51 PM, Benjamin Black b...@b3k.us wrote:
On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
Sorry, 3 means QUORUM.
On 9/27/2010 2:55 PM, Benjamin Black wrote:
On Mon, Sep 27, 2010 at 2:51 PM, Benjamin Black b...@b3k.us wrote:
On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
It's actually split across 8 different processes that are doing the insertion.
Thanks
On 9/27/2010 2:03 PM, Peter Schuller wrote:
I can test the single node on Windows now..
On 9/27/2010 2:02 PM, Jonathan Ellis wrote:
What is your RF?
On Mon, Sep 27, 2010 at 3:13 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
Does that mean you are doing 600 rows/sec per process or 600/sec total
across all processes?
On Mon, Sep 27, 2010 at 3:14 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
RF=2
Each process is processing 75 rows.
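The totals implied by these numbers can be checked with a quick sketch, assuming each process inserts its 75 rows once per second (consistent with the 600 rows/sec total discussed earlier in the thread):

```python
# Eight client processes, each inserting 75 rows per cycle; with one
# cycle per second the cluster-wide totals work out as below.
processes = 8
rows_per_process = 75
columns_per_row = 60

total_rows = processes * rows_per_process        # rows per second, all processes
total_columns = total_rows * columns_per_row     # column writes per second
print(total_rows, total_columns)  # 600 36000
```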
So, do you think that the cause of my problems is the high rate of inserts I
am doing (coupled with the reads)? Taking into consideration that the first
errors were heap overflow, and after I disabled swapping it was stack
overflow?
I will try another
On Mon, Sep 27, 2010 at 3:48 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
RF=2
With RF=2, QUORUM and ALL are the same. Again, your logs show you are
attempting to insert about 180,000 columns/sec. The only way that is
possible with your hardware is if you are using CL.ZERO. The
available
It is odd that you are able to do 36000/sec _at all_ unless you are
using CL.ZERO, which would quickly lead to OOM.
The problem with the hypothesis as far as I can tell is that the
hotspot error log's heap information does not indicate that he's close
to maxing out his heap. And I don't believe
Looking further, I would expect your 36000 writes/sec to trigger a
memtable flush every 8-9 seconds (which is already crazy), but you are
actually flushing them every ~1.7 seconds, leading me to believe you
are writing a _lot_ faster than you think you are.
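The ratio argument here can be sanity-checked numerically. A rough sketch, using the quoted 8-9 second expectation (midpoint assumed) and the ~1.7 second interval observed in the logs:

```python
# If 36,000 column writes/sec should fill a memtable in roughly 8.5
# seconds, flushing every ~1.7 seconds implies roughly 5x that rate.
expected_interval = 8.5   # seconds, midpoint of the quoted 8-9 s estimate
observed_interval = 1.7   # seconds, observed between memtable flushes

implied_multiplier = expected_interval / observed_interval
print(round(implied_multiplier, 1))  # 5.0
```

In other words, the flush cadence suggests the actual write rate is around five times the assumed 36,000 columns/sec, which is consistent with the ~180,000 columns/sec figure mentioned elsewhere in the thread.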
INFO [ROW-MUTATION-STAGE:21]
looks like you're OOMing trying to compact a very large row. solution:
smaller rows, or larger heap.
On Fri, Sep 24, 2010 at 3:03 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
Nothing is working; after disabling swap entirely, the heap is not
exhausted, but Cassandra crashed with an out-of-memory error.
My rows consist of only *60* columns, and these 60 columns look like this:
ColumnName: Sensor59 -- Value: 434.2647915698039 -- TTL: 10800
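A hypothetical reconstruction of that row shape in Python (illustrative only; the real client was Java, and the dict layout here is invented, but the column names, value type, and TTL follow the sample column above):

```python
import random

# Each row holds 60 "SensorNN" columns, each a float value with a
# 10800-second (3 hour) TTL, matching the sample column in the thread.
TTL_SECONDS = 10800

row = {
    "Sensor%d" % i: {"value": random.uniform(0, 1000), "ttl": TTL_SECONDS}
    for i in range(60)
}
print(len(row))           # 60
print("Sensor59" in row)  # True
```

A row this small (60 columns of a few dozen bytes each) should not by itself stress compaction, which is why the discussion turns to the aggregate write rate rather than row size.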
On 9/24/2010 3:42 PM, Jonathan Ellis wrote:
I decreased the heap size; it did not help, however, it delayed the problem.
I noticed that it's swapping, so do you think that I should set Windows not
to swap?
I'm not sure what's best done on Windows. For Linux/Unix there is
some discussion on:
Disabling swap entirely is usually the easiest fix, yes.
On Mon, Sep 20, 2010 at 8:10 PM, Alaa Zubaidi alaa.zuba...@pdf.com wrote:
Do you think it's related to this issue?
https://issues.apache.org/jira/browse/CASSANDRA-1014
Thanks,
Alaa
Thread pools are part of the architecture; take a look at the SEDA paper referenced at the bottom of this page: http://wiki.apache.org/cassandra/ArchitectureInternals. The number of threads in each pool is used to govern the resources available to that part of the processing pipeline.
Aaron
On 19 Sep,
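The SEDA idea Aaron refers to can be sketched minimally (illustrative only; the `Stage` class and its names are invented here, and this is not Cassandra's actual implementation): each stage is a queue drained by a fixed-size pool of worker threads, so the thread count bounds the resources that stage may consume.

```python
import queue
import threading

class Stage:
    """A SEDA-style stage: an event queue plus a bounded worker pool."""
    def __init__(self, name, handler, num_threads):
        self.name = name
        self.tasks = queue.Queue()
        self.handler = handler
        # The pool size caps how much work this stage can do concurrently.
        self.workers = [
            threading.Thread(target=self._run, daemon=True)
            for _ in range(num_threads)
        ]
        for w in self.workers:
            w.start()

    def _run(self):
        while True:
            task = self.tasks.get()
            self.handler(task)
            self.tasks.task_done()

    def submit(self, task):
        self.tasks.put(task)

# Usage: a "mutation" stage with 4 threads, loosely analogous to the
# ROW-MUTATION-STAGE thread pool seen in the logs above.
results = []
stage = Stage("mutation", results.append, num_threads=4)
for i in range(10):
    stage.submit(i)
stage.tasks.join()  # wait until all submitted tasks are processed
print(sorted(results))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

This also explains the many threads seen in `top`: each stage keeps its pool alive even when idle, which is normal for this design.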
I see a spike in heap memory usage on Node 2, where it goes from around 1G to
6GB (max) in less than an hour, and then it goes out of memory.
There are some errors in the log file that are reported by other people, but
I don't think that these errors are the reason, because it used to happen
even
I would like to add something here, and correct me if I am wrong: I
downloaded the 0.7 beta and ran it. Just by chance I checked 'top' to see how
the new version is doing, and there were 64 processes running even though
Cassandra was on a single node with default configuration options (ran it as
is,
Hi Peter
I actually checked after 15-20 minutes of observing the monitor and logs;
when everything had calmed down it was still showing this many processes.
Shouldn't it be good to reduce the number of threads once the server is idle
or almost idle? As I am not a Java guy, the only thing that I can think of is
that