I think they may have changed the release schedule of Cassandra. I talk a lot
with one of their key developers, and 3.0 was going to drop off-heap memtables
for several releases due to a rewrite of the storage engine to be more
CQL-friendly.
2.2 will take all of the improvements in 3.0 but not
How many SSTables were there? What compaction strategy are you using? These
properties determine how many disk reads Cassandra may have to do to fetch
all the data you need, depending on which SSTables hold data for your
partition key.
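For reference, a couple of ways to check this in Cassandra 2.x (a sketch; the keyspace/table names `ks` / `mytable` are placeholders, not from this thread):

```
# SSTable count (and other per-table stats):
nodetool cfstats ks.mytable

# Histogram of how many SSTables are consulted per read:
nodetool cfhistograms ks mytable

# In cqlsh, running "TRACING ON;" before a SELECT shows exactly
# which SSTables a single query touched.
```

`cfstats` reports the SSTable count directly, while `cfhistograms` shows the distribution of SSTables hit per read, which is the read-amplification number that matters here.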
On Fri, May 8, 2015 at 6:25 PM, Alprema alpr...@alprema.com wrote:
Please find attached the error log:
hs_err_pid9656.log
https://docs.google.com/a/indiamart.com/file/d/0B0hlSlesIPVfaU9peGwxSXdsZGc/edit?usp=drive_web
On Mon, May 11, 2015 at 3:58 PM, Rahul Bhardwaj
rahul.bhard...@indiamart.com wrote:
free RAM:
free -m
total used free shared
Hi All,
We have a cluster of 3 nodes with 64 GB RAM each. My cluster was running in a
healthy state. Suddenly one machine's Cassandra daemon stopped working and
shut down.
On restarting it, after 2 minutes it stops again, returning the error below
in cassandra.log:
Java HotSpot(TM)
The amount of memory Cassandra is trying to allocate is pretty small. Are you
sure there is no hardware failure on the machine? What is the free RAM on the
box?
On Mon, May 11, 2015 at 3:28 PM, Rahul Bhardwaj
rahul.bhard...@indiamart.com wrote:
Hi All,
We have a cluster of 3 nodes with 64 GB RAM each. My
free RAM:
free -m
             total       used       free     shared    buffers     cached
Mem:         64398      23753      40644          0        108       8324
-/+ buffers/cache:      15319      49078
Swap:         2925         15       2909
ulimit -a
core file size (blocks,
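For what it's worth, JVM "failed to commit memory" errors on Linux are often caused by resource limits rather than a genuine lack of RAM. A sketch of the settings the Cassandra production documentation recommends checking (the values shown are the commonly cited recommendations, not taken from this box):

```
# /etc/security/limits.conf, for the user running Cassandra
cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768
cassandra - as      unlimited

# The JVM mmaps many segments; too low a map count can make
# os::commit_memory fail even with plenty of free RAM:
sysctl -w vm.max_map_count=131072
```

If `ulimit -a` for the Cassandra user shows a small max address space or locked-memory limit, that alone can reproduce this crash.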
According to the trace log, only one SSTable was read; the compaction strategy
is size-tiered.
I attached a more readable version of my trace for details.
On Mon, May 11, 2015 at 11:35 AM, Anishek Agarwal anis...@gmail.com wrote:
How many SSTables were there? What compaction strategy are you using? These
Well, I haven't used 2.1.x Cassandra or Java 8, but is there any reason for
not using the Oracle JDK, as I thought that's what is recommended? I saw a
thread earlier stating that Java 8 with Cassandra 2.0.14+ is tested, but I'm
not sure about the 2.1.x versions.
On Mon, May 11, 2015 at 4:04 PM, Rahul Bhardwaj
But it is giving the same error with Java 7 and OpenJDK.
On Mon, May 11, 2015 at 5:26 PM, Anishek Agarwal anis...@gmail.com wrote:
Well, I haven't used 2.1.x Cassandra or Java 8, but is there any reason for
not using the Oracle JDK, as I thought that's what is recommended? I saw a
thread earlier stating Java 8
Hi Robert,
I saw you answering the same problem somewhere, but no solution was found.
Please check again.
Regards,
Rahul Bhardwaj
On Mon, May 11, 2015 at 5:49 PM, Rahul Bhardwaj
rahul.bhard...@indiamart.com wrote:
But it is giving the same error with Java 7 and OpenJDK.
On Mon, May 11, 2015 at 5:26
Hi,
I am trying to port an Oracle table to Cassandra.
The table is a wide table (931 columns) and could have millions of rows:
name, filter1, filter2 ... filter30, data1, data2 ... data900
The user would retrieve multiple rows from this table and filter (on the 30
filter columns) by one or more (up to 3)
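A minimal CQL sketch of one way such a table might be modeled (hypothetical names and types; it assumes the up-to-3 filter columns used in queries are known in advance so they can be clustering columns, otherwise separate query tables or secondary indexes would be needed):

```
CREATE TABLE wide_table (
    name    text,
    filter1 text,
    filter2 text,
    filter3 text,
    -- ... filter4..filter30 and data2..data900 elided
    data1   text,
    PRIMARY KEY ((name), filter1, filter2, filter3)
);

-- Clustering order means queries must restrict the filter
-- columns left to right:
SELECT * FROM wide_table
 WHERE name = 'x' AND filter1 = 'a' AND filter2 = 'b';
```

The design trade-off: clustering columns give efficient server-side filtering, but only in the declared order; arbitrary combinations of 3-of-30 filters generally mean maintaining multiple query tables.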
On 05/11/2015 12:01 PM, Rahul Bhardwaj wrote:
Hi All,
I have a 3 node cluster; one node in this cluster is going down due to the
error below:
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to protect stack
guard pages failed.
Java HotSpot(TM) 64-Bit Server VM warning: INFO:
Hi All,
I have a 3 node cluster; one node in this cluster is going down due to the
error below:
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to protect stack guard
pages failed.
Java HotSpot(TM) 64-Bit Server VM warning: INFO:
os::commit_memory(0x7f14560cd000, 12288, 0) failed;
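A few things worth checking when os::commit_memory fails like this (a diagnostic sketch, not specific to this box; `<cassandra_pid>` is a placeholder):

```
# Is the kernel using strict overcommit accounting? (2 = strict)
cat /proc/sys/vm/overcommit_memory
# Compare the commit limit against what is already committed:
grep -i commit /proc/meminfo
# Map-count exhaustion also surfaces as commit_memory failures:
cat /proc/sys/vm/max_map_count
wc -l /proc/<cassandra_pid>/maps
```

If `Committed_AS` is near `CommitLimit`, or the process map count is near `vm.max_map_count`, even a tiny 12 KB allocation like the one in this error can be refused despite `free -m` showing plenty of memory.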