> a table which is read-only and hence
> doesn't receive any writes, then no SSTables will be created, and hence, no
> compaction will happen. What compaction strategy do you have on your table?
>
> Best regards
>
> Pedro Gordo
>
> On 8 April 2016 at 10:42, Yatong Zha
that unless you needed to
grow your cluster really quickly, and were ok with corrupting your old data.
On Sat, Jan 10, 2015 at 12:39 AM, Yatong Zhang bluefl...@gmail.com
wrote:
Hi there,
I am using C* 2.0.10 and I was trying to add a new node to a
cluster (actually to replace a dead node
falling behind on compaction, it becomes difficult to
successfully bootstrap new nodes, and you're in a very tough spot.
On Wed, Jan 21, 2015 at 7:43 PM, Yatong Zhang bluefl...@gmail.com wrote:
Thanks for the reply. The bootstrap of the new node put a heavy burden on the
whole cluster, and I don't
Hi there,
I am using C* 2.0.10 and I was trying to add a new node to a
cluster (actually to replace a dead node). But after adding the new node, some
other nodes in the cluster had a very high workload, which affected the whole
cluster's performance.
So I am wondering, is there a way to add a new
Hi there,
I am using 2.0.10 and my Cassandra node has 6 disks and I configured 6 data
directories in cassandra.yaml. But the data was not evenly stored on these
6 disks:
disk1 67% used
disk2 100% used
disk3 100% used
disk4 76% used
disk5 69% used
disk6 81% used
So:
1. Is
leveled compaction
On Sat, Nov 1, 2014 at 5:53 PM, venkat sam samvenkat...@outlook.com wrote:
What compaction strategy are you using?
venkat
*From:* Yatong Zhang bluefl...@gmail.com
*Sent:* Saturday, November 1, 2014 12:32 PM
*To:* user@cassandra.apache.org
Hi there,
I am
Hi there,
I am using the leveled compaction strategy and have many sstable files. The
error occurred during startup, so any idea about this?
ERROR [FlushWriter:4] 2014-09-17 22:36:59,383 CassandraDaemon.java (line
199) Exception in thread Thread[FlushWriter:4,5,main]
java.lang.OutOfMemoryError:
No, I am running a 64-bit JVM. But I have many sstable files, about 30k+
On Wed, Sep 17, 2014 at 10:50 PM, graham sanderson gra...@vast.com wrote:
Are you running on a 32 bit JVM?
On Sep 17, 2014, at 9:43 AM, Yatong Zhang bluefl...@gmail.com wrote:
Hi there,
I am using leveled compaction
sorry, about 300k+
On Wed, Sep 17, 2014 at 10:56 PM, Yatong Zhang bluefl...@gmail.com wrote:
No, I am running a 64-bit JVM. But I have many sstable files, about 30k+
On Wed, Sep 17, 2014 at 10:50 PM, graham sanderson gra...@vast.com
wrote:
Are you running on a 32 bit JVM?
On Sep 17, 2014
Well, how do I upgrade from 2.0.x to 2.1? Just replace the Cassandra bin files?
On Wed, Sep 17, 2014 at 3:52 PM, Alex Popescu al...@datastax.com wrote:
Apologies for the late reply: the 2.1.x version of the C#, Java and Python
DataStax drivers support the new Cassandra 2.1 version.
Here's the
Other advice is also welcome.
On Wed, Sep 17, 2014 at 11:54 PM, Rahul Neelakantan ra...@rahul.be wrote:
What is your sstable size set to for each of the sstables, using LCS? Are
you at the default of 5 MB?
Rahul Neelakantan
On Sep 17, 2014, at 10:58 AM, Yatong Zhang bluefl...@gmail.com
setting it tries to use?
Chris
On Sep 15, 2014, at 8:16 PM, Yatong Zhang bluefl...@gmail.com wrote:
It's during startup. I tried to upgrade Cassandra from 2.0.7 to
2.0.10, but it looks like Cassandra could not start again. I also found the
following log in '/var/log/messages':
Sep 16 09:06
Hi there,
I just encountered an error which left a log '/hs_err_pid3013.log'. So is
there a way to solve this?
#
# There is insufficient memory for the Java Runtime Environment to
continue.
# Native memory allocation (malloc) failed to allocate 12288 bytes for
committing reserved memory.
#
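For what it's worth, this particular failure ("insufficient memory ... malloc failed") is a native allocation failure, not a Java heap OOM. One common culprit when a node carries a very large SSTable count (an assumption here, not something confirmed in the thread) is exhausting the kernel's per-process memory-map limit, since each SSTable can hold several mmapped regions. A minimal Linux-only sketch for checking the limit:

```python
# Sketch (assumes Linux): read the kernel's per-process memory-map limit.
# A node with hundreds of thousands of SSTables can exhaust this limit
# long before the Java heap is full.
def read_max_map_count(path="/proc/sys/vm/max_map_count"):
    # The file holds a single integer, e.g. "65530".
    with open(path) as f:
        return int(f.read().strip())

# Usage on a Linux box: print(read_max_map_count())
```

If the value is at the common default (65530), raising it with `sysctl -w vm.max_map_count=...` is a frequently suggested mitigation; whether it applies here depends on the actual failure mode recorded in the hs_err log.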
, 2014 at 9:00 AM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Sep 15, 2014 at 5:55 PM, Yatong Zhang bluefl...@gmail.com wrote:
I just encountered an error which left a log '/hs_err_pid3013.log'. So is
there a way to solve this?
# There is insufficient memory for the Java Runtime
Hi,
I am using leveled compaction and I changed the replication factor from 3
to 2, but after a few days the disk space wasn't freed. I tried to trigger
compaction or cleanup, but it looks like it didn't have any effect.
/$(cat /var/run/cassandra.pid)/limits
as root or your cassandra user will tell you what limits it's actually
running with.
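To complement the /proc check above, a process can also read its own limits with the standard `resource` module; a small sketch (the Cassandra JVM would report its own values the same way):

```python
import resource

# Report the file-descriptor limits of the current process: the same numbers
# that /proc/<pid>/limits shows on its "Max open files" line.
def nofile_limits():
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return soft, hard

if __name__ == "__main__":
    soft, hard = nofile_limits()
    print(f"open files: soft={soft} hard={hard}")
```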
On Sun, May 4, 2014 at 10:12 PM, Yatong Zhang bluefl...@gmail.com wrote:
I was running 'repair' when the error occurred. And just a few days
before I changed
Hi Michael, thanks for the reply,
I would RAID0 all those data drives, personally, and give up managing them
separately. They are on multiple PCIe controllers, one drive per channel,
right?
RAID 0 is a simple way to go, but one disk failure can bring the whole
volume down, so I am afraid RAID
Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 9/05/2014, at 12:09 pm, Yatong Zhang bluefl...@gmail.com wrote:
Hi,
We're going to deploy a large Cassandra cluster at the PB level. Our scenario
would be:
1. Lots of writes, about 150 writes/second on average
if this has
side effects or not, but this is a solution for me. I hope this will be
useful to those who have had similar issues.
On Sun, May 4, 2014 at 5:10 PM, Yatong Zhang bluefl...@gmail.com wrote:
I am using the latest 2.0.7. The 'nodetool tpstats' shows as:
[root@storage5 bin]# ./nodetool tpstats
Hi,
We're going to deploy a large Cassandra cluster at the PB level. Our scenario
would be:
1. Lots of writes, about 150 writes/second on average, and about 300K in size
per write.
2. Relatively very few reads
3. Our data will never be updated
4. But we will delete old data periodically to free space
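Taking point 1 at face value, a quick back-of-envelope on the ingest rate (the figures below are derived only from the 150 writes/s and ~300K-per-write stated above):

```python
# Back-of-envelope sizing from the stated workload: 150 writes/s, ~300 KB each.
writes_per_sec = 150
write_size_bytes = 300 * 1024

ingest_bps = writes_per_sec * write_size_bytes       # bytes per second
ingest_per_day_tib = ingest_bps * 86_400 / 1024**4   # TiB of raw data per day

print(f"{ingest_bps / 1024**2:.1f} MiB/s, {ingest_per_day_tib:.2f} TiB/day")
```

That works out to roughly 44 MiB/s and about 3.6 TiB/day before replication and compaction overhead, which is why the periodic deletion in point 4 matters so much at PB scale.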
I mean insert/write data. When data fills the memtable, the memtable is
flushed to disk as an SSTable; when a new SSTable is created, Cassandra checks
whether compaction is needed and, if so, triggers one.
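That flush-then-check cycle can be sketched as a toy model (illustrative only; the class name and row-count thresholds below are made up, not Cassandra internals): writes fill a memtable, a full memtable flushes to a new SSTable, and each flush gives the compaction check a chance to fire. A table receiving no writes never flushes, so it never compacts.

```python
# Toy model of the write -> flush -> compaction-check cycle. All names and
# thresholds are illustrative; real Cassandra sizes memtables in bytes and
# buckets SSTables by size.
class ToyTable:
    def __init__(self, memtable_limit=4, compaction_threshold=4):
        self.memtable = []
        self.sstables = []              # one list of rows per flushed SSTable
        self.memtable_limit = memtable_limit
        self.compaction_threshold = compaction_threshold
        self.compactions = 0

    def write(self, row):
        self.memtable.append(row)
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # A full memtable becomes an immutable SSTable on disk.
        self.sstables.append(list(self.memtable))
        self.memtable = []
        self.maybe_compact()

    def maybe_compact(self):
        # Size-tiered idea: merge once enough SSTables have accumulated.
        if len(self.sstables) >= self.compaction_threshold:
            merged = [row for table in self.sstables for row in table]
            self.sstables = [merged]
            self.compactions += 1
```

In this model, 16 writes produce four flushes and one compaction, while a read-only table (zero writes) produces no SSTables and therefore no compaction, matching the point made at the top of this thread.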
*From:* Yatong Zhang [mailto:bluefl...@gmail.com bluefl...@gmail.com]
*Sent:* Monday, May 5, 2014 9
*From:* Yatong Zhang [mailto:bluefl...@gmail.com]
*Sent:* Sunday, May 4, 2014 5:22 AM
*To:* user@cassandra.apache.org
*Subject:* Is the updating compaction strategy from 'sized
I tried to run 'nodetool compact' (or with keyspace and cfname), but it seemed
not to work. The command hung there and nothing happened. But I noticed that
there were about 3+ more pending tasks when using 'nodetool
compactionstats'.
On Tue, May 6, 2014 at 1:45 AM, Robert Coli rc...@eventbrite.com
Zhang bluefl...@gmail.com wrote:
My Cassandra cluster has plenty of free space; for now only about 30% of the
space is used
On Sun, May 4, 2014 at 6:36 AM, Yatong Zhang bluefl...@gmail.com wrote:
Hi there,
It was strange that the 'xxx-tmp-xxx.db' file kept increasing until
Cassandra threw
, Yatong Zhang bluefl...@gmail.com wrote:
after restarting or running 'cleanup', the big tmp file is gone and all looks
fine:
-rw-r--r-- 1 root root 19K Apr 30 13:58
mydb_oe-images-tmp-jb-96242-CompressionInfo.db
-rw-r--r-- 1 root root 145M Apr 30 13:58
mydb_oe-images-tmp-jb-96242-Data.db
-rw-r--r
On Sun, May 4, 2014 at 10:39 AM, Yatong Zhang bluefl...@gmail.com wrote:
Yes, after a while the disk fills up again. So I changed the compaction
strategy from 'size tiered' to 'leveled' to reduce disk usage during
compaction, but the problem still occurs.
This table has lots of writes
Hi there,
After I changed the compaction strategy to 'leveled', one of my nodes keeps
reporting 'too many open files'. But I have already applied the configuration
from http://www.datastax.com/docs/1.1/install/recommended_settings and
http://www.datastax.com/docs/1.1/troubleshooting/index#toomany
I am
[root@storage5 ~]# lsof -n | grep java | wc -l
5103
[root@storage5 ~]# lsof | wc -l
6567
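The counts above come from running lsof over the whole box; a lighter per-process alternative (a Linux-specific sketch) is to list the process's fd directory directly:

```python
import os

# Count open file descriptors for one process by listing /proc/<pid>/fd.
# Linux-specific; "self" means the calling process. For Cassandra you would
# pass its pid, e.g. open_fd_count(3013).
def open_fd_count(pid="self"):
    return len(os.listdir(f"/proc/{pid}/fd"))

if __name__ == "__main__":
    print("open fds:", open_fd_count())
```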
It's mentioned in previous mail:)
On Mon, May 5, 2014 at 9:03 AM, nash nas...@gmail.com wrote:
The lsof command or /proc can tell you how many open files it has. How
many is it?
--nash
philip.per...@gmail.com wrote:
Have you tried running ulimit -a as the Cassandra user instead of as
root? It is possible that you configured a high file limit for root but
not for the user running the Cassandra process.
On Sun, May 4, 2014 at 6:07 PM, Yatong Zhang bluefl...@gmail.com wrote
I was running 'repair' when the error occurred. And just a few days before,
I changed the compaction strategy to 'leveled'. Don't know if this helps.
On Mon, May 5, 2014 at 1:10 PM, Yatong Zhang bluefl...@gmail.com wrote:
Cassandra is running as root
[root@storage5 ~]# ps aux | grep java
root
Hi,
I changed the compaction strategy from 'size tiered' to 'leveled', but after
running for a few days C* still tries to compact some old large sstables,
say:
1. I have 6 disks per node and 6 data directories, one per disk
2. There are some old huge sstables generated when using 'size tiered'
compaction
Hi there,
It was strange that the 'xxx-tmp-xxx.db' file kept increasing until
Cassandra threw exceptions with 'No space left on device'. I am using CQL 3
to create a table to store data of about 200K ~ 500K per record. I have 6
hard disks per node and Cassandra was configured with 6 data
My Cassandra cluster has plenty of free space; for now only about 30% of the
space is used
On Sun, May 4, 2014 at 6:36 AM, Yatong Zhang bluefl...@gmail.com wrote:
Hi there,
It was strange that the 'xxx-tmp-xxx.db' file kept increasing until
Cassandra threw exceptions with 'No space left
Hi there,
I am updating the compaction strategy from 'size tiered' to 'leveled', and from
http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra it
is said:
When updating an existing column family, reads and writes can continue as
usual while leveling of existing sstables is
Hi there,
I have the following configuration:
data_file_directories:
- /data1/cass
- /data2/cass
- /data3/cass
- /data4/cass
- /data5/cass
- /data6/cass
and each directory resides on a separate stand-alone hard disk. My questions
are:
1. Will Cassandra split data
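As a toy illustration only (this is a guessed policy, not necessarily what Cassandra 2.0 actually does), one simple way a database could allocate new files across several data directories is to always pick the one with the most free space; since SSTables vary widely in size, the disks can still end up unevenly filled, much like the usage spread reported earlier in this thread:

```python
# Hypothetical allocation policy: place each new SSTable in whichever data
# directory currently has the most free space. Illustrative only.
def pick_directory(free_bytes):
    """free_bytes maps directory path -> free space in bytes."""
    return max(free_bytes, key=free_bytes.get)

free = {
    "/data1/cass": 300 * 1024**3,   # 300 GiB free
    "/data2/cass": 120 * 1024**3,
    "/data3/cass": 80 * 1024**3,
}
print(pick_directory(free))
```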
Thank you Arindam, that helps
On Wed, Apr 30, 2014 at 5:32 PM, Arindam Barua aba...@247-inc.com wrote:
This thread should answer your questions:
http://stackoverflow.com/questions/15925549/how-cassandra-split-keyspace-data-when-multiple-dirctories-found
*From:* Yatong Zhang