Re: disable compaction if all data are read-only?

2016-04-09 Thread Yatong Zhang
els. Without compaction, every query would have to look at every >> SSTable. >> >> Also, due to commit log rotation, your memtable may get flushed from time >> to time before it is full, resulting in small SSTables that would benefit >> from compaction. >> >> O

Re: disable compaction if all data are read-only?

2016-04-08 Thread Yatong Zhang
read-only and hence > doesn't receive any writes, then no SSTables will be created, and hence, no > compaction will happen. What compaction strategy do you have on your table? > > Best regards > > Pedro Gordo > > On 8 April 2016 at 10:42, Yatong Zhang wrote: > >>

disable compaction if all data are read-only?

2016-04-08 Thread Yatong Zhang
Hi there, I am wondering if it is possible to disable compaction when all my data are read-only?
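For reference, compaction can be switched off per table. A minimal sketch, assuming a recent Cassandra (2.1 documents the 'enabled' compaction subproperty) and placeholder names mykeyspace/mytable:

    # persist the setting in the schema (survives restarts):
    echo "ALTER TABLE mykeyspace.mytable WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'enabled': 'false'};" | cqlsh
    # or toggle it at runtime only (reverts when the node restarts):
    nodetool disableautocompaction mykeyspace mytable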

Re: Is there a way to add a new node to a cluster but not sync old data?

2015-01-21 Thread Yatong Zhang
nding tasks is <5, preferably 0 or > 1). Once you're falling behind on compaction, it becomes difficult to > successfully bootstrap new nodes, and you're in a very tough spot. > > > On Wed, Jan 21, 2015 at 7:43 PM, Yatong Zhang wrote: > >> Thanks for the reply. The

Re: Is there a way to add a new node to a cluster but not sync old data?

2015-01-21 Thread Yatong Zhang
ootstrapping would have. > > I can't think of any reason you'd want to do that unless you needed to > grow your cluster really quickly, and were ok with corrupting your old data. > > On Sat, Jan 10, 2015 at 12:39 AM, Yatong Zhang > wrote: > >> Hi there, >>

Is there a way to add a new node to a cluster but not sync old data?

2015-01-09 Thread Yatong Zhang
Hi there, I am using C* 2.0.10 and I was trying to add a new node to a cluster (actually to replace a dead node). But after adding the new node, some other nodes in the cluster had a very high workload, which affected the overall performance of the cluster. So I am wondering: is there a way to add a new node
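Since the actual goal here was replacing a dead node, a sketch of the usual C* 2.0 approach, with <dead_node_ip> as a placeholder: start the replacement node with the replace_address flag so it bootstraps only the ranges the dead node owned.

    # in cassandra-env.sh on the replacement node, before its first start:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<dead_node_ip>"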

Re: The data didn't spread evenly on disks

2014-11-01 Thread Yatong Zhang
leveled compaction On Sat, Nov 1, 2014 at 5:53 PM, venkat sam wrote: > What compaction strategy are you using? > > >venkat > *From:* Yatong Zhang > *Sent:* Saturday, November 1, 2014 12:32 PM > *To:* user@cassandra.apache.org > > Hi there, > > I

The data didn't spread evenly on disks

2014-11-01 Thread Yatong Zhang
Hi there, I am using 2.0.10 and my Cassandra node has 6 disks and I configured 6 data directories in cassandra.yaml. But the data was not evenly stored on these 6 disks: disk1 67% used > disk2 100% used > disk3 100% used > disk4 76% used > disk5 69% used > disk6 81% used > > So: 1.
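To quantify the imbalance per data directory, a quick check (paths are illustrative):

    du -sh /data1/cass /data2/cass /data3/cass /data4/cass /data5/cass /data6/cass
    df -h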

Re: hs_err_pid3013.log, out of memory?

2014-09-17 Thread Yatong Zhang
at are the heap >> setting it tries to use? >> >> Chris >> >> On Sep 15, 2014, at 8:16 PM, Yatong Zhang wrote: >> >> It's during the startup. I tried to upgrade cassandra from 2.0.7 to >> 2.0.10, but looks like cassandra could not start again. Also I found

Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
e this would be helpful to others, but any other advice is also welcome On Wed, Sep 17, 2014 at 11:54 PM, Rahul Neelakantan wrote: > What is your sstable size set to for each of the sstables, using LCS? Are > you at the default of 5 MB? > > Rahul Neelakantan > > On Sep 17, 2014,

Re: [RELEASE] Apache Cassandra 2.1.0

2014-09-17 Thread Yatong Zhang
Well, how to upgrade from 2.0.x to 2.1? Just replace cassandra bin files? On Wed, Sep 17, 2014 at 3:52 PM, Alex Popescu wrote: > Apologies for the late reply: the 2.1.x version of the C#, Java and Python > DataStax drivers support the new Cassandra 2.1 version. > > Here's the quick list of links
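Replacing the binaries alone is not the whole story. A rough per-node rolling-upgrade sketch; always check NEWS.txt for version-specific steps:

    nodetool drain            # flush memtables and stop accepting writes
    # stop Cassandra, install the 2.1 binaries, merge any cassandra.yaml changes, start Cassandra
    nodetool upgradesstables  # rewrite sstables into the new on-disk format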

Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
sorry, about 300k+ On Wed, Sep 17, 2014 at 10:56 PM, Yatong Zhang wrote: > no, I am running a 64-bit JVM. But I have many sstable files, about 30k+ > > On Wed, Sep 17, 2014 at 10:50 PM, graham sanderson > wrote: > >> Are you running on a 32 bit JVM? >> >> On

Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
no, I am running a 64-bit JVM. But I have many sstable files, about 30k+ On Wed, Sep 17, 2014 at 10:50 PM, graham sanderson wrote: > Are you running on a 32 bit JVM? > > On Sep 17, 2014, at 9:43 AM, Yatong Zhang wrote: > > Hi there, > > I am using leveled compaction s

java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
Hi there, I am using the leveled compaction strategy and have many sstable files. The error occurred during startup, so any ideas about this? > ERROR [FlushWriter:4] 2014-09-17 22:36:59,383 CassandraDaemon.java (line > 199) Exception in thread Thread[FlushWriter:4,5,main] > java.lang.OutOfMemoryError:
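'unable to create new native thread' usually points at the OS per-user process/thread limit rather than the Java heap. A sketch of the checks, with illustrative values and <cassandra_pid> as a placeholder:

    ulimit -u                          # max user processes in the current shell
    cat /proc/<cassandra_pid>/limits   # limits of the running Cassandra process
    # raise the limit in /etc/security/limits.conf, e.g.:
    #   cassandra  soft  nproc  32768
    #   cassandra  hard  nproc  32768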

Re: hs_err_pid3013.log, out of memory?

2014-09-15 Thread Yatong Zhang
> Sep 16 09:06:59 storage6 kernel: [] > do_notify_resume+0x90/0xc0 > Sep 16 09:06:59 storage6 kernel: [] int_signal+0x12/0x17 > Sep 16 09:06:59 storage6 kernel: INFO: task java:4973 blocked for more > than 120 seconds. > On Tue, Sep 16, 2014 at 9:00 AM, Robert Coli wrote:

hs_err_pid3013.log, out of memory?

2014-09-15 Thread Yatong Zhang
Hi there, I just encountered an error which left a log '/hs_err_pid3013.log'. So is there a way to solve this? # > # There is insufficient memory for the Java Runtime Environment to > continue. > # Native memory allocation (malloc) failed to allocate 12288 bytes for > committing reserved memory.
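A native malloc failure on a node with many sstables is often the kernel mmap limit rather than the heap. A sketch of checks, using the value from the DataStax recommended settings:

    sysctl vm.max_map_count              # the default 65530 is often too low for many sstables
    sysctl -w vm.max_map_count=1048575   # DataStax-recommended value; persist it in /etc/sysctl.conf
    free -m                              # rule out the box simply being out of memory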

How to free disk space after decreasing replication factor?

2014-08-23 Thread Yatong Zhang
Hi, I am using leveled compaction and I changed the replication factor from 3 to 2, but after a few days the disk space still wasn't freed. I tried to trigger a compaction or cleanup, but it didn't seem to have any effect.
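For completeness: after lowering the replication factor, each node still holds replicas of ranges it no longer owns, and ordinary compaction will not remove them. The usual answer is cleanup, run node by node (mykeyspace is a placeholder):

    nodetool cleanup mykeyspace   # run on every node, one at a time
    nodetool compactionstats      # cleanup runs as a compaction; confirm it is progressing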

Re: Really need some advices on large data considerations

2014-05-16 Thread Yatong Zhang
Hi Michael, thanks for the reply. I would RAID0 all those data drives, personally, and give up managing them > separately. They are on multiple PCIe controllers, one drive per channel, > right? > RAID 0 is a simple way to go, but one disk failure can take the whole volume down, so I am afraid rai

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-16 Thread Yatong Zhang
; wrote: > >> Running >> >> #> cat /proc/$(cat /var/run/cassandra.pid)/limits >> >> as root or your cassandra user will tell you what limits it's actually >> running with. >> >> >> >> >> On Sun, May 4, 2014 at 10:12 PM, Yatong Zh

Re: Really need some advices on large data considerations

2014-05-14 Thread Yatong Zhang
TB. > > cheers > Aaron > > - > Aaron Morton > New Zealand > @aaronmorton > > Co-Founder & Principal Consultant > Apache Cassandra Consulting > http://www.thelastpickle.com > > On 9/05/2014, at 12:09 pm, Yatong Zhang wrote: > > Hi,

Really need some advices on large data considerations

2014-05-13 Thread Yatong Zhang
Hi, We're going to deploy a large Cassandra cluster at the PB level. Our scenario would be: 1. Lots of writes, about 150 writes/second on average, and about 300K size per write. 2. Relatively few reads 3. Our data will never be updated 4. But we will delete old data periodically to free space

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-13 Thread Yatong Zhang
} > I don't have much time to do more research to figure out whether this has side effects or not, but it is a solution for me. I hope this would be useful to those who have had similar issues. On Sun, May 4, 2014 at 5:10 PM, Yatong Zhang wrote: > I am using the latest 2.0

Re: Is updating the compaction strategy from 'sized tiered' to 'leveled' automatic, or does it need to be done manually? [heur]

2014-05-06 Thread Yatong Zhang
; to > 'leveled' automatic or need to be done manually? [heur] > > > > I mean insert/write data. When data fills memtable, memtable is flushed to > disk as sstable, when new sstable is created, Cassandra will check if > compaction is needed and triggers one. > &

Re: Is updating the compaction strategy from 'sized tiered' to 'leveled' automatic, or does it need to be done manually?

2014-05-05 Thread Yatong Zhang
I tried to run 'nodetool compact' (or with keyspace and cfname), but it seemed not to work. The command hung there and nothing happened. But I noticed there were about 3+ more pending tasks when using 'nodetool compactionstats' On Tue, May 6, 2014 at 1:45 AM, Robert Coli wrote: > On Mon, May 5,

Re: Is updating the compaction strategy from 'sized tiered' to 'leveled' automatic, or does it need to be done manually?

2014-05-04 Thread Yatong Zhang

Is there a way to stop cassandra compacting some large sstables?

2014-05-04 Thread Yatong Zhang
Hi, I changed the compaction strategy from 'size tiered' to 'leveled', but after running for a few days C* still tries to compact some old large sstables, say: 1. I have 6 disks per node and one data directory per disk 2. There are some old huge sstables generated when using 'sized tiered' compaction s
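There is no built-in way in 2.0 to exclude specific sstables from compaction, but running compactions can be cancelled. A sketch:

    nodetool stop COMPACTION   # cancels currently running compactions (they may be rescheduled later)
    nodetool compactionstats   # see what is still pending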

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
I was running 'repair' when the error occurred. And just a few days before, I had changed the compaction strategy to 'leveled'. Don't know if this helps On Mon, May 5, 2014 at 1:10 PM, Yatong Zhang wrote: > Cassandra is running as root > > [root@storage5 ~]# ps aux | gre

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
CassandraDaemon > On Mon, May 5, 2014 at 1:02 PM, Philip Persad wrote: > Have you tried running "ulimit -a" as the Cassandra user instead of as > root? It is possible that your configured a high file limit for root but > not for the user running the Cassandra process. >

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
> > [root@storage5 ~]# lsof -n | grep java | wc -l > 5103 > [root@storage5 ~]# lsof | wc -l > 6567 It's mentioned in a previous mail :) On Mon, May 5, 2014 at 9:03 AM, nash wrote: > The lsof command or /proc can tell you how many open files it has. How > many is it? > > --nash >

Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
Hi there, After I changed the compaction strategy to 'leveled', one of my nodes keeps reporting 'too many open files'. But I have done some configuration following: http://www.datastax.com/docs/1.1/install/recommended_settings and http://www.datastax.com/docs/1.1/troubleshooting/index#toomany I am usin
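A sketch of the usual limits fix for the user that runs Cassandra, with illustrative values (the DataStax pages linked above give the recommended numbers):

    # append to /etc/security/limits.conf, then log back in or restart the service:
    #   cassandra  soft  nofile  100000
    #   cassandra  hard  nofile  100000
    # verify against the live process, not just the shell:
    cat /proc/$(cat /var/run/cassandra.pid)/limits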

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread Yatong Zhang
ary files (-tmp-Data.db) are not properly > cleaned up. > > What is your Cassandra version ? Can you do a "nodetool tpstats" and look > into Cassandra logs to see whether there are issues with compactions ? > > I've found one discussion thread that have the same sympt

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread Yatong Zhang
perform repair, not even for one time >> >> >> On Sun, May 4, 2014 at 2:37 PM, DuyHai Doan wrote: >> >>> Hello Yatong >>> >>> "If I restart the node or using 'cleanup', it will resume to normal." >>> -->

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread Yatong Zhang
manual repair frequently ? > > > > > > On Sun, May 4, 2014 at 1:51 AM, Yatong Zhang wrote: > >> My Cassandra cluster has plenty of free space, for now only about 30% of >> space are used >> >> >> On Sun, May 4, 2014 at 6:36 AM, Yatong Zhang wrote:

Is updating the compaction strategy from 'sized tiered' to 'leveled' automatic, or does it need to be done manually?

2014-05-03 Thread Yatong Zhang
Hi there, I am updating the compaction strategy from 'sized tiered' to 'leveled', and http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra says: When updating an existing column family, reads and writes can continue as > usual while leveling of existing sstables is perfor
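For reference, the switch itself is a single schema change; a sketch with placeholder names (160 MB is the LCS default sstable size in 2.0):

    echo "ALTER TABLE mykeyspace.mytable WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};" | cqlsh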

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-03 Thread Yatong Zhang
My Cassandra cluster has plenty of free space; for now only about 30% of the space is used On Sun, May 4, 2014 at 6:36 AM, Yatong Zhang wrote: > Hi there, > > It was strange that the 'xxx-tmp-xxx.db' file kept increasing until > Cassandra threw exceptions with 'No

Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-03 Thread Yatong Zhang
Hi there, It was strange that the 'xxx-tmp-xxx.db' file kept increasing until Cassandra threw exceptions with 'No space left on device'. I am using CQL 3 to create a table to store data of about 200K ~ 500K per record. I have 6 hard disks per node and Cassandra was configured with 6 data directories (e
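To see how much space the in-flight temporary sstables are holding, a quick check (paths are illustrative; orphaned -tmp- files are only safe to remove while the node is stopped):

    find /data1/cass /data2/cass /data3/cass /data4/cass /data5/cass /data6/cass -name '*-tmp-*' -exec ls -lh {} +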

Re: Can Cassandra work efficiently with multiple data directories on multiple disks?

2014-04-30 Thread Yatong Zhang
Thank you Arindam, that helps On Wed, Apr 30, 2014 at 5:32 PM, Arindam Barua wrote: > > > This thread should answer your questions: > > > http://stackoverflow.com/questions/15925549/how-cassandra-split-keyspace-data-when-multiple-dirctories-found > > > > > >

Can Cassandra work efficiently with multiple data directories on multiple disks?

2014-04-30 Thread Yatong Zhang
Hi there, I have the following configuration: data_file_directories: > - /data1/cass > - /data2/cass > - /data3/cass > - /data4/cass > - /data5/cass > - /data6/cass > and each directory resides on a separate standalone hard disk. My questions are: 1. Will Cassandra split