No, it does not have to read commit log segments except on log replay.
On Fri, Sep 25, 2009 at 11:55 PM, Igor Katkov <[email protected]> wrote:
> I checked out and built the 0.4 branch. It's all the same, the files stay.
> I also noticed a side effect: as the number of commit log segments
> grows, server response time grows too.
> I assume this is because Cassandra now has to read through some of
> these files on each read/write request.
>
> On Fri, Sep 25, 2009 at 4:53 PM, Jonathan Ellis <[email protected]> wrote:
>> This is fixed on the 0.4 branch (but not in trunk, yet).
>>
>> On Fri, Sep 25, 2009 at 1:57 PM, Jonathan Ellis <[email protected]> wrote:
>>> https://issues.apache.org/jira/browse/CASSANDRA-455 will address
>>> FlushPeriod not working.
>>>
>>> On Fri, Sep 25, 2009 at 1:33 PM, Igor Katkov <[email protected]> wrote:
>>>> I tried the latest stable version, 0.3, and commit log segments are
>>>> in fact deleted.
>>>> Tried it again on 0.4 with periodic flush set to 1 minute
>>>> (FlushPeriodInMinutes="1") => it's all the same, the files remain
>>>> there forever.
>>>>
>>>> I also noticed that there are other implicit CFs; can these prevent
>>>> logs from being deleted?
>>>> DEBUG - adding Channels as 0
>>>> DEBUG - adding LocationInfo as 1
>>>> DEBUG - adding HintsColumnFamily as 2
>>>>
>>>> On Thu, Sep 24, 2009 at 11:07 PM, Igor Katkov <[email protected]> wrote:
>>>>> In my case commit log segments are never deleted (unless I restart
>>>>> the server), so they grow and grow and eventually the host runs out
>>>>> of space.
>>>>>
>>>>> Any ideas how to fix it?
>>>>>
>>>>> On Thu, Sep 24, 2009 at 8:22 PM, Jonathan Ellis <[email protected]> wrote:
>>>>>> When all the data from a given commit log segment has been flushed
>>>>>> as sstables, that segment can be deleted. So if you do a bunch of
>>>>>> inserts and then stop, it's normal to have some commitlogs around
>>>>>> indefinitely. All CFs are flushed on server restart, and the log
>>>>>> segments can then be removed, or you can add a periodic flush to
>>>>>> the CF definition so it will flush even when there has not been any
>>>>>> extra activity.
>>>>>>
>>>>>> (This last part doesn't quite work as designed right now, but we're
>>>>>> working on a fix: https://issues.apache.org/jira/browse/CASSANDRA-455)
>>>>>>
>>>>>> -Jonathan
>>>>>>
>>>>>> On Thu, Sep 24, 2009 at 2:28 PM, Igor Katkov <[email protected]> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm using Cassandra 0.4.0 rc2.
>>>>>>>
>>>>>>> I can't make Cassandra wipe commit logs. They just keep
>>>>>>> accumulating, no matter what settings I play with in the config
>>>>>>> file.
>>>>>>>
>>>>>>> I insert 200,000 keys: one CF, one column, the value is 170 KB, on
>>>>>>> a single Cassandra node.
>>>>>>> MemtableSizeInMB = 32
>>>>>>> MemtableObjectCountInMillions = 0.1
>>>>>>>
>>>>>>> What am I doing wrong?
>>>>>>>
>>>>>>> Please correct me if I misunderstood how things work:
>>>>>>>
>>>>>>> As soon as I insert a key-column-value, it gets written to memory;
>>>>>>> as soon as [data size or # of objects] (see the settings above) is
>>>>>>> reached, memory gets flushed to a commit log file. The very fact
>>>>>>> that I have a growing number of commit log files tells me that
>>>>>>> this flushing does happen.
>>>>>>>
>>>>>>> Now, commit log records have to be transferred to the data and
>>>>>>> index files. I'm sure this happens as well, since my data folder
>>>>>>> is also growing; I see a lot of *.db files there.
>>>>>>> According to
>>>>>>> http://perspectives.mvdirona.com/2009/02/07/FacebookCassandraArchitectureAndDesign.aspx
>>>>>>> commit logs have to be wiped as soon as all their column families
>>>>>>> are pushed to disk.
>>>>>>> Somehow this does NOT happen, even though I have only one column
>>>>>>> family defined in the conf file.
>>>>>>>
>>>>>>> Conf file - http://www.katkovonline.com/storage-conf.xml
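To make the deletion rule from Jonathan's Sep 24 reply concrete, here is a
minimal Java sketch. The class and method names are hypothetical, not
Cassandra's actual internals: each segment tracks which CFs still have
unflushed writes recorded in it, and a segment may be removed only once
that set is empty.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Set;

    // Hypothetical names for illustration only; not Cassandra's code.
    class CommitLogSegment {
        final String path;
        // CFs with writes recorded here that are not yet in an sstable.
        final Set<String> dirtyColumnFamilies = new HashSet<String>();

        CommitLogSegment(String path) {
            this.path = path;
        }
    }

    class CommitLog {
        private final List<CommitLogSegment> segments =
                new ArrayList<CommitLogSegment>();
        private final CommitLogSegment current =
                new CommitLogSegment("CommitLog-1.log");

        CommitLog() {
            segments.add(current);
        }

        // Every write marks the current segment dirty for that CF.
        void append(String columnFamily) {
            current.dirtyColumnFamilies.add(columnFamily);
        }

        // Called when a memtable for `columnFamily` is flushed to disk.
        // (Simplified: a real implementation would only clear segments
        // written entirely before the flush position.)
        void onMemtableFlush(String columnFamily) {
            Iterator<CommitLogSegment> it = segments.iterator();
            while (it.hasNext()) {
                CommitLogSegment segment = it.next();
                segment.dirtyColumnFamilies.remove(columnFamily);
                if (segment != current
                        && segment.dirtyColumnFamilies.isEmpty()) {
                    // In the real system the file is unlinked here.
                    System.out.println("deleting " + segment.path);
                    it.remove();
                }
            }
        }
    }

Under this rule, a segment stays pinned by any CF that wrote to it and has
not flushed since. That is why a periodic flush, or the flush of all CFs on
restart, is what finally frees the files.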

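For reference, the settings discussed in this thread live in
storage-conf.xml. A sketch of the relevant fragments follows, using the
values quoted above. The FlushPeriodInMinutes attribute and the two
memtable elements are the names quoted in this thread; the CF name
"Standard1", the ColumnType attribute, and the enclosing structure are
abbreviated from memory and may differ between 0.3/0.4 versions.

    <!-- Periodic flush on the CF definition: with this set, the CF
         flushes every minute even with no write activity, so old commit
         log segments become deletable. (Per CASSANDRA-455, this did not
         yet work as designed in 0.4.0 rc2.) -->
    <ColumnFamily Name="Standard1" ColumnType="Standard"
                  FlushPeriodInMinutes="1"/>

    <!-- Memtable flush thresholds from the original message: flush when
         a memtable reaches 32 MB or 100,000 objects, whichever comes
         first. -->
    <MemtableSizeInMB>32</MemtableSizeInMB>
    <MemtableObjectCountInMillions>0.1</MemtableObjectCountInMillions>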