[ https://issues.apache.org/jira/browse/CASSANDRA-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383628#comment-14383628 ]

Erik Forsberg commented on CASSANDRA-9036:
------------------------------------------

After applying patch:

{noformat}
 INFO [CompactionExecutor:12] 2015-03-27 10:16:38,930 CompactionManager.java (line 564) Cleaning up SSTableReader(path='/cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db')
DEBUG [CompactionExecutor:12] 2015-03-27 10:16:39,423 Directories.java (line 265) removing candidate /cassandra, usable=732825808896, requested=933404582552
ERROR [CompactionExecutor:12] 2015-03-27 10:16:39,424 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:12,1,main]
java.io.IOException: disk full
        at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:567)
        at org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:63)
        at org.apache.cassandra.db.compaction.CompactionManager$5.perform(CompactionManager.java:281)
        at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:225)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
{noformat}
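Reading that DEBUG line, the space check appears to drop the only data directory because requested exceeds usable, after which the cleanup fails with the IOException above. Restated as a plain comparison (my own shell sketch of what the log suggests, not the actual Java code):

{noformat}
# values copied from the DEBUG line above
usable=732825808896
requested=933404582552
# if requested exceeds usable, the directory is dropped as a candidate
[ "$usable" -ge "$requested" ] || echo "removing candidate /cassandra -> disk full"
{noformat}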

The number it reports as usable matches the output from df:

{noformat}
# df /cassandra
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sda7      1893666392 1178016188 715650204  63% /cassandra
{noformat}
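In fact the match is exact once the Available column (1K-blocks) is converted to bytes:

{noformat}
# echo $((715650204 * 1024))
732825808896
{noformat}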

The number it reports as requested, however, doesn't correspond at all to the actual
on-disk size of the file:

{noformat}
# ls -l /cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db
-rw-r--r-- 1 cassandra cassandra 234667877465 Mar 21 04:42 /cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db
{noformat}
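As a rough comparison (my own arithmetic, not taken from the code): requested is about four times the on-disk size of the Data.db file:

{noformat}
# awk 'BEGIN { printf "%.2f\n", 933404582552 / 234667877465 }'
3.98
{noformat}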

The file is compressed; we're using DeflateCompressor:

{noformat}
# sstablemetadata /cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db | grep Compr
Compression ratio: 0.21589549046598225
{noformat}
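Assuming that ratio is on-disk bytes over uncompressed bytes, the uncompressed size of the Data.db file comes out to roughly a terabyte:

{noformat}
# awk 'BEGIN { printf "%.3e\n", 234667877465 / 0.21589549046598225 }'
1.087e+12
{noformat}

That's the same order of magnitude as the requested value (~869 GiB) and well above the usable space (~683 GiB), whereas the actual on-disk size (~219 GiB) would fit with plenty of room to spare.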

No quota. Filesystem is XFS. 

Is the estimation of space needed for compaction taking compression into 
account? 

> "disk full" when running cleanup (on a far from full disk)
> ----------------------------------------------------------
>
>                 Key: CASSANDRA-9036
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9036
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Erik Forsberg
>            Assignee: Robert Stupp
>
> I'm trying to run cleanup, but get this:
> {noformat}
>  INFO [CompactionExecutor:18] 2015-03-25 10:29:16,355 CompactionManager.java (line 564) Cleaning up SSTableReader(path='/cassandra/production/Data_daily/production-Data_daily-jb-4345750-Data.db')
> ERROR [CompactionExecutor:18] 2015-03-25 10:29:16,664 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:18,1,main]
> java.io.IOException: disk full
>         at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:567)
>         at org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:63)
>         at org.apache.cassandra.db.compaction.CompactionManager$5.perform(CompactionManager.java:281)
>         at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:225)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Now that's odd, since:
> * Disk has some 680G left
> * The sstable it's trying to cleanup is far less than 680G:
> {noformat}
> # ls -lh *4345750*
> -rw-r--r-- 1 cassandra cassandra  64M Mar 21 04:42 production-Data_daily-jb-4345750-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra 219G Mar 21 04:42 production-Data_daily-jb-4345750-Data.db
> -rw-r--r-- 1 cassandra cassandra 503M Mar 21 04:42 production-Data_daily-jb-4345750-Filter.db
> -rw-r--r-- 1 cassandra cassandra  42G Mar 21 04:42 production-Data_daily-jb-4345750-Index.db
> -rw-r--r-- 1 cassandra cassandra 5.9K Mar 21 04:42 production-Data_daily-jb-4345750-Statistics.db
> -rw-r--r-- 1 cassandra cassandra  81M Mar 21 04:42 production-Data_daily-jb-4345750-Summary.db
> -rw-r--r-- 1 cassandra cassandra   79 Mar 21 04:42 production-Data_daily-jb-4345750-TOC.txt
> {noformat}
> Sure, it's large, but it's not 680G. 
> No other compactions are running on that server. I'm getting this on 12 / 56 
> servers right now. 
> Could it be some bug in the calculation of the expected size of the new 
> sstable, perhaps? 



