Re: How much disk is needed to compact Leveled compaction?

2015-04-07 Thread Jean Tremblay
I am only using LeveledCompactionStrategy, and as I described in my original 
mail, I don’t understand why C* is complaining that it cannot compact when I 
have more than 40% free disk space.







Re: How much disk is needed to compact Leveled compaction?

2015-04-06 Thread DuyHai Doan
If you have SSDs, you can afford to switch to the leveled compaction strategy,
which requires much less free space than 50% of the current dataset.
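For example, switching an existing table is a single schema change. A minimal
sketch, using illustrative keyspace/table names and the default 160 MB sstable
size:

cqlsh> ALTER TABLE mykeyspace.mytable
   ... WITH compaction = {'class': 'LeveledCompactionStrategy',
   ...                    'sstable_size_in_mb': 160};

Existing SSTables are then reorganised into levels by background compaction,
which itself takes some time and temporary space.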




Re: How much disk is needed to compact Leveled compaction?

2015-04-06 Thread Bryan Holladay
What other storage-impacting commands or nuances do you have to consider
when you switch to leveled compaction? For instance, the nodetool cleanup docs say:

Running the nodetool cleanup command causes a temporary increase in disk
space usage proportional to the size of your largest SSTable.

Are SSTables smaller with leveled compaction, making this a non-issue?

How can you determine what the new threshold for storage space is?
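
One way to sanity-check the largest-SSTable figure on a node is something like
this (hypothetical data path, keyspace and table names):

# largest data files for one table, biggest first (path layout is illustrative)
ls -lhS /var/lib/cassandra/data/mykeyspace/mytable*/*-Data.db | head -5
# per-table SSTable count, plus the per-level distribution when LCS is in use
nodetool cfstats mykeyspace.mytable

If leveled compaction is keeping up, individual SSTables stay close to
sstable_size_in_mb, so cleanup's temporary overhead should be small; large
leftover SSTables from an earlier size-tiered setup would be the main exception.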

Thanks,
Bryan




Re: How much disk is needed to compact Leveled compaction?

2015-04-06 Thread Ali Akhtar
I may have misunderstood, but it seems that he was already using
LeveledCompaction.





How much disk is needed to compact Leveled compaction?

2015-04-05 Thread Jean Tremblay
Hi,
I have a cluster of 5 nodes. We use Cassandra 2.1.3.

The 5 nodes each use about 50-57% of their 1 TB SSD.
One node managed to compact all its data; during one compaction it used almost 
100% of the drive. The other nodes refuse to continue compacting, claiming that 
there is not enough disk space.

From the documentation, LeveledCompactionStrategy should be able to compact my 
data, or at least that is how I understand it.

Size-tiered compaction requires at least as much free disk space for 
compaction as the size of the largest column family. Leveled compaction needs 
much less space for compaction, only 10 * sstable_size_in_mb. However, even if 
you’re using leveled compaction, you should leave much more free disk space 
available than this to accommodate streaming, repair, and snapshots, which can 
easily use 10GB or more of disk space. Furthermore, disk performance tends to 
decline after 80 to 90% of the disk space is used, so don’t push the 
boundaries.
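
(With the default sstable_size_in_mb of 160, that nominal figure works out to 
only 10 * 160 MB = 1.6 GB of temporary space per leveled compaction task.)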

This is the disk usage. Node 4 is the only one that could compact everything.
node0: /dev/disk1 931Gi 534Gi 396Gi 57% /
node1: /dev/disk1 931Gi 513Gi 417Gi 55% /
node2: /dev/disk1 931Gi 526Gi 404Gi 57% /
node3: /dev/disk1 931Gi 507Gi 424Gi 54% /
node4: /dev/disk1 931Gi 475Gi 456Gi 51% /

When I try to compact the other ones I get this:

objc[18698]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
error: Not enough space for compaction, estimated sstables = 2894, expected write size = 485616651726
-- StackTrace --
java.lang.RuntimeException: Not enough space for compaction, estimated sstables = 2894, expected write size = 485616651726
at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace(CompactionTask.java:293)
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:127)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:76)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:512)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

I did not set sstable_size_in_mb; I use the 160 MB default.
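
Rough arithmetic on the numbers in that error, assuming the 160 MB default 
applies to every SSTable involved:

2894 estimated SSTables * 160 MiB ≈ 463,040 MiB ≈ 452 GiB
expected write size = 485,616,651,726 bytes ≈ 452 GiB

So this one task wants to rewrite roughly the whole dataset at once. That also 
matches the free-space figures above: 452 GiB is more than the 396-424 Gi free 
on nodes 0-3, but just under the 456 Gi free on node 4, the only node that 
finished.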

Is it normal that compaction needs so much disk space? What would be the best 
way to overcome this problem?

Thanks for your help



Re: How much disk is needed to compact Leveled compaction?

2015-04-05 Thread daemeon reiydelle
You appear to have multiple java binaries in your path. That needs to be
resolved.
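
A quick way to check is to list every java binary involved (the second command
is macOS-specific, matching the JavaLaunchHelper warning above):

# every java found on the PATH, in order
which -a java
# all installed JVMs registered with the system (macOS)
/usr/libexec/java_home -V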

sent from my mobile
Daemeon C.M. Reiydelle
USA 415.501.0198
London +44.0.20.8144.9872