Missed that in the history, cheers.
A
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 23 Jun 2011, at 20:26, Sylvain Lebresne wrote:
> As Jonathan said earlier, you are hitting
> https://issues.apache.org/jira/browse/CASSANDRA-2765
>
> This will be fixed in 0.8.1.
On Thu, Jun 23, 2011 at 10:23 AM, Jonathan Colby wrote:
> A compaction will be triggered when "min" number of same-sized SSTable files
> are found. So what's actually the purpose of the "max" part of the
> threshold?
It says: if there are more than "max" same-sized SSTable files, only "max"
of them will be included in a single compaction.
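In other words, "min" and "max" bound how many files one minor compaction may pick from a bucket. Here is a minimal Python sketch of that selection under a simplified model; the function and file names are illustrative assumptions, not Cassandra's actual code:

```python
# Illustrative sketch of min/max compaction thresholds (not Cassandra's
# actual code): a "bucket" is a list of similarly sized SSTable names.

def files_to_compact(bucket, min_threshold=4, max_threshold=32):
    """Pick the files one minor compaction would take from a bucket."""
    if len(bucket) < min_threshold:
        return []                  # too few similar files: no compaction
    return bucket[:max_threshold]  # cap a single compaction at "max" files

bucket = [f"sstable-{i}.db" for i in range(40)]
print(len(files_to_compact(bucket)))  # 32: "max" caps one compaction
print(files_to_compact(bucket[:3]))   # []: below "min", nothing happens
```

With the defaults, a bucket of 40 similar files would be compacted 32 at a time rather than all at once.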
As Jonathan said earlier, you are hitting
https://issues.apache.org/jira/browse/CASSANDRA-2765
This will be fixed in 0.8.1, which is currently under a vote and should be
released soon (say, beginning of next week, maybe sooner).
--
Sylvain
2011/6/23 Héctor Izquierdo Seliva:
> Hi Aaron. Reverted back to 4-32.
A compaction will be triggered when "min" number of same-sized SSTable files
are found. So what's actually the purpose of the "max" part of the
threshold?
On Jun 23, 2011, at 12:55 AM, aaron morton wrote:
> Setting them to 2 and 2 means compaction can only ever compact 2 files at
> a time
Btw, if I restart the node, then it happily proceeds with compaction.
On Thu, 23-06-2011 at 10:02 +0200, Héctor Izquierdo Seliva wrote:
> Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
> any minor compaction. Ran compact by hand, and it picked only two
> sstables.
>
>
Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
any minor compaction. Ran compact by hand, and it picked only two
sstables.
Here's the ls before:
http://pastebin.com/xDtvVZvA
And this is the ls after:
http://pastebin.com/DcpbGvK6
Any suggestions?
On Thu, 23-06-2011 at
Setting them to 2 and 2 means compaction can only ever compact 2 files at a time,
so it will be worse off.
Let's try the following:
- restore the compaction settings to the default 4 and 32
- run `ls -lah` in the data dir and grab the output
- run `nodetool flush`; this will trigger minor compaction
Hi All. I set the compaction threshold at minimum 2, maximum 2 and try
to run compact, but it's not doing anything. There are over 69 sstables
now, read performance is horrible, and it's taking an insane amount of
space. Maybe I don't quite get how the new per bucket stuff works, but I
think this i
You may also have been running into
https://issues.apache.org/jira/browse/CASSANDRA-2765. We'll have a fix
for this in 0.8.1.
2011/6/13 Héctor Izquierdo Seliva:
> I was already way over the minimum. There were 12 sstables. Also, is
> there any reason why scrub got stuck? I did not see anything in the logs.
As Terje already said in this thread, the threshold is per bucket
(group of similarly sized sstables) not per CF.
2011/6/13 Héctor Izquierdo Seliva:
> I was already way over the minimum. There were 12 sstables. Also, is
> there any reason why scrub got stuck? I did not see anything in the
> logs.
I was already way over the minimum. There were 12 sstables. Also, is
there any reason why scrub got stuck? I did not see anything in the
logs. Via JMX I saw that the scrubbed bytes equaled the size of one of the
sstables, and it stuck there for a couple of hours.
El lun, 13-06-2011 a las 22:55 +0900,
That most likely happened just because after scrub you had new files and got
over the "4" file minimum limit.
https://issues.apache.org/jira/browse/CASSANDRA-2697
is the bug report.
2011/6/13 Héctor Izquierdo Seliva
> Hi All. I found a way to be able to compact. I have to call scrub on
> the column family.
Hi All. I found a way to be able to compact. I have to call scrub on
the column family. Then scrub gets stuck forever. I restart the node,
and voila! I can compact again without any message about not having
enough space. This looks like a bug to me. What info would be needed to
file a report? This
On Fri, 10-06-2011 at 23:40 +0900, Terje Marthinussen wrote:
> Yes, which is perfectly fine for a short time if all you want is to
> compact to one file for some reason.
>
>
> I run min_compaction_threshold = 2 on one system here with SSD. No
> problems with the more aggressive disk utilization.
Yes, which is perfectly fine for a short time if all you want is to compact
to one file for some reason.
I run min_compaction_threshold = 2 on one system here with SSD. No problems
with the more aggressive disk utilization on the SSDs from the extra
compactions; reducing disk space is much more important.
12 sounds perfectly fine in this case.
4 buckets, 3 in each bucket; the default minimum threshold per bucket is 4.
Terje
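Terje's arithmetic can be checked with a quick sketch; the exact bucket layout below is hypothetical, chosen only to match the counts described:

```python
# Quick check of the arithmetic above: 12 SSTables spread across 4 size
# buckets with 3 files each never reach the default per-bucket minimum of 4,
# so no minor compaction triggers. The bucket layout here is hypothetical.

MIN_THRESHOLD = 4
buckets = [3, 3, 3, 3]  # files per size bucket: 12 SSTables in total

eligible = [n for n in buckets if n >= MIN_THRESHOLD]
print(sum(buckets))  # 12 SSTables in the column family
print(eligible)      # []: no bucket meets the minimum, so nothing compacts
```

This is why "way over the minimum" at the column-family level still compacts nothing: the threshold is evaluated per bucket, not per CF.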
2011/6/10 Héctor Izquierdo Seliva
>
>
> > On Fri, 10-06-2011 at 20:21 +0900, Terje Marthinussen wrote:
> > bug in the 0.8.0 release version.
> >
> >
> > Cassandra splits the sstables depending on size and tries to find (by
> > default) at least 4 files of similar size.
On Fri, 10-06-2011 at 20:21 +0900, Terje Marthinussen wrote:
> bug in the 0.8.0 release version.
>
>
> Cassandra splits the sstables depending on size and tries to find (by
> default) at least 4 files of similar size.
>
>
> If it cannot find 4 files of similar size, it logs that message in 0.8.0.
Hi Terje,
There are 12 SSTables, so I don't think that's the problem. I will try
anyway and see what happens.
On Fri, 10-06-2011 at 20:21 +0900, Terje Marthinussen wrote:
> bug in the 0.8.0 release version.
>
>
>
> Cassandra splits the sstables depending on size and tries to find (by
> default) at least 4 files of similar size.
But decreasing min_compaction_threshold will affect minor
compaction frequency, won't it?
maki
2011/6/10 Terje Marthinussen:
> bug in the 0.8.0 release version.
> Cassandra splits the sstables depending on size and tries to find (by
> default) at least 4 files of similar size.
> If it cannot find 4 files of similar size, it logs that message in 0.8.0.
bug in the 0.8.0 release version.
Cassandra splits the sstables depending on size and tries to find (by
default) at least 4 files of similar size.
If it cannot find 4 files of similar size, it logs that message in 0.8.0.
You can try to reduce the minimum required files for compaction and it will
then compact with fewer similar-sized files.
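The bucketing idea Terje describes can be sketched roughly as follows. This assumes a simplified "within a factor of two of the bucket average" grouping rule; the function names, the ratio, and the sample sizes are illustrative, not Cassandra's actual implementation:

```python
# Hedged sketch of size-tiered bucketing as described above: group SSTables
# whose sizes are within a factor of two of a bucket's running average, then
# compact only buckets holding at least `min_threshold` files. The grouping
# rule and names are simplifications, not Cassandra's real code.

def bucket_by_size(sizes, ratio=2.0):
    buckets = []  # each bucket is a list of similar sizes (in MB, say)
    for size in sorted(sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if avg / ratio <= size <= avg * ratio:
                b.append(size)
                break
        else:
            buckets.append([size])  # no similar bucket found: start a new one
    return buckets

def compactable(buckets, min_threshold=4):
    return [b for b in buckets if len(b) >= min_threshold]

sizes = [10, 11, 12, 100, 105, 4000]
buckets = bucket_by_size(sizes)
print(len(buckets))          # 3 size tiers
print(compactable(buckets))  # []: no tier reaches the 4-file minimum
```

Lowering min_threshold to 2 in this sketch would make the two small tiers eligible, which mirrors the suggestion in the thread.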