Done
https://issues.apache.org/jira/browse/CASSANDRA-604
On Fri, Dec 4, 2009 at 4:01 PM, Jonathan Ellis wrote:
> Please do.
Please do.
On Fri, Dec 4, 2009 at 5:53 PM, Ramzi Rabah wrote:
> Thanks Jonathan.
> Should I open a bug for this?
>
> Ray
Thanks Jonathan.
Should I open a bug for this?
Ray
On Fri, Dec 4, 2009 at 3:47 PM, Jonathan Ellis wrote:
Starting with fresh directories with no data and trying to do simple
inserts, I could not reproduce it *sigh*. Nothing is simple :(, so I
decided to dig deeper into the code.
I was looking at the code for compaction, and this is a very noob
concern, so please bear with me if I'm way off, this code
Okay, in that case it doesn't hurt to update just in case but I think
you're going to need that test case. :)
On Fri, Dec 4, 2009 at 2:45 PM, Ramzi Rabah wrote:
> I have a two week old version of trunk. Probably need to update it to
> latest build.
I have a two week old version of trunk. Probably need to update it to
latest build.
On Fri, Dec 4, 2009 at 12:34 PM, Jonathan Ellis wrote:
> Are you testing trunk? If not, you should check that first to see if
> it's already fixed.
Are you testing trunk? If not, you should check that first to see if
it's already fixed.
On Fri, Dec 4, 2009 at 1:55 PM, Ramzi Rabah wrote:
> Just to be clear what I meant is that I ran the deletions and
> compaction with GCGraceSeconds set to 1 hour, so there was enough time
> for the tombstones to expire.
Just to be clear what I meant is that I ran the deletions and
compaction with GCGraceSeconds set to 1 hour, so there was enough time
for the tombstones to expire.
Anyway I will try to make a simpler test case to hopefully reproduce
this, and I will share the code if I can reproduce.
Ray
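A simpler test case along these lines could first be expressed against a toy in-memory model (a hedged sketch: FakeStore and every name in it are hypothetical stand-ins, not Cassandra's client API):

```python
# Hypothetical sketch of the "simpler test case" described above:
# insert a batch of keys, remove them all, simulate a major compaction
# after GCGraceSeconds has passed, and assert that nothing survives.

GC_GRACE_SECONDS = 3600  # one hour, matching the setting in this thread

class FakeStore:
    def __init__(self):
        self.rows = {}  # key -> (value, timestamp); value None = tombstone

    def insert(self, key, value, ts):
        self.rows[key] = (value, ts)

    def remove(self, key, ts):
        self.rows[key] = (None, ts)  # deletes are written as tombstones

    def compact(self, now):
        # A major compaction should purge tombstones older than the
        # grace period, along with the data they suppress.
        self.rows = {k: (v, ts) for k, (v, ts) in self.rows.items()
                     if not (v is None and now - ts > GC_GRACE_SECONDS)}

store = FakeStore()
for i in range(100):
    store.insert("key%d" % i, "value%d" % i, ts=0)
for i in range(100):
    store.remove("key%d" % i, ts=1)

store.compact(now=GC_GRACE_SECONDS + 2)
assert store.rows == {}  # all tombstones past the grace period are GC'd
```

If the real system kept a data file around after the equivalent steps, that is the behavior worth attaching to a bug report.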
Hi Jonathan, I have changed that to 3600 (one hour) based on your
earlier recommendation.
On Fri, Dec 4, 2009 at 11:01 AM, Jonathan Ellis wrote:
> this is what I was referring to by "the period specified in your config file":
>
> <GCGraceSeconds>864000</GCGraceSeconds>
>
> On Fri, Dec 4, 2009 at 12:51 PM, Ramzi Rabah wrote:
this is what I was referring to by "the period specified in your config file":
<GCGraceSeconds>864000</GCGraceSeconds>
On Fri, Dec 4, 2009 at 12:51 PM, Ramzi Rabah wrote:
> I think there might be a bug in the deletion logic. I removed all the
> data on the cluster by running remove on every single key I entered,
> and I ran a major compaction
I think there might be a bug in the deletion logic. I removed all the
data on the cluster by running remove on every single key I entered,
and I ran a major compaction
(nodeprobe -host hostname compact) on a certain node, and after the
compaction is over, I am left with one data file / one index file an
Cassandra never modifies data in-place, so it writes tombstones to
suppress the older writes, and when compaction occurs the data and
tombstones get GC'd (after the period specified in your config file).
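The lifecycle described above can be sketched with a toy model (plain Python, not Cassandra's actual code; the names, the two-table layout, and the grace-period value are illustrative):

```python
# Toy model of the behavior described above: a delete is written as a
# tombstone that suppresses any older write for the same key, and
# compaction purges a tombstone (plus the data it shadows) only once
# it is older than GCGraceSeconds.

TOMBSTONE = object()
GC_GRACE_SECONDS = 3600  # one hour, the value used in this thread

def compact(sstables, now):
    """Merge several tables, newest write per key winning, then drop
    tombstones whose grace period has expired."""
    merged = {}
    for table in sstables:
        for key, (value, ts) in table.items():
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    return {k: (v, ts) for k, (v, ts) in merged.items()
            if not (v is TOMBSTONE and now - ts > GC_GRACE_SECONDS)}

older = {"k1": ("v1", 1000.0), "k2": ("v2", 1000.0)}
newer = {"k1": (TOMBSTONE, 2000.0)}  # a remove of k1, written later

# Right after the delete: the tombstone suppresses v1 but is itself
# kept, so the "deleted" key still takes up space in the output file.
early = compact([older, newer], now=2001.0)
assert early["k1"][0] is TOMBSTONE and "k2" in early

# After GCGraceSeconds, the tombstone and the data it shadowed are gone.
late = compact([older, newer], now=2000.0 + GC_GRACE_SECONDS + 1)
assert "k1" not in late and "k2" in late
```

This is why a major compaction run before the grace period elapses can still leave data files containing nothing but tombstones.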
On Thu, Dec 3, 2009 at 8:07 PM, Ramzi Rabah wrote:
> Looking at jconsole I see a high number of writes when I do removes,
Looking at jconsole I see a high number of writes when I do removes,
so I am guessing these are tombstones being written? If that's the
case, is the data being removed and replaced by tombstones? And will
they all be deleted eventually when compaction runs?
On Thu, Dec 3, 2009 at 3:18 PM, Ramzi Rabah wrote:
Hi all,
I ran a test where I inserted about 1.2 gigabytes worth of data into
each node of a 4-node cluster.
I ran a script that first calls a get on each column inserted followed
by a remove. Since I was basically removing every entry
I inserted before, I expected that the disk space occupied by t