Thanks a lot Ben.
Really appreciate your suggestions here.
Regards,
Varun Saluja
Sent from my iPhone
> On 21-May-2017, at 5:40 PM, Ben Slater wrote:
>
> My main suggestion would be to monitor the compaction backlog (pending
> compactions). If the backlog is
My main suggestion would be to monitor the compaction backlog (pending
compactions). If the backlog is growing you need to either throttle writes,
add more capacity to your cluster or possibly tune things. There is no
simple answer to tuning but several good guides on the internet to help -
this
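For reference, a minimal way to watch that backlog on each node could look like this (assuming nodetool is on the PATH; the threshold of 100 is only an illustrative value):

  # print the pending compaction count for this node
  nodetool compactionstats | grep 'pending tasks'

  # or extract it as a bare number for a simple alert
  pending=$(nodetool compactionstats | awk '/pending tasks/ {print $3}')
  [ "$pending" -gt 100 ] && echo "compaction backlog is growing: $pending pending"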
Hi All,
Can someone please suggest any recommendations for write-intensive jobs?
Regards,
Varun Saluja
Sent from my iPhone
> On 17-May-2017, at 3:52 PM, varun saluja wrote:
>
> Thanks Jeff.
>
> I have taken backup and did manual removal of hints with rolling restart.
>
Thanks Jeff.
I have taken a backup and did a manual removal of hints with a rolling restart.
This brought the cluster back to a stable state.
Can you please share some recommendations for write-intensive jobs? Actually,
we need to load a dump from Kafka to a 3-node Cassandra cluster. Write TPS
per node will be
You could also try stopping compaction, but that'll probably take a very long
time as well
Manually stopping each node (one at a time) and removing the sstables from only
system.hints may be a better option. May want to take a snapshot if you're very
concerned with that data.
--
Jeff
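A rough per-node sketch of that procedure (the data directory path and service name are assumptions; check data_file_directories in cassandra.yaml, and keep the rest of the cluster up while one node is down):

  # optional: snapshot the system keyspace first if you care about the hints
  nodetool snapshot -t pre-hint-cleanup system

  # stopping the in-flight compaction is also possible, but likely slow to help
  nodetool stop COMPACTION

  # drain and stop this node only, one node at a time
  nodetool drain
  sudo service cassandra stop

  # remove only the system.hints sstables, nothing else
  sudo rm /var/lib/cassandra/data/system/hints*/*.db

  # bring the node back before moving on to the next one
  sudo service cassandra start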
Hi,
Truncatehints on the nodes has been running for more than 7 hours now. Nothing
about it is mentioned even in the system logs.
And compactionstats reports an increase in hints total bytes.
pending tasks: 1
   compaction type   keyspace   table   completed   total   unit   progress
Hi Jeff,
I ran nodetool truncatehints on all nodes. It's been running for more than 30
minutes now. compactionstats reports the same status.
pending tasks: 1
   compaction type   keyspace   table     completed   total   unit   progress
        Compaction     system   hints   11189118129
Thanks a lot Jeff.
You have explained it very well here. We have consistency as local quorum. Will
follow truncatehints and repair thereafter.
I hope this brings the cluster back to a stable state.
Thanks again.
Regards,
Varun Saluja
Sent from my iPhone
> On 16-May-2017, at 8:42 PM, Jeff Jirsa
In Cassandra versions up to 3.0, hints are stored within a table, where the
partition key is the host ID of the server for which the hints are stored.
In such a data model, accumulating 800GB of hints is almost certain to cause
very wide rows, which will in turn cause GC pressure when you
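You can see that layout on a 2.x node with cqlsh (the exact schema varies slightly by version; the output sketched in the comments is approximate):

  cqlsh -e "DESCRIBE TABLE system.hints;"
  # roughly: target_id uuid, hint_id timeuuid, message_version int, mutation blob,
  #          PRIMARY KEY (target_id, hint_id, message_version)
  # i.e. one partition per target host ID, so hundreds of GB of hints end up in
  # just a handful of very wide partitions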
You can control compaction with nodetool setcompactionthroughput, but that will just
slow down compaction and free up resources for the application; it's not a fix.
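The relevant commands would be along these lines (values are illustrative; 0 means unthrottled):

  nodetool getcompactionthroughput        # show the current MB/s cap
  nodetool setcompactionthroughput 16     # throttle compaction to 16 MB/s
  nodetool setcompactionthroughput 0      # remove the throttle entirely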
Sent from my iPhone
> On May 16, 2017, at 9:15 AM, varun saluja wrote:
>
> Thanks Nitan.
> Appreciate your help.
Thanks Nitan.
Appreciate your help.
Can anyone suggest parameter change or something which can help in this
situation.
Regards,
Varun
Sent from my iPhone
> On 16-May-2017, at 7:31 PM, Nitan Kainth wrote:
>
> If target table is dropped then you can remove its hints but
If the target table is dropped then you can remove its hints, but there could be
more hints from other tables. If it holds tables you are interested in, then I won't
comment on truncating hints.
The size of hints depends on the Kafka load; it looks like you overloaded the cluster
during the data load, and the hints are a result of that.
Hi Nitan,
Rolling restart did not help. Same compaction status after the restart.
No other processes are running here. These are dedicated Cassandra nodes.
Sent from my iPhone
> On 16-May-2017, at 7:16 PM, Nitan Kainth wrote:
>
> Have you tried rolling restart?
> Any agent or
Thanks for the update.
I could see a lot of I/O waits. This is causing GC pauses and mutation drops.
But as I mentioned, we do not have high load for now. Hint replays are creating
such high disk I/O.
compactionstats shows very high hint bytes, around 780 GB. Is this normal?
Just mentioning we are using flash
Have you tried rolling restart?
Any agent or other process hogging the system?
Sent from my iPhone
> On May 16, 2017, at 7:58 AM, varun saluja wrote:
>
> Hi Nitan,
>
> Thanks for response.
>
> Yes, I could see mutation drops and increase count in system.hints. Is there
> any
Yes, but it means data has to be replicated using repair.
Hints are an outcome of unhealthy nodes; focus on finding out why you have mutation
drops: is it the node, I/O, or the network? Ideally you shouldn't see hints
increasing all the time.
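For example, a typical follow-up repair run on each node in turn (the keyspace name below is only a placeholder; -pr restricts each run to that node's primary ranges):

  nodetool repair -pr my_keyspace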
Sent from my iPhone
> On May 16, 2017, at 7:58 AM, varun saluja
Hi,
Could see intermittent GCs and mutation drops.
*System log reports:*
INFO [Service Thread] GCInspector.java:252 - ParNew GC in 3816ms. CMS
Old Gen: 4663180720 -> 5520012520; Par Eden Space: 1718091776 -> 0; Par
Survivor Space: 0 -> 214695936
INFO [ScheduledTasks:1]
Hi Nitan,
Thanks for response.
Yes, I could see mutation drops and an increasing count in system.hints. Is
there any way I can proceed to truncate hints, e.g. using nodetool
truncatehints?
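For reference, the basic invocation would look like this (the endpoint address below is only a placeholder; with no argument it truncates all hints on the node):

  nodetool truncatehints                # drop all stored hints on this node
  nodetool truncatehints 10.0.0.12      # or only hints destined for one endpoint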
Regards,
Varun Saluja
On 16 May 2017 at 17:52, Nitan Kainth wrote:
> Do you see
Varun,
This is a message better suited for the user@ ML.
Thanks,
-Jason
On Tue, May 16, 2017 at 3:41 AM, varun saluja wrote:
> Hi Experts,
>
> We are facing issue on production cluster. Compaction on system.hint table
> is running from last 2 days.
>
>
> pending tasks: 1
>
Do you see mutation drops?
SELECT count(*) FROM system.hints; is it increasing?
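A quick way to check both of those (cqlsh connection defaults are an assumption, and the count can be slow on a large hints table):

  nodetool tpstats | grep -i -A 10 'dropped'     # look for a rising MUTATION dropped count
  cqlsh -e "SELECT count(*) FROM system.hints;"  # run twice and compare the result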
Sent from my iPhone
> On May 16, 2017, at 5:52 AM, varun saluja wrote:
>
> Hi Experts,
>
> We are facing issue on production cluster. Compaction on system.hint table is
> running from last 2
Hi Experts,
We are facing an issue on our production cluster. Compaction on the system.hints table
has been running for the last 2 days.
pending tasks: 1
   compaction type   keyspace   table     completed   total   unit   progress
        Compaction     system   hints   20623021829