Hello Rahul,
I would request Hossein to correct me if I am wrong. Below is how it works.
How does an application/database read something from the disk?
A read request comes in -> the application code internally invokes system
calls -> these kernel-level system calls fetch the data from the disk (or the
page cache) and return it to the application.
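A rough way to see this in action is to trace the system calls a simple read
makes (the file path below is just a placeholder):

strace -f -e trace=openat,read,pread64 -o /tmp/read_trace.txt cat /path/to/some/datafile
grep -E "openat|read" /tmp/read_trace.txt | head   # open followed by read/pread calls served by the kernel

Cassandra's own reads of its SSTable files ultimately rely on the same kernel machinery.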
Hello Jeff,
Request you to help on how to visualise the terms
1. Internal mutations
2. Cross node mutations
3. Mean internal dropped latency
4. Cross node dropped latency
Thanks,
Rajsekhar
On Thu, 25 Jul, 2019, 9:21 PM Jeff Jirsa, wrote:
> This means your database is seeing commands that have
>>> [cassadm@bipcas00 conf]$ nodetool tablehistograms tims MESSAGE_HISTORY_STATE
>>> tims/MESSAGE_HISTORY_STATE histograms
>>> [histogram output truncated in this excerpt]
Hello Rahul,
As per your description, the Cassandra process is up and running, as you
verified from the logs,
but nodetool and Grafana aren't fetching data.
This points to the suspect being the JMX port 7199.
Do run and check 'netstat -anp | egrep "7199|9042|7070"' on the impacted
and other hosts in the cluster, and
attach a screenshot of the observation you are talking about. You may
choose to replace the IP addresses of the hosts.
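A minimal sketch of what I mean (run on the impacted node; 127.0.0.1 is a
placeholder for the node's own address):

netstat -anp | egrep "7199|9042|7070"    # is anything listening on the ports above?
nodetool -h 127.0.0.1 -p 7199 status     # point nodetool explicitly at the JMX host/port

If nothing is listening on 7199, nodetool will not be able to connect, and the
same would apply to whatever feeds Grafana if it relies on JMX.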
Thanks
On Fri, 19 Jul, 2019, 9:36 PM Rahul Reddy, wrote:
> Thanks for the quick response, Rajsekhar.
>
> Correct same cassandra.yml and same java
>
> On Fri, Jul 19, 2019, 11:
Hello Rahul,
Could you please confirm the below:
1. The cassandra.yaml file of the node which was started after the machine
reboot is the same as that of the rest of the nodes in the cluster.
2. The Java version is consistent across all nodes in the cluster.
Do check and revert; a quick way to compare is sketched below.
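Something along these lines would do, assuming a package install layout
(hostnames and paths are placeholders):

for h in node1 node2 node3; do
  ssh "$h" 'md5sum /etc/cassandra/conf/cassandra.yaml; java -version 2>&1 | head -1'
done

Identical checksums and identical JVM version strings across the nodes would
confirm points 1 and 2.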
Thanks
On Fri, 19 Jul,
Hello,
Kindly post the below details:
1. Nodetool cfstats for both the tables.
2. Nodetool cfhistograms for both the tables.
3. Replication factor of the tables.
4. Consistency level with which write requests are sent.
5. Also, the type of write queries used for the tables, if handy, would help
(lightweight transactions, for example).
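For items 1-3 above, the exact commands are along these lines (keyspace/table
names are placeholders):

nodetool cfstats <keyspace>.<table1> <keyspace>.<table2>
nodetool cfhistograms <keyspace> <table1>
nodetool cfhistograms <keyspace> <table2>
cqlsh -e "DESCRIBE KEYSPACE <keyspace>"    # shows the replication factor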
Hello Bobbie,
Do revert with below details:
1. Replication factor of the keyspace.
2. Consistency level used for read requests
3. Nodetool netstats output
4. grep "DigestMismatch" /log/directory/path/debug.log
Thanks
On 2019/07/18 17:19:10, Bobbie Haynes wrote:
> I have updated all the
Hello team,
I am observing the below WARN and INFO messages in system.log:
1. INFO log: Maximum memory usage reached (1.000GiB), cannot allocate chunk
of 1 MiB.
I tried increasing file_cache_size_in_mb in cassandra.yaml from 512
to 1024, but this message still shows up in the logs.
2. Warn log:
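For item 1 above, this is how I am confirming the value in use and how often
the message recurs (paths assume a package install and may differ):

grep -n "file_cache_size_in_mb" /etc/cassandra/conf/cassandra.yaml
grep -ci "maximum memory usage reached" /var/log/cassandra/system.log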
I was of the view that checking for the same id in
system.compaction_history would fetch me the compaction details after a
running compaction ends,
but I see that no such relationship exists.
Please do confirm on the above.
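What I am comparing is along these lines:

nodetool compactionstats    # note the id of the currently running compaction
cqlsh -e "SELECT id, keyspace_name, columnfamily_name, compacted_at FROM system.compaction_history LIMIT 20;"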
Thanks,
Rajsekhar Mallick
restart in the cluster, trigger a schema update for the cluster?
Thanks,
Rajsekhar Mallick
nodes in the cluster.
Kindly do help on the above issue. I am not able to exactly understand whether the
GC is wrongly tuned, or if this is something else.
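For context, this is how I am looking at the GC behaviour (log path may differ
on your install):

grep "GCInspector" /var/log/cassandra/system.log | tail -20    # recent GC pause lines logged by Cassandra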
Thanks,
Rajsekhar Mallick
Thank you Jeff for the link.
Please do comment on the G1GC settings, and whether they are OK for the cluster.
Also comment on reducing the concurrent reads to 32 on all nodes in the
cluster, as this has earlier led to reads getting dropped.
Will adding nodes to the cluster be helpful?
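For reference, the settings being discussed live here (paths assume a package
install and may differ):

grep -E "UseG1GC|MaxGCPauseMillis|ParallelGCThreads|ConcGCThreads" /etc/cassandra/conf/jvm.options
grep -n "concurrent_reads" /etc/cassandra/conf/cassandra.yaml    # the value we are considering lowering to 32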
Thanks,
Rajsekhar Mallick
I will definitely try increasing the key cache sizes after verifying the
current max heap usage in the cluster.
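A quick way to check both on a node before changing anything (path assumes a
package install):

nodetool info | egrep "Heap Memory|Key Cache"                      # current heap usage and key cache size/hit rate
grep -n "key_cache_size_in_mb" /etc/cassandra/conf/cassandra.yaml  # the setting I intend to increase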
Thanks,
Rajsekhar Mallick
On Wed, 6 Feb, 2019, 11:17 AM Jeff Jirsa wrote:
> What you're potentially seeing is the GC impact of reading a large
> partition - do you have GC logs or StatusLogger output
when we have large partitions.
Kindly suggest ways to catch these slow queries.
Also do add if you see any other issues from the above details.
We are now considering expanding our cluster. Is the cluster under-sized? Will
the addition of nodes help resolve the issue?
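For context, the basic checks I know of so far are along these lines
(keyspace/table names are placeholders; the log path may differ):

grep -i "large partition" /var/log/cassandra/system.log    # compaction warnings for oversized partitions
nodetool tablehistograms <keyspace> <table>                # partition size and latency percentiles

Anything beyond these for pinning down the slow queries would be very helpful.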
Thanks,
Rajsekhar Mallick