GitHub user bhouse-nexthop added a comment to the discussion: Degraded cloudstack agent

That's an interesting angle on this issue.  I've got a test cluster with more 
nodes (8 -- it's an old Supermicro MicroCloud 3U) and fewer VMs, but I do NOT 
run k8s on it, and it has run for many weeks without this issue (I've never 
seen it on that cluster).

But on another cluster that is much newer and more powerful, with only 3 
nodes, fairly heavily loaded at about 30 VMs per node, and also running k8s, 
I see this issue every 2 days ...

I hadn't thought to look at the k8s angle, but it is an additional difference 
beyond the load aspect.  I wonder if I can reproduce this by deploying a k8s 
cluster in the test environment.
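
If I try that, the quickest route is probably the built-in CloudStack 
Kubernetes Service.  A rough sketch via cloudmonkey, assuming CKS is enabled 
in the zone -- every ID below is a placeholder, not something from my actual 
environment:

    # List the registered k8s versions, then create a small test cluster
    cmk list kubernetessupportedversions
    cmk create kubernetescluster name=repro-k8s \
        zoneid=<zone-uuid> \
        kubernetesversionid=<version-uuid> \
        serviceofferingid=<offering-uuid> \
        size=3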

Since someone else asked about the primary and secondary storage: I use Ceph 
RBD for primary and NFS (via NFS-Ganesha on top of CephFS) for secondary 
storage ... it's set up the same on both the test and prod clusters.
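
For anyone comparing setups, the Ganesha side is nothing exotic.  A minimal 
sketch of the export block, assuming a CephFS path of /secondary and a Ceph 
client named ganesha (both values are illustrative, not my real ones):

    # Minimal NFS-Ganesha export backed by CephFS (illustrative values)
    EXPORT {
        Export_Id = 1;
        Path = "/secondary";       # CephFS path exported for secondary storage
        Pseudo = "/secondary";     # NFSv4 pseudo-fs path that clients mount
        Access_Type = RW;
        Squash = No_Root_Squash;   # the SSVM writes as root
        FSAL {
            Name = CEPH;           # CephFS FSAL, not a re-exported kernel mount
            User_Id = "ganesha";   # Ceph client identity (client.ganesha)
        }
    }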

GitHub link: 
https://github.com/apache/cloudstack/discussions/12450#discussioncomment-15515819
