saffronjam commented on issue #7829:
URL: https://github.com/apache/cloudstack/issues/7829#issuecomment-1678916522

   Hi!
   Thanks for the help; I've tried the things you suggested.
   
   1. I have checked each secondary storage for the ISOs I used in my tests.
   
   Only the RW storage (1 of 3) has the 1.27.3 Kubernetes ISO (ID 275, for reference).
   Only the RW storage and one of the RO storages have the 1.24 ISO.
   
   I also checked the MD5 sums, and everything is in order (see the sketch after the mount output below).
   
   2. As I understand it, the ISO gets mounted temporarily on the hypervisor for as long as some cluster node needs it?
   I found that the correct secondary storage is mounted with the correct ISO; for example, the RW secondary storage with the ISO with ID 275:
   
   ```
   nfs.cloud.cbh.kth.se:/mnt/cloud/cloudstack/sec/template/tmpl/1/275 nfs4   39T   21G   39T   1% /mnt/b82f3155-51f4-3fa2-b7ff-2f3ed76e7585
   ```
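   
   For reference, this is roughly how I checked the ISOs in step 1 (a sketch only; the export root is inferred from the mount output above and may differ per storage, and `/tmp/secstore` is just an example mount point):
   
   ```
   # Sketch: verify the Kubernetes ISO (template ID 275) on one secondary storage.
   # Export root inferred from the df output above; adjust per storage.
   mkdir -p /tmp/secstore
   mount -t nfs -o ro nfs.cloud.cbh.kth.se:/mnt/cloud/cloudstack/sec /tmp/secstore
   ls -l /tmp/secstore/template/tmpl/1/275/
   md5sum /tmp/secstore/template/tmpl/1/275/*.iso
   umount /tmp/secstore
   ```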
   
   **However**, on some hypervisors I was unable to check this, because `df -Th | grep nfs` hung (it gave no output and I had to Ctrl+C), and the same happened with `ls /mnt`. This correlates with which cluster node either fails because of too many attempts waiting for /mnt/k8sdisk/ or just takes longer to mount it (the node was se-flem-016, for reference).
   
   I rebooted the se-flem-016 hypervisor and am now able to run `df -Th | grep nfs`. I will try to reproduce the problem and see whether it correlates.
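   
   When I do, I'll probe the mounts with something like this sketch instead of a bare `df` (assuming GNU `timeout` and `stat` are available on the hypervisor, and that the blocked `stat` can still be killed):
   
   ```
   # Sketch: probe each NFS mount with a timeout instead of `df -Th | grep nfs`,
   # which blocks indefinitely on a hung mount.
   for m in $(awk '$3 ~ /^nfs/ {print $2}' /proc/mounts); do
       timeout 5 stat -t "$m" > /dev/null 2>&1 \
           && echo "OK   $m" \
           || echo "HUNG $m"
   done
   ```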
   
   @weizhouapache This leaves me with a question, though. If the binaries fail to attach for any reason, how do we "reset" that cluster node so that it tries to attach the ISO again?

