akrasnov-drv commented on issue #10313:
URL: https://github.com/apache/cloudstack/issues/10313#issuecomment-2634222643

   First of all, thanks for the attention and care.
   
   @DaanHoogland I tried using webhooks in the past, but when I started getting different issues I recreated the cluster without webhooks. Here is my agent config:
   ```
   #Storage
   #Thu Jan 30 15:31:01 UTC 2025
   cluster=1
   pod=1
   resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
   private.network.device=private
   icluster=1
   domr.scripts.dir=scripts/network/domr/kvm
   guest.cpu.mode=host-passthrough
   guest.network.device=private
   keystore.passphrase=XXXXXXXXXXXXX
   hypervisor.type=kvm
   port=8250
   zone=1
   public.network.device=public
   local.storage.uuid=07db5346-58e2-4a8c-9b6f-08b3b57800a7
   host=10.10.67.1@static
   guid=1ce2f60a-5455-3f8a-846c-86b9413d2a76
   LibvirtComputingResource.id=8
   workers=5
   iscsi.session.cleanup.enabled=false
   vm.migrate.wait=3600
   
   ```
   Nevertheless (I believe I reported it before), there is an `/etc/libvirt/hooks/qemu` brought in by the cloudstack-agent package
   ```
   md5sum /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
   5eb81c675b4caf8c5cb538e19673f29e  /usr/share/cloudstack-agent/lib/libvirtqemuhook
   5eb81c675b4caf8c5cb538e19673f29e  /etc/libvirt/hooks/qemu
   ```
   and I had some doubts about it, though I do not see how it could be related to the current NAT issue. I created 100 VMs via the API without a problem; only this call fails with a timeout.
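   
   To rule the hook out on my side, I plan to temporarily disable it and retry the failing call (just a sketch, assuming libvirt only invokes `/etc/libvirt/hooks/qemu` when it is executable and re-checks for hooks on daemon restart):
   ```
   # Disable the packaged qemu hook, restart libvirtd, then retry the failing API call
   chmod -x /etc/libvirt/hooks/qemu
   systemctl restart libvirtd
   
   # Re-enable it afterwards
   chmod +x /etc/libvirt/hooks/qemu
   systemctl restart libvirtd
   ```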
   
    
   @weizhouapache I have `workers` in the global config set to 50, but as you can see above, the agent has it set to 5, and that is not something I set. I can increase it, no problem, but I really doubt the number of workers should be relevant to a failure in a single API call.
   As for VR HA - yes, currently I have a pair, but I believe it also failed with a single one. I'll change the setup to make sure and provide info on that.
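   
   For completeness, here is roughly how I intend to compare the two values and bump the agent-side one (a sketch only; it assumes the default agent.properties path and uses CloudMonkey as `cmk` for the global side):
   ```
   # Agent-side worker threads (currently 5 on this host)
   grep '^workers' /etc/cloudstack/agent/agent.properties
   
   # Global setting on the management server (currently 50)
   cmk list configurations name=workers
   
   # Raise the agent value and restart the agent so it takes effect
   sed -i 's/^workers=.*/workers=50/' /etc/cloudstack/agent/agent.properties
   systemctl restart cloudstack-agent
   ```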
   
   @shwstppr I'll clean the env and start it again to provide a wider log, covering both the successful executions and then the failure.
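   
   Roughly the capture I have in mind (assuming the default log locations; paths may differ on this setup):
   ```
   # Management server: record the full log for the test window
   tail -n0 -F /var/log/cloudstack/management/management-server.log > /tmp/ms-repro.log &
   
   # KVM host: record the agent log for the same window
   tail -n0 -F /var/log/cloudstack/agent/agent.log > /tmp/agent-repro.log &
   ```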
   
   In the meantime, has anybody tried my flow? Did it work (in which case the failure is just in my env)?

