Rule of thumb: a big volume, heavy I/O, and an old snapshot merge (huge delta) -> big
problems.
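
To get a feel for how big that delta actually is, one rough check (a sketch only, assuming the data disk lives as an RBD image on Ceph; <pool> and <image-uuid> are placeholders, not names from this thread):

$ rbd du <pool>/<image-uuid>       # shows provisioned vs. used space, broken down per snapshot
$ rbd snap ls <pool>/<image-uuid>  # lists the snapshots still attached to the image

A long snapshot list with large per-snapshot USED values is the "huge delta" case meant above.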

From: Jeremy Hansen <[email protected]>
Reply-To: <[email protected]>
Date: Wednesday, 11 February 2026 at 02:33
To: <[email protected]>
Subject: Re: Issue with backups on large data volumes

Adjusted the cmds.timeout in agent.properties to 28800.
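
For anyone reproducing this, a quick sanity check that the new value is actually in place on the KVM host (a sketch, assuming the stock config path and systemd service name):

$ grep '^cmds.timeout' /etc/cloudstack/agent/agent.properties   # should now print cmds.timeout=28800
$ systemctl status cloudstack-agent                             # the agent has to be restarted to pick it up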

I noticed the data disk seems to be trickling in very slowly. This is the only
volume that does this. You can see the root disk dumped fine, but this
particular data disk goes very, very slowly, even though it's coming from the
same storage pool:

33088   datadisk.aab61212-c1df-48c7-9abc-3d2a47d37468.qcow2
20974976        root.8a60af8e-c138-4193-aece-a0bfe89a6fa3.qcow2

and then ultimately it still timed out.

Perhaps there’s other timeouts to adjust?

Thanks
-jeremy




On Tuesday, Feb 10, 2026 at 4:38 AM, <[email protected]> wrote:
Just saw these replies. Thank you. I will try this.

-jeremy



On Sunday, Feb 08, 2026 at 8:29 PM, Abhisar Sinha <[email protected]> wrote:
Hi Jeremy,

As Jithin said, the request might be timing out.
Currently the request takes its timeout from cmds.timeout in agent.properties
(the default is 7200 seconds, i.e. 2 hours).

You can change the value, restart cloudstack-agent and try again.
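
For reference, the edit looks roughly like this on the KVM host (a sketch, assuming the default packaged config path and systemd service name; 28800 is just the example value mentioned earlier in the thread):

$ grep -q '^cmds.timeout' /etc/cloudstack/agent/agent.properties \
    && sed -i 's/^cmds.timeout=.*/cmds.timeout=28800/' /etc/cloudstack/agent/agent.properties \
    || echo 'cmds.timeout=28800' >> /etc/cloudstack/agent/agent.properties   # update the line if present, append otherwise
$ systemctl restart cloudstack-agent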

Thanks
Abhisar

On Mon, Feb 9, 2026, 9:35 AM Jithin Raju <[email protected]> wrote:


Hi Jeremy,

Can you check the backup async jobs log? It may be a timeout issue.
Enable debug verbosity in the KVM agent:
$ sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
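
A rough way to look at both sides (a sketch; these are the usual packaged log locations, adjust if yours differ):

$ tail -f /var/log/cloudstack/agent/agent.log | grep -iE 'backup|timeout'              # on the KVM host, with DEBUG enabled as above
$ grep -i backup /var/log/cloudstack/management/management-server.log | grep -i async  # on the management server

If you have CloudMonkey handy, 'list asyncjobs' also shows whether the backup job ended in failure or is still pending.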


-Jithin

From: Jeremy Hansen <[email protected]>
Date: Saturday, 7 February 2026 at 3:53 AM
To: [email protected] <[email protected]>
Subject: Re: Issue with backups on large data volumes

I was able to do a couple of instances with 100G data volumes and it worked
fine, but this one instance with the 512G volume fails.

-jeremy




On Friday, Feb 06, 2026 at 1:27 PM, Jeremy Hansen <[email protected]> wrote:
I’m trying to work through the new backup features in 4.22. Instances
with no data disk seem to work well, but I have a few instances with very
large data disks and for some reason, the backup just disappears from “In
Progress” and the data disk never completes. I tried twice on the same
instance and the data disk seems to stop at the same byte count.

i-4-65-VM]# du -sk */*
65792       2026.02.06.08.11.44/datadisk.aab61212-c1df-48c7-9abc-3d2a47d37468.qcow2
6           2026.02.06.08.11.44/domain-config.xml
1           2026.02.06.08.11.44/domblklist.xml
1           2026.02.06.08.11.44/domiflist.xml
1           2026.02.06.08.11.44/dominfo.xml
20974976    2026.02.06.08.11.44/root.8a60af8e-c138-4193-aece-a0bfe89a6fa3.qcow2
65792       2026.02.06.16.17.29/datadisk.aab61212-c1df-48c7-9abc-3d2a47d37468.qcow2
6           2026.02.06.16.17.29/domain-config.xml
1           2026.02.06.16.17.29/domblklist.xml
1           2026.02.06.16.17.29/domiflist.xml
1           2026.02.06.16.17.29/dominfo.xml
20974976    2026.02.06.16.17.29/root.8a60af8e-c138-4193-aece-a0bfe89a6fa3.qcow2

This instance’s data disk is a 512.00 GB volume on Ceph.
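
One thing worth trying to narrow this down (a sketch; the backup directory below is a placeholder for wherever the backup files land on your backup storage):

$ watch -n 60 'du -sk /path/to/backup/dir/datadisk.*.qcow2'   # watch the partial file during a retry and note where it stalls

If it always stops at the same byte count, as the two listings above suggest, noting how long the copy ran before stalling helps tell a throughput problem from a timeout.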

Any info on this would be appreciated. I’m trying to use these backups to
restore to a different location. So far instances without a data disk work
fine.

-jeremy
