Has anyone had problems migrating "big" volumes between different pools? I
have 3 storage pools. The overprovisioning factor was configured as 2.0
(the default) and pool2 got full. So I've reconfigured the factor to 1.0
and then had to move some volumes from pool2 to pool3.

CS 4.17.2.0, Ubuntu 22.04 LTS. I'm using KVM with NFS. Same zone, same pod,
same cluster. All hosts (hypervisors) had all 3 pools mounted. I've tried
two ways:

1) from the instance details page, with the instance stopped, using the
option "Migrate instance to another primary storage" (when the instance is
running this option is named "Migrate instance to another host"). Then I
checked "Migrate all volume(s) of the instance to a single primary storage"
and chose the destination primary storage, pool3.

2) from the volume details page, with the instance stopped, using the
option "Migrate volume" and selecting the destination primary storage,
pool3.
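
For what it's worth, the two UI actions seem to map to different API
calls. A rough sketch with CloudMonkey (cmk); the IDs are placeholders and
the parameter mapping is my assumption from the API docs, not verified
against the UI code:

```shell
# 1) instance-level storage migration of a stopped VM (placeholder IDs)
cmk migrate virtualmachine virtualmachineid=<vm-id> storageid=<pool3-id>

# 2) migration of a single volume (placeholder IDs)
cmk migrate volume volumeid=<volume-id> storageid=<pool3-id>
```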

Neither method worked with a volume of 1.1 TB. Do they do the same thing
under the hood?

Looking at the host that executes the action, I can see that it mounts the
Secondary Storage and starts a "qemu-img convert" process to generate a new
volume. After some time (about 3 hours) and after copying the 1.1 TB, the
process fails with:

com.cloud.utils.exception.CloudRuntimeException: Resource [StoragePool:8]
is unreachable: Migrate volume failed:
com.cloud.utils.exception.CloudRuntimeException: Failed to copy
/mnt/4be0a812-1d87-376f-9e72-db79206a796c/565fa2dd-ff14-4b28-a5d0-dbe88b860ee9
to d3d5a858-285c-452b-b33f-c152c294711b.qcow2
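
As a sanity check, independent of CloudStack, one can confirm from the
host that ran the job that the destination mount is actually writable and
has free space for the new file. A minimal sketch (the mount point below
is a placeholder; use pool3's actual path):

```shell
# check_pool: confirm a primary-storage mount is writable from this host
# and show its free space; pass the pool's mount point as the argument.
check_pool() {
    pool="$1"
    testfile="$pool/.cs-write-test.$$"
    if touch "$testfile" 2>/dev/null; then
        rm -f "$testfile"
        echo "writable"
    else
        echo "NOT writable"
    fi
    df -h "$pool" | tail -n1   # free space must cover the 1.1 TB copy
}

# e.g. check_pool /mnt/<pool3-uuid>
```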

I checked in the database that StoragePool:8 is pool3, the destination.

After the failure, the async job finishes. But the new qcow2 file remains
on secondary storage, orphaned.

So the host is saying it can't access pool3. BUT this pool is mounted!
There are other VMs running that use pool3. And I've successfully migrated
many other VMs using 1) or 2), but those VMs had at most 100 GB.

I'm using

job.cancel.threshold.minutes: 480
migratewait: 28800
storage.pool.max.waitseconds: 28800
wait: 28800

so no timeout should be hit (28800 s = 8 hours, the same window as
job.cancel.threshold.minutes, while the failure comes after about 3
hours), and indeed there are no log messages about timeouts.
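
These globals can be read back via the standard listConfigurations API to
confirm the running values; a sketch with CloudMonkey (cmk):

```shell
cmk list configurations name=migratewait
cmk list configurations name=storage.pool.max.waitseconds
cmk list configurations name=job.cancel.threshold.minutes
```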

Any help?

Thank you :)

-- 
Jorge Luiz Corrêa
Embrapa Agricultura Digital

