You need to execute:

  /var/tmp/one/tm/ssh/clone vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
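That driver invocation is just the CLONE line from transfer.0.prolog with the action name mapped onto a script path. A minimal sketch of the mapping, assuming the /var/tmp/one/tm/&lt;TM_MAD&gt;/&lt;action&gt; layout shown in this thread (the prolog line itself is the one quoted here):

```shell
# Sketch: map a transfer.0.prolog line "ACTION TM_MAD ARGS..." onto the
# driver script invocation under /var/tmp/one/tm/ (paths from this thread).
line='CLONE ssh vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1'
read -r action tm_mad args <<< "$line"
action=$(echo "$action" | tr '[:upper:]' '[:lower:]')   # CLONE -> clone
echo "/var/tmp/one/tm/${tm_mad}/${action} ${args}"
```

Running the printed command in a shell (instead of the raw prolog line) avoids the "-bash: CLONE: command not found" error, since CLONE is a driver action keyword, not an executable.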
"CLONE ssh" is translated to "/var/tmp/one/tm/ssh/clone", so you can easily debug the others... Are you also getting errors with TM=ssh? Are they the same errors as before?

Cheers

On Fri, Jul 12, 2013 at 11:13 AM, Alexandr Baranov <[email protected]> wrote:

> The contents of the file vms/80/transfer.0.prolog:
>
> CLONE ssh vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
> MKIMAGE ssh 22016 ext4 cldwn07:/vz/one/datastores/0/80/disk.1 80 0
> CONTEXT ssh /var/lib/one/vms/80/context.sh cldwn07:/vz/one/datastores/0/80/disk.2 80 0
>
> 2013/7/12 Alexandr Baranov <[email protected]>
>
>> Hi, Ruben,
>>
>> I checked the datastore TM_MAD on your recommendation:
>>
>> 1. I used TM_MAD=ssh
>>
>> 2. I cannot run the command
>>
>>   CLONE ssh vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
>>   -bash: CLONE: command not found
>>
>> from the file vms/80/transfer.0.prolog.
>>
>> Could you detail how to execute this script?
>> Thank you
>>
>> On 04.07.2013 16:00, "Ruben S. Montero" <[email protected]> wrote:
>>
>> Hi Alexandr,
>>
>> This may depend on the storage backend you are using. If the Datastore is using TM_MAD=shared, it may be a problem with the NFS mount options or user mapping. You can:
>>
>> 1. Try with TM_MAD=ssh (create another system datastore, and a cluster with a node and that system DS to make the test)
>>
>> 2. Execute the TM commands directly to check whether this is a storage problem. Look for vms/50/transfer.0.prolog. In that file there is a clone statement.
>> Like:
>>
>>   CLONE qcow2 vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0
>>
>> Execute (probably with -xv to debug) the script:
>>
>>   /var/lib/one/remotes/tm/qcow2/clone vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0
>>
>> If that script creates the file on cldwn07 with the right permissions, then you have a problem with libvirt (try restarting it, double-check the configuration and oneadmin membership...)
>>
>> Cheers and good luck
>>
>> Ruben
>>
>> On Wed, Jul 3, 2013 at 11:14 AM, Alexandr Baranov <[email protected]> wrote:
>>
>>> Hello everyone!
>>> The essence is that I cannot yet get a successful deploy of a disk using the qcow2 type.
>>> In the logs I see the following:
>>> *********************
>>> Wed Jul 3 11:59:39 2013 [ReM] [D]: Req:8976 UID:0 VirtualMachinePoolInfo invoked, -2, -1, -1, -1
>>> Wed Jul 3 11:59:39 2013 [ReM] [D]: Req:8976 UID:0 VirtualMachinePoolInfo result SUCCESS, "<VM_POOL> <VM> <ID> 50 <..."
>>> Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 clone: Cloning vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 in /vz/one/datastores/0/50/disk.0
>>> Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0
>>> Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 context: Generating context block device at cldwn07:/vz/one/datastores/0/50/disk.1
>>> Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0
>>> Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: TRANSFER SUCCESS 50 -
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 0
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Successfully execute network driver operation: pre.
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy /vz/one/datastores/0/50/deployment.0 cldwn07 50 cldwn07
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error: Failed to create domain from /vz/one/datastores/0/50/deployment.0
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/vz/one/datastores/0/50/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /vz/one/datastores/0/50/disk.0: Invalid argument
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG E 50 Could not create domain from /vz/one/datastores/0/50/deployment.0
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 255
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Failed to execute virtualization driver operation: deploy.
>>> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: DEPLOY FAILURE 50 Could not create domain from /vz/one/datastores/0/50/deployment.0
>>> ***********************
>>>
>>> In the folder /vz/one/datastores/0/50, disk.0 is created as root:root.
>>> So it seems the monitor cannot handle it, hence the error.
>>>
>>> What is actually not clear is why it is created as root.
>>> In the hypervisor configuration I checked:
>>> [root@vm158 ~]# grep -vE '^($|#)' /etc/libvirt/qemu.conf
>>> user = "oneadmin"
>>> group = "oneadmin"
>>> dynamic_ownership = 0
>>> True, there is also a caveat: oneadmin could not read it.
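One quick way to confirm the ownership diagnosis described above is stat. The datastore path is the one from this thread and only exists on the host, so the runnable part of this sketch uses a scratch file standing in for disk.0:

```shell
# On the host one would run, for example:
#   stat -c '%U:%G' /vz/one/datastores/0/50/disk.0
# For qemu-kvm running as oneadmin to open the image, this should print
# oneadmin:oneadmin, not root:root.
# Self-contained demo on a scratch file standing in for disk.0:
tmpdisk=$(mktemp)
stat -c '%U:%G' "$tmpdisk"    # prints the owner:group of the file
rm -f "$tmpdisk"
```

With dynamic_ownership = 0, libvirt leaves file ownership alone, so whatever the TM clone script creates is what qemu-kvm must be able to read.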
>>> _______________________________________________
>>> Users mailing list
>>> [email protected]
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

--
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | [email protected] | @OpenNebula
