[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-20 Thread Eyal Shenitzky
On Sat, Apr 20, 2019 at 10:06 PM Jonathan Baecker 
wrote:

> On 20.04.2019 at 20:38, Jonathan Baecker wrote:
>
> On 14.04.2019 at 14:01, Jonathan Baecker wrote:
>
> On 14.04.2019 at 13:57, Eyal Shenitzky wrote:
>
>
>
> On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker 
> wrote:
>
>> On 14.04.2019 at 12:13, Eyal Shenitzky wrote:
>>
>> Seems like your SPM went down while you had a running live merge operation.
>>
>> Can you please submit a bug and attach the logs?
>>
>> Yes, I can do that - but do you really think this is a bug? At that time I
>> had only one host running, so that host was the SPM. And the time in the log
>> is exactly the time when the host was restarting. But the merge jobs and
>> snapshot deletion started ~20 hours before.
>>
> We should investigate and see if there is a bug or not.
> I looked over the logs and saw some NPEs that might suggest that there may be
> a bug here.
> Please attach all the logs, including the beginning of the snapshot
> deletion.
>
> Ok, I did:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1699627
>
> The logs are included in full length.
>
> Now I have the same issue again: my host is trying to delete the snapshots. It
> is still running, no reboot so far. But is there anything I can do?
>
> I'm glad that the backup before was made correctly, otherwise I would be in
> big trouble. But it looks like I cannot run any more normal backup jobs.
>
> Ok, here is an interesting situation. I started to shut down my VMs - first the
> ones that had no snapshot deletion running, then also the VMs that are in
> process - and now all deletion jobs have finished successfully. Can it be
> that the host and VM are not communicating correctly, and somehow this
> puts the host in a situation where it cannot merge and delete a created
> snapshot? From some VMs I also get the warning that I need a newer
> ovirt-guest-agent, but there are no updates for it.
>

When you shut down the VM, the engine performs a "cold merge" for the
deleted snapshot; this is a good workaround when you encounter some
problems during a "live merge".
Those flows are different, so a "cold merge" can succeed where a "live merge"
failed.
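
As a rough illustration, here is a minimal sketch (assuming the Python
ovirt-engine-sdk4 is installed; the engine URL, credentials and VM name are
placeholders) of removing snapshots while the VM is down, so that the engine
performs the cold merge:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: adapt the engine URL, credentials and VM name.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=timetrack')[0]
vm_service = vms_service.vm_service(vm.id)

# A cold merge only applies while the VM is not running.
if vm.status != types.VmStatus.DOWN:
    raise RuntimeError('Shut the VM down first so the engine can cold merge')

snapshots_service = vm_service.snapshots_service()
for snap in snapshots_service.list():
    # Skip the "Active VM" pseudo-snapshot, remove the regular ones.
    if snap.snapshot_type == types.SnapshotType.REGULAR:
        snapshots_service.snapshot_service(snap.id).remove()
        # Wait until the removal (and the merge behind it) has finished.
        while any(s.id == snap.id for s in snapshots_service.list()):
            time.sleep(10)

connection.close()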



>
>
>
>> On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker 
>> wrote:
>>
>>> On 14.04.2019 at 07:05, Eyal Shenitzky wrote:
>>>
>>> Hi Jonathan,
>>>
>>> Can you please add the engine and VDSM logs?
>>>
>>> Thanks,
>>>
>>> Hi Eyal,
>>>
>>> my last message had the engine.log included in a zip.
>>>
>>> Here are both again, but I deleted some lines to make them smaller.
>>>
>>>
>>>
>>> On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker 
>>> wrote:
>>>
 Hello,

 I make automatic backups of my VMs, and last night some new ones were
 created. But somehow oVirt could not delete the snapshots anymore; the
 log shows that it tried the whole day to delete them, but they had to
 wait until the merge command was done.

 In the evening the host crashed completely and started again. Now I
 can not delete the snapshots manually, and I can also not start the
 VMs anymore. In the web interface I get the message:

 VM timetrack is down with error. Exit message: Bad volume specification
 {'address': {'bus': '0', 'controller': '0', 'type': 'drive', 'target':
 '0', 'unit': '0'}, 'serial': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
 'index': 0, 'iface': 'scsi', 'apparentsize': '1572864', 'specParams':
 {}, 'cache': 'none', 'imageID': 'fd3b80fd-49ad-44ac-9efd-1328300582cd',
 'truesize': '229888', 'type': 'disk', 'domainID':
 '9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0', 'format':
 'cow',
 'poolID': '59ef3a18-002f-02d1-0220-0124', 'device': 'disk',
 'path':
 '/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
 '47c0f42e-8bda-4e3f-8337-870899238788', 'diskType': 'file', 'alias':
 'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard': False}.

 When I check the path, the permissions are correct and there are also files
 in it.
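
 One more thing that can be checked here, as a small sketch (using the volume
 path from the error above; it assumes qemu-img is available on the host and
 read access to the storage domain, e.g. running as the vdsm user), is whether
 the qcow2 backing chain of that volume is still intact:

import json
import subprocess

# Path taken verbatim from the "Bad volume specification" error above.
volume = ('/rhev/data-center/59ef3a18-002f-02d1-0220-0124/'
          '9c3f06cf-7475-448e-819b-f4f52fa7d782/images/'
          'fd3b80fd-49ad-44ac-9efd-1328300582cd/'
          '47c0f42e-8bda-4e3f-8337-870899238788')

# Walk the whole backing chain; a missing or dangling backing file here
# would explain why the volume can no longer be prepared.
out = subprocess.check_output(
    ['qemu-img', 'info', '--backing-chain', '--output=json', volume])

for layer in json.loads(out):
    print(layer['filename'], layer['format'],
          'backing:', layer.get('backing-filename', '-'))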

 Is there any way to fix that? Or to prevent this issue in the future?

 In the attachment I also send the engine.log.


 Regards

 Jonathan




 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/

>>>
>>>
>>> --
>>> Regards,
>>> Eyal Shenitzky
>>>
>>>
>>>
>>
>> --
>> Regards,
>> Eyal Shenitzky
>>
>>
>>
>
> --
> Regards,
> Eyal Shenitzky

[ovirt-users] Upgrade from 4.3.2 to 4.3.3 fails on database schema update

2019-04-20 Thread eshwayri
Tried to upgrade to 4.3.3, and during engine-setup I get:

2019-04-20 14:24:47,041-0400 Running upgrade sql script 
'/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql'...
2019-04-20 14:24:47,043-0400 dbfunc_psql_die 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
* QUERY **
SELECT fn_db_create_constraint('image_transfers',
   'fk_image_transfers_command_enitites',
   'FOREIGN KEY (command_id) REFERENCES 
command_entities(command_id) ON DELETE CASCADE');
**

2019-04-20 14:24:47,060-0400 DEBUG 
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.executeRaw:863 
execute-result: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 
'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', 
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20190420142153-s5heq0.log', 
'-c', 'apply'], rc=1
2019-04-20 14:24:47,060-0400 DEBUG 
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:921 
execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 
'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', 
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20190420142153-s5heq0.log', 
'-c', 'apply'] stdout:


2019-04-20 14:24:47,061-0400 DEBUG 
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926 
execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 
'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', 
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20190420142153-s5heq0.log', 
'-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql:3:
 ERROR:  insert or update on table "image_transfers" violates foreign key 
constraint "fk_image_transfers_command_enitites"
DETAIL:  Key (command_id)=(7d68555a-71ee-41f5-9f12-0b26b6b9d449) is not present 
in table "command_entities".
CONTEXT:  SQL statement "ALTER TABLE image_transfers ADD CONSTRAINT 
fk_image_transfers_command_enitites FOREIGN KEY (command_id) REFERENCES 
command_entities(command_id) ON DELETE CASCADE"
PL/pgSQL function fn_db_create_constraint(character varying,character 
varying,text) line 9 at EXECUTE
FATAL: Cannot execute sql command: 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql

2019-04-20 14:24:47,061-0400 ERROR 
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:432 
schema.sh: FATAL: Cannot execute sql command: 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
2019-04-20 14:24:47,061-0400 DEBUG otopi.context context._executeMethod:145 
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
method['method']()
  File 
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py",
 line 434, in _misc
raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed
2019-04-20 14:24:47,064-0400 ERROR otopi.context context._executeMethod:154 
Failed to execute stage 'Misc configuration': Engine schema refresh failed
2019-04-20 14:24:47,065-0400 DEBUG otopi.transaction transaction.abort:119 
aborting 'Yum Transaction'
2019-04-20 14:24:47,065-0400 INFO otopi.plugins.otopi.packagers.yumpackager 
yumpackager.info:80 Yum Performing yum transaction rollback
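
The constraint fails because image_transfers still contains a command_id that no
longer exists in command_entities. Here is a small sketch (assuming psycopg2 and
the engine DB credentials written by engine-setup, typically found in
/etc/ovirt-engine/engine.conf.d/10-setup-database.conf) that lists those orphaned
rows so they can be reviewed before re-running engine-setup:

import psycopg2

# Placeholder password; take the real values from 10-setup-database.conf.
conn = psycopg2.connect(host='localhost', port=5432,
                        dbname='engine', user='engine',
                        password='ENGINE_DB_PASSWORD')
cur = conn.cursor()

# Rows in image_transfers whose command_id has no match in command_entities --
# exactly the rows that make the new foreign key impossible to add.
cur.execute("""
    SELECT it.*
    FROM image_transfers it
    LEFT JOIN command_entities ce ON ce.command_id = it.command_id
    WHERE ce.command_id IS NULL
""")
for row in cur.fetchall():
    print(row)

conn.close()

If those turn out to be leftovers of long-finished transfers, removing them (after
a database backup) should let the 04_03_0830 upgrade script add the constraint.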
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2BA3WQYBT2X3MXFYBCGNISVBOCQR4ENA/


[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-20 Thread Jonathan Baecker

On 20.04.2019 at 20:38, Jonathan Baecker wrote:

On 14.04.2019 at 14:01, Jonathan Baecker wrote:

On 14.04.2019 at 13:57, Eyal Shenitzky wrote:



On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker wrote:


On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had a running live merge
operation.

Can you please submit a bug and attach the logs?


Yes, I can do that - but do you really think this is a bug? At
that time I had only one host running, so that host was the SPM. And
the time in the log is exactly the time when the host was
restarting. But the merge jobs and snapshot deletion started
~20 hours before.

We should investigate and see if there is a bug or not.
I looked over the logs and saw some NPEs that might suggest that there
may be a bug here.
Please attach all the logs, including the beginning of the snapshot
deletion.



Ok, I did:

https://bugzilla.redhat.com/show_bug.cgi?id=1699627

The logs are included in full length.

Now I have the same issue again: my host is trying to delete the
snapshots. It is still running, no reboot so far. But is there
anything I can do?


I'm glad that the backup before was made correctly, otherwise I
would be in big trouble. But it looks like I cannot run any more
normal backup jobs.


Ok, here is an interesting situation. I started to shut down my VMs - first
the ones that had no snapshot deletion running, then also the VMs that
are in process - and now all deletion jobs have finished successfully. Can it
be that the host and VM are not communicating correctly, and somehow
this puts the host in a situation where it cannot merge and delete a
created snapshot? From some VMs I also get the warning that I need a
newer ovirt-guest-agent, but there are no updates for it.








On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker
<jonba...@gmail.com> wrote:

On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log included in a zip.

Here are both again, but I deleted some lines to make them smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
<jonba...@gmail.com> wrote:

Hello,

I make automatic backups of my VMs, and last night some new ones
were created. But somehow oVirt could not delete the snapshots
anymore; the log shows that it tried the whole day to delete them,
but they had to wait until the merge command was done.

In the evening the host crashed completely and started again. Now
I can not delete the snapshots manually, and I can also not start
the VMs anymore. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad
volume specification
{'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize':
'1572864', 'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize':
'0', 'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124',
'device': 'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder':
'1', 'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType':
'file', 'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard':
False}.

When I check the path, the permissions are correct and there
are also files in it.

Is there any way to fix that? Or to prevent this issue
in the future?

In the attachment I also send the engine.log.


Regards

Jonathan




___
Users mailing list -- users@ovirt.org

To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives

[ovirt-users] Re: VM Snapshots not erasable and not bootable

2019-04-20 Thread Jonathan Baecker

On 14.04.2019 at 14:01, Jonathan Baecker wrote:

On 14.04.2019 at 13:57, Eyal Shenitzky wrote:



On Sun, Apr 14, 2019 at 2:28 PM Jonathan Baecker wrote:


On 14.04.2019 at 12:13, Eyal Shenitzky wrote:

Seems like your SPM went down while you had a running live merge
operation.

Can you please submit a bug and attach the logs?


Yes, I can do that - but do you really think this is a bug? At
that time I had only one host running, so that host was the SPM. And
the time in the log is exactly the time when the host was
restarting. But the merge jobs and snapshot deletion started
~20 hours before.

We should investigate and see if there is a bug or not.
I looked over the logs and saw some NPEs that might suggest that there
may be a bug here.
Please attach all the logs, including the beginning of the snapshot
deletion.



Ok, I did:

https://bugzilla.redhat.com/show_bug.cgi?id=1699627

The logs are included in full length.

Now I have the same issue again: my host is trying to delete the snapshots.
It is still running, no reboot so far. But is there anything I can do?


I'm glad that the backup before was made correctly, otherwise I would
be in big trouble. But it looks like I cannot run any more normal
backup jobs.








On Sun, Apr 14, 2019 at 9:40 AM Jonathan Baecker
<jonba...@gmail.com> wrote:

On 14.04.2019 at 07:05, Eyal Shenitzky wrote:

Hi Jonathan,

Can you please add the engine and VDSM logs?

Thanks,


Hi Eyal,

my last message had the engine.log included in a zip.

Here are both again, but I deleted some lines to make them smaller.




On Sun, Apr 14, 2019 at 12:24 AM Jonathan Baecker
<jonba...@gmail.com> wrote:

Hello,

I make automatic backups of my VMs, and last night some new ones
were created. But somehow oVirt could not delete the snapshots
anymore; the log shows that it tried the whole day to delete them,
but they had to wait until the merge command was done.

In the evening the host crashed completely and started again. Now
I can not delete the snapshots manually, and I can also not start
the VMs anymore. In the web interface I get the message:

VM timetrack is down with error. Exit message: Bad
volume specification
{'address': {'bus': '0', 'controller': '0', 'type':
'drive', 'target':
'0', 'unit': '0'}, 'serial':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'index': 0, 'iface': 'scsi', 'apparentsize': '1572864',
'specParams':
{}, 'cache': 'none', 'imageID':
'fd3b80fd-49ad-44ac-9efd-1328300582cd',
'truesize': '229888', 'type': 'disk', 'domainID':
'9c3f06cf-7475-448e-819b-f4f52fa7d782', 'reqsize': '0',
'format': 'cow',
'poolID': '59ef3a18-002f-02d1-0220-0124',
'device': 'disk',
'path':

'/rhev/data-center/59ef3a18-002f-02d1-0220-0124/9c3f06cf-7475-448e-819b-f4f52fa7d782/images/fd3b80fd-49ad-44ac-9efd-1328300582cd/47c0f42e-8bda-4e3f-8337-870899238788',

'propagateErrors': 'off', 'name': 'sda', 'bootOrder':
'1', 'volumeID':
'47c0f42e-8bda-4e3f-8337-870899238788', 'diskType':
'file', 'alias':
'ua-fd3b80fd-49ad-44ac-9efd-1328300582cd', 'discard':
False}.

When I check the path, the permissions are correct and there
are also files in it.

Is there any way to fix that? Or to prevent this issue
in the future?

In the attachment I also send the engine.log.


Regards

Jonathan




___
Users mailing list -- users@ovirt.org

To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLHPEKGQWTVFJCHPJUC3WOXH525SWLEC/



-- 
Regards,

Eyal Shenitzky





-- 
Regards,

Eyal Shenitzky





--
Regards,
Eyal Shenitzky





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/

[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-20 Thread Callum Smith
Dear Benny,

This is via VDSM, but a manual one fails too.

Background: cloning of an image fails in any context, whether creating from a
template, using Clone VM, etc. Tested on any 4.3+ release against multiple NFS
servers. I even tried allowing root access and not; it does not seem to fix it.

gdb is blank and has no relevant output.

/var/log/messages
I appear to be getting this regularly, about every 5 minutes:
Apr 20 10:15:42 virthyp03 nfsidmap[57709]: nss_getpwnam: name 'nobody' does not map into domain 'virt.in.bmrc.ox.ac.uk'
Apr 20 10:15:42 virthyp03 nfsidmap[57713]: nss_name_to_gid: name 'nobody' does not map into domain 'virt.in.bmrc.ox.ac.uk'

Here is the relevant area of /var/log/vdsm/vdsm.log:
2019-04-20 10:17:19,644+ INFO  (jsonrpc/5) [vdsm.api] START 
copyImage(sdUUID=u'0e01f014-530b-4067-aa1d-4e9378626a9d', 
spUUID=u'df02d494-56b9-11e9-a05b-00163e4d2a92', vmUUID='', 
srcImgUUID=u'6597eede-9fa0-4451-84fc-9f9c070cb5f3', 
srcVolUUID=u'765fa48b-2e77-4637-b4ca-e1affcd71e48', 
dstImgUUID=u'e1c182ae-f25d-464c-b557-93088e894452', 
dstVolUUID=u'a1157ad0-44a8-4073-a20c-468978973f4f', description=u'', 
dstSdUUID=u'0e01f014-530b-4067-aa1d-4e9378626a9d', volType=8, volFormat=5, 
preallocate=2, postZero=u'false', force=u'false', discard=False) 
from=:::10.141.31.240,44298, flow_id=82e5a31b-5010-4e50-a333-038b16738ecb, 
task_id=d6bf22cf-1ea3-419d-8248-6c8842537a2e (api:48)
2019-04-20 10:17:19,649+ INFO  (jsonrpc/5) [storage.Image] image 
6597eede-9fa0-4451-84fc-9f9c070cb5f3 in domain 
0e01f014-530b-4067-aa1d-4e9378626a9d has vollist 
[u'765fa48b-2e77-4637-b4ca-e1affcd71e48'] (image:313)
2019-04-20 10:17:19,657+ INFO  (jsonrpc/5) [storage.Image] Current 
chain=765fa48b-2e77-4637-b4ca-e1affcd71e48 (top)  (image:702)
2019-04-20 10:17:19,658+ INFO  (jsonrpc/5) [IOProcessClient] (Global) 
Starting client (__init__:308)
2019-04-20 10:17:19,668+ INFO  (ioprocess/57971) [IOProcess] (Global) 
Starting ioprocess (__init__:434)
2019-04-20 10:17:19,679+ INFO  (jsonrpc/5) [vdsm.api] FINISH copyImage 
return=None from=:::10.141.31.240,44298, 
flow_id=82e5a31b-5010-4e50-a333-038b16738ecb, 
task_id=d6bf22cf-1ea3-419d-8248-6c8842537a2e (api:54)
2019-04-20 10:17:19,696+ INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
Volume.copy succeeded in 0.05 seconds (__init__:312)
2019-04-20 10:17:19,697+ INFO  (tasks/7) [storage.ThreadPool.WorkerThread] 
START task d6bf22cf-1ea3-419d-8248-6c8842537a2e (cmd=>, args=None) 
(threadPool:208)
2019-04-20 10:17:19,713+ INFO  (tasks/7) [storage.Image] 
sdUUID=0e01f014-530b-4067-aa1d-4e9378626a9d vmUUID= 
srcImgUUID=6597eede-9fa0-4451-84fc-9f9c070cb5f3 
srcVolUUID=765fa48b-2e77-4637-b4ca-e1affcd71e48 
dstImgUUID=e1c182ae-f25d-464c-b557-93088e894452 
dstVolUUID=a1157ad0-44a8-4073-a20c-468978973f4f 
dstSdUUID=0e01f014-530b-4067-aa1d-4e9378626a9d volType=8 volFormat=RAW 
preallocate=SPARSE force=False postZero=False discard=False (image:724)
2019-04-20 10:17:19,719+ INFO  (tasks/7) [storage.VolumeManifest] Volume: 
preparing volume 
0e01f014-530b-4067-aa1d-4e9378626a9d/765fa48b-2e77-4637-b4ca-e1affcd71e48 
(volume:567)
2019-04-20 10:17:19,722+ INFO  (tasks/7) [storage.Image] copy source 
0e01f014-530b-4067-aa1d-4e9378626a9d:6597eede-9fa0-4451-84fc-9f9c070cb5f3:765fa48b-2e77-4637-b4ca-e1affcd71e48
 size 104857600 blocks destination 
0e01f014-530b-4067-aa1d-4e9378626a9d:e1c182ae-f25d-464c-b557-93088e894452:a1157ad0-44a8-4073-a20c-468978973f4f
 allocating 104857600 blocks (image:767)
2019-04-20 10:17:19,722+ INFO  (tasks/7) [storage.Image] image 
e1c182ae-f25d-464c-b557-93088e894452 in domain 
0e01f014-530b-4067-aa1d-4e9378626a9d has vollist [] (image:313)
2019-04-20 10:17:19,723+ INFO  (tasks/7) [storage.StorageDomain] Create 
placeholder 
/rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/e1c182ae-f25d-464c-b557-93088e894452
 for image's volumes (sd:1288)
2019-04-20 10:17:19,731+ INFO  (tasks/7) [storage.Volume] Creating volume 
a1157ad0-44a8-4073-a20c-468978973f4f (volume:1183)
2019-04-20 10:17:19,754+ INFO  (tasks/7) [storage.Volume] Request to create 
RAW volume 
/rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/e1c182ae-f25d-464c-b557-93088e894452/a1157ad0-44a8-4073-a20c-468978973f4f
 with size = 20480 blocks (fileVolume:463)
2019-04-20 10:17:19,754+ INFO  (tasks/7) [storage.Volume] Changing volume 
u'/rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/e1c182ae-f25d-464c-b557-93088e894452/a1157ad0-44a8-4073-a20c-468978973f4f'
 permission to 0660 (fileVolume:480)
2019-04-20 10:17:19,800+ INFO  (tasks/7) [storage.VolumeManifest] Volume: 
preparing volume 
0e01f014-530b-4067-aa1d-4e9378626a9d/a1157ad0-44a8-4073-a20c-468978973f4f 
(volume:567)

I tried to filter the usual noise out of VDSM.log so hopefully this is the 
relevant bit you need - let me know if the full thing would help.
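
If it helps, here is a small sketch of how the relevant lines can be pulled out
of vdsm.log by filtering on the flow_id and task_id shown in the excerpt above,
instead of sharing the whole file:

# IDs taken from the copyImage excerpt above.
FLOW_ID = '82e5a31b-5010-4e50-a333-038b16738ecb'
TASK_ID = 'd6bf22cf-1ea3-419d-8248-6c8842537a2e'

with open('/var/log/vdsm/vdsm.log') as log:
    for line in log:
        if FLOW_ID in line or TASK_ID in line:
            print(line, end='')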

Regards,
Callum

[ovirt-users] Re: Import of VMs failing - 0% progress on qemu-img

2019-04-20 Thread Benny Zlotnik
Sorry, I kind of lost track of what the problem is.

The "KeyError: 'appsList'" issue is a known bug [1].

If a manual (not via vdsm) run of qemu-img is actually stuck, then
let's involve the qemu-discuss list, with the versions of the relevant
packages (qemu, qemu-img, kernel, your distro) and the output of the gdb
commands.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1690301
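
A rough sketch of gathering that information in one go (the package names are
assumptions for an oVirt Node / CentOS host, so adjust them to whatever rpm
actually reports; the gdb part attaches to the qemu-img process itself, found
via pgrep -f, rather than to the sudo wrapper):

import subprocess

# Package versions for the qemu-discuss report; adjust the names to your host.
for pkg in ('qemu-img-ev', 'qemu-kvm-ev', 'kernel', 'vdsm'):
    subprocess.call(['rpm', '-q', pkg])
subprocess.call(['uname', '-r'])

# Backtrace of every running qemu-img convert process (all threads).
pids = subprocess.check_output(['pgrep', '-f', 'qemu-img convert']).split()
for pid in pids:
    subprocess.call(['gdb', '-p', pid.decode(), '-batch', '-ex', 't a a bt'])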



On Sat, Apr 20, 2019 at 1:36 AM Callum Smith  wrote:
>
> Dear Benny and others,
>
> So it seems I wasn't being patient with GDB and it does show me some output.
> This same qemu-img convert failure is even preventing the ovirt-node update
> from 4.3.2 to 4.3.3.1. I get a feeling this is an unrelated error, but I
> thought I'd be complete:
>
> Excuse any typos; I'm having to type this manually from a remote session. The
> error:
>
> [733272.427922] hid-generic 0003:0624:0249.0001: usb_submit_urb(ctrl) failed: 
> -19
>
> If this bug is preventing even a local yum update, I can't see how it's anything
> other than an issue somehow involving the hardware of the hypervisor; our
> network and storage configuration must be irrelevant to this at this
> stage?
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 11 Apr 2019, at 12:00, Callum Smith  wrote:
>
> Without sudo, and running in a dir that root has access to, gdb has
> zero output:
>
> 
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 11 Apr 2019, at 11:54, Callum Smith  wrote:
>
> Some more information:
>
> Running qemu-img convert manually, having captured the failed attempt from the
> previous run:
>
> sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw 
> /rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/6597eede-9fa0-4451-84fc-9f9c070cb5f3/765fa48b-2e77-4637-b4ca-e1affcd71e48
>  -O raw 
> /rhev/data-center/mnt/10.141.15.248:_export_instruct_vm__storage/0e01f014-530b-4067-aa1d-4e9378626a9d/images/9cc99110-70a2-477f-b3ef-1031a912d12b/c2776107-4579-43a6-9d60-93a5ea9c64c5
>  -W
>
> Added the -W flag just to see what would happen:
>
> gdb -p 79913 -batch -ex "t a a bt"
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x7f528d7661f0 in __poll_nocancel () from /lib64/libc.so.6
>
> Thread 1 (Thread 0x7f528e6bb840 (LWP 79913)):
> #0  0x7f528d7661f0 in __poll_nocancel () from /lib64/libc.so.6
> #1  0x7f528dc510fb in sudo_ev_scan_impl () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #2  0x7f528dc49b44 in sudo_ev_loop_v1 () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #3  0x55e94aa0e271 in exec_nopty ()
> #4  0x55e94aa0afda in sudo_execute ()
> #5  0x55e94aa18a12 in run_command ()
> #6  0x55e94aa0969e in main ()
>
>
> And without -W:
>
> gdb -p 85235 -batch -ex "t a a bt"
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x7fc0cc69b1f0 in __poll_nocancel () from /lib64/libc.so.6
>
> Thread 1 (Thread 0x7fc0cd5f0840 (LWP 85235)):
> #0  0x7fc0cc69b1f0 in __poll_nocancel () from /lib64/libc.so.6
> #1  0x7fc0ccb860fb in sudo_ev_scan_impl () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #2  0x7fc0ccb7eb44 in sudo_ev_loop_v1 () from 
> /usr/libexec/sudo/libsudo_util.so.0
> #3  0x5610f4397271 in exec_nopty ()
> #4  0x5610f4393fda in sudo_execute ()
> #5  0x5610f43a1a12 in run_command ()
> #6  0x5610f439269e in main ()
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 11 Apr 2019, at 09:57, Callum Smith  wrote:
>
> Dear Benny,
>
> It would seem that even cloning a VM is failing, while creating a VM works on
> the same storage. This is the only error I could find:
>
> ERROR Internal server error
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
>     res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod
>     result = fn(*methodArgs)
>   File "", line 2, in getAllVmStats
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
>

[ovirt-users] Activation With Product Key

2019-04-20 Thread deamjones5247
It not only protects your personal data but also provides safe online browsing. 
It provides protection to your data by encrypting the information over the 
internet.
norton.com/setup (http://ask-norton.com) | mcafee.com/activate (http://product-activate.com) | mcafee.com/activate (http://mcafee-com-activate-code.com) | office.com/setup (http://setmsoffice.com) | office.com/setup (http://enterproductkey.com) | norton.com/setup (http://assistnorton.com)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BMMB6FF5MGAA63V6NWXAON6MR6SCS4XI/