[ovirt-users] Re: Poor performance of oVirt Export Domain

2018-08-17 Thread Алексей Максимов
Hello Nir

Thanks for the advice about using dd.

I found the main reason for the poor performance.
The problem was a faulty battery in the cache module of the disk shelf.
We were also using a low-performance RAID configuration.

Thanks for the help.


05.08.2018, 18:38, "Nir Soffer" :
> On Fri, Aug 3, 2018 at 12:12 PM  wrote:
>> Hello
>>
>> I deployed a dedicated server (fs05.holding.com) on CentOS 7.5 and created a 
>> VDO volume on it.
>> A local write test on the VDO volume on this server gives an acceptable
>> result:
>>
>> # dd if=/dev/zero of=/mnt/vdo-vd1/nfs/testfile count=1000000
>> 1000000+0 records in
>> 1000000+0 records out
>> 512000000 bytes (512 MB) copied, 6.26545 s, 81.7 MB/s
>
> This is not a good test for copying images:
> - You are not using direct I/O
> - You are using a block size of 512 bytes, which is way too small
> - You don't sync at the end of the transfer
> - You don't copy a real image; reading zeros does not take any time, while
>   reading a real image does take time. This can cut dd throughput in half
>
> A better way to test this is:
>
>     dd if=/path/to/src of=/path/to/dst bs=8M count=1280 iflag=direct 
> oflag=direct conv=fsync status=progress
>
> This does not optimize copying sparse parts of the image. For this you can use
> qemu-img, which is what oVirt uses.
>
> The command oVirt uses is:
>
>     qemu-img convert -p -f raw|qcow2 -O raw|qcow2 -t none -T none 
> /path/to/src /path/to/dst
>
>> The VDO volume is attached to the oVirt 4.2.5 cluster
>> as the Export Domain via NFS.
>>
>> I'm seeing a problem with the low performance of Export Domain.
>> Snapshots of virtual machines are copied very slowly to the Export Domain, 
>> approximately 6-8 MB/s.
>
> This is very very low throughput.
>
> Can you give more details on
> - the source domain, how it is connected (iSCSI/FC)?
> - the destination domain, how is it connected? (NFS 4.2?, 1G nic? 10G nic?)
> - the source image - can you attach output of:
>   qemu-img map --output json /path/to/src
> - If the source image is on block storage, please copy it to a file system
>   supporting sparseness over NFS 4.2, using:
>   qemu-img convert -p -f raw -O raw -t none -T none /path/to/src /path/to/dst
>   (if the image is qcow2, replace "raw" with "qcow2")
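
For reference, qemu-img map --output json prints a JSON array of extents, each with
"start", "length", "data" and "zero" fields. A rough way to total the allocated bytes
(a sketch only; it assumes jq is installed and the path is a placeholder):

    # sum the lengths of extents that actually carry data
    qemu-img map --output json /path/to/src | jq '[.[] | select(.data) | .length] | add'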
>
>> At the same time, if I try to run a write test in a mounted NFS directory
>> on any of the oVirt cluster hosts, I get about 50-70 MB/s.
>>
>> # dd if=/dev/zero 
>> of=/rhev/data-center/mnt/fs05.holding.com:_mnt_vdo-vd1_nfs_ovirt-vm-backup/testfile
>>  count=10000000
>> 10000000+0 records in
>> 10000000+0 records out
>> 5120000000 bytes (5.1 GB) copied, 69.5506 s, 73.6 MB/s
>
> Again, not a good way to test.
>
> This sounds like https://bugzilla.redhat.com/1511891.
> (the bug may be private)
>
> Finally, can you provide detailed commands to reproduce your
> setup, so we can reproduce it in the lab?
> - how to create the vdo volume
> - how you created the file system on this volume
> - NFS version/configuration on the server
> - info about the server
> - info about the network
> - info about the host
>
> Nir
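
For context, here is a minimal sketch of the kind of setup being asked about above. It is
not the reporter's actual configuration, just an illustration of a typical VDO-backed NFS
export on CentOS 7.5; the device name, sizes, and export options are assumptions:

    # create a VDO volume on a physical device; the logical size over-commits
    # physical space and relies on deduplication/compression
    vdo create --name=vdo-vd1 --device=/dev/sdb --vdoLogicalSize=10T
    # build XFS on it; -K skips discards at mkfs time, which is slow on VDO
    mkfs.xfs -K /dev/mapper/vdo-vd1
    mkdir -p /mnt/vdo-vd1
    mount /dev/mapper/vdo-vd1 /mnt/vdo-vd1
    mkdir -p /mnt/vdo-vd1/nfs && chown 36:36 /mnt/vdo-vd1/nfs
    # export the directory over NFS for the oVirt hosts
    echo '/mnt/vdo-vd1/nfs *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)' >> /etc/exports
    exportfs -ra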


[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Алексей Максимов
Hello Nir

> To confirm this theory, please share the output of:
> Top volume:
> dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 
> skip=16 iflag=direct

DOMAIN=6db73566-0f7f-4438-a9ef-6815075f45ea
CTIME=1533083673
FORMAT=COW
DISKTYPE=DATA
LEGALITY=LEGAL
SIZE=62914560
VOLTYPE=LEAF
DESCRIPTION=
IMAGE=cdf1751b-64d3-42bc-b9ef-b0174c7ea068
PUUID=208ece15-1c71-46f2-a019-6a9fce4309b2
MTIME=0
POOL_UUID=
TYPE=SPARSE
GEN=0
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000348555 s, 1.5 MB/s


> Base volume:
> dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 
> skip=23 iflag=direct


DOMAIN=6db73566-0f7f-4438-a9ef-6815075f45ea
CTIME=1512474404
FORMAT=COW
DISKTYPE=2
LEGALITY=LEGAL
SIZE=62914560
VOLTYPE=INTERNAL
DESCRIPTION={"DiskAlias":"KOM-APP14_Disk1","DiskDescription":""}
IMAGE=cdf1751b-64d3-42bc-b9ef-b0174c7ea068
PUUID=----
MTIME=0
POOL_UUID=
TYPE=SPARSE
GEN=0
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00031362 s, 1.6 MB/s


> Deleted volume?:
> dd if=/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/metadata bs=512 count=1 
> skip=15 iflag=direct

NONE=##
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000350361 s, 1.5 MB/s


15.08.2018, 21:09, "Nir Soffer" :
> On Wed, Aug 15, 2018 at 6:14 PM Алексей Максимов 
>  wrote:
>> Hello Nir
>>
>> Thanks for the answer.
>> The output of the commands is below.
>>
>> *
>>> 1. Please share the output of this command on one of the hosts:
>>> lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>> *
>> # lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>>
>>   VG                                   LV                                   
>> LV Tags
>>   ...
>>   6db73566-0f7f-4438-a9ef-6815075f45ea 208ece15-1c71-46f2-a019-6a9fce4309b2 
>> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
>>   6db73566-0f7f-4438-a9ef-6815075f45ea 4974a4cc-b388-456f-b98e-19d2158f0d58 
>> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
>>   6db73566-0f7f-4438-a9ef-6815075f45ea 8c66f617-7add-410c-b546-5214b0200832 
>> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
>
> So we have 3 volumes - 2 are base volumes:
>
> - 208ece15-1c71-46f2-a019-6a9fce4309b2 
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_23,PU_----
> - 4974a4cc-b388-456f-b98e-19d2158f0d58 
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_15,PU_----
>
> And one is top volume:
> - 8c66f617-7add-410c-b546-5214b0200832 
> IU_cdf1751b-64d3-42bc-b9ef-b0174c7ea068,MD_16,PU_208ece15-1c71-46f2-a019-6a9fce4309b2
>
> So according to vdsm, this is the chain:
>
>     208ece15-1c71-46f2-a019-6a9fce4309b2 <- 
> 8c66f617-7add-410c-b546-5214b0200832 (top)
>
> The volume 4974a4cc-b388-456f-b98e-19d2158f0d58 is not part of this chain.
>
>> *
>>> qemu-img info --backing /dev/vg_name/lv_name
>> *
>>
>> # qemu-img info --backing 
>> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
>>
>> image: 
>> /dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
>> file format: qcow2
>> virtual size: 30G (32212254720 bytes)
>> disk size: 0
>> cluster_size: 65536
>> Format specific information:
>>     compat: 1.1
>>     lazy refcounts: false
>>     refcount bits: 16
>>     corrupt: false
>
> This is the base volume according to vdsm and qemu, good.
>
>> # qe

[ovirt-users] Re: oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource

2018-08-15 Thread Алексей Максимов
cluster_size: 65536
backing file: 208ece15-1c71-46f2-a019-6a9fce4309b2 (actual path: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2)
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false



# qemu-img info --backing 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832

image: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/8c66f617-7add-410c-b546-5214b0200832
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 0
cluster_size: 65536
backing file: 208ece15-1c71-46f2-a019-6a9fce4309b2 (actual path: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2)
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 
/dev/6db73566-0f7f-4438-a9ef-6815075f45ea/208ece15-1c71-46f2-a019-6a9fce4309b2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false



I do not quite understand. What should I do next?


15.08.2018, 15:33, "Nir Soffer" :
> On Tue, Aug 14, 2018 at 6:03 PM Алексей Максимов 
>  wrote:
>> Hello, Nir
>>
>> Log in attachment.
>
> In the log we can see both createVolume and deleteVolume fail for this disk 
> uuid:
> cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>
> 1. Please share the output of this command on one of the hosts:
>
>     lvs -o vg_name,lv_name,tags | grep cdf1751b-64d3-42bc-b9ef-b0174c7ea068
>
> This will show all the volumes belonging to this disk.
>
> 2. For every volume, share the output of qemu-img info:
>
> If the lv is not active, activate it:
>
>     lvchange -ay vg_name/lv_name
>
> Then run qemu-img info to find the actual chain:
>
>     qemu-img info --backing /dev/vg_name/lv_name
>
> If the lv was not active, deactivate it - we don't want to leave unused lvs 
> active.
>
>     lvchange -an vg_name/lv_name
>
> 3. One of these volumes will not be part of the chain.
>
> No other volume will use it as backing file, and it may not have a backing
> file, or it may point to another volume in the chain.
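
For illustration, this check can be scripted; a small sketch using the volume names from
the lvs output earlier in this thread (it assumes the LVs are already active):

    VG=6db73566-0f7f-4438-a9ef-6815075f45ea
    for LV in 208ece15-1c71-46f2-a019-6a9fce4309b2 \
              4974a4cc-b388-456f-b98e-19d2158f0d58 \
              8c66f617-7add-410c-b546-5214b0200832; do
        echo "== $LV"
        # print only the format and backing-file lines for each volume
        qemu-img info "/dev/$VG/$LV" | grep -E '^(file format|backing file)'
    done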
>
> Once we found this volume, please check engine logs for this volume uuid. You 
> will probably
> find that the volume was deleted in the past. Maybe you will not find it 
> since it was deleted
> months or years ago.
>
> 4. To verify that this volume does not have metadata, check the volume MD_N 
> tag.
> N is the offset, in 512-byte blocks, from the start of the metadata volume.
>
> This will read the volume metadata block:
>
>     dd if=/dev/vg_name/metadata bs=512 count=1 skip=N iflag=direct
>
> We expect to see:
>
>     NONE=...
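
As an aside, the offset N can be pulled straight from the LV tags; a hypothetical helper
(the VG/LV names are the ones from this thread, substitute your own):

    VG=6db73566-0f7f-4438-a9ef-6815075f45ea
    LV=4974a4cc-b388-456f-b98e-19d2158f0d58
    # extract N from the MD_N tag, then read that 512-byte slot of the metadata volume
    N=$(lvs --noheadings -o tags "$VG/$LV" | tr ',' '\n' | sed -n 's/^ *MD_//p')
    dd if="/dev/$VG/metadata" bs=512 count=1 skip="$N" iflag=direct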
>
> 5. To remove this volume use:
>
>     lvremove vg_name/lv_name
>
> Once the volume is removed, you will be able to create the snapshot.
>
>> 14.08.2018, 01:30, "Nir Soffer" :
>>> On Mon, Aug 13, 2018 at 1:45 PM Aleksey Maksimov 
>>>  wrote:
>>>> We use oVirt 4.2.5.2-1.el7 (Hosted engine / 4 hosts in cluster / about 
>>>> twenty virtual machines)
>>>> Virtual machine disks are located on the Data Domain from FC SAN.
>>>> Snapshots of all virtual machines are created normally. But for one 
>>>> virtual machine, we can not create a snapshot.
>>>>
>>>> When we try to create a snapshot in the oVirt web console, we see such 
>>>> errors:
>>>>
>>>> Aug 13, 2018, 1:05:06 PM Failed to complete snapshot 'KOM-APP14_BACKUP01' 
>>>> creation for VM 'KOM-APP14'.
>>>> Aug 13, 2018, 1:05:01 PM VDSM KOM-VM14 command HSMGetAllTasksStatusesVDS 
>>>> failed: Could not acquire resource. Probably resource factory threw an 
>>>> exception.: ()
>>>> Aug 13, 2018, 1:05:00 PM Snapshot 'KOM-APP14_BACKUP01' creation for VM 
>>>> 'KOM-APP14' was initiated by pe...@sub.holding.com@sub.holding.com-authz.
>>>>
>>>> At this time on the server with the role of "SPM" in the vdsm.log we see 
>>>> this:
>>>>
>>>> ...
>>>> 2018-08-13 05:05:06,471-0500 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC 
>>>> call VM.getStats succeeded in 0.00 seconds (__init__:57

Re: [ovirt-users] ovirt-guest-agent.service has broken in Debian 8 virtual machines after updates hosts to 4.2

2018-01-08 Thread Алексей Максимов

Thank You Milan.

08.01.2018, 20:10, "Milan Zamazal" <mzama...@redhat.com>:
> Алексей Максимов <aleksey.i.maksi...@yandex.ru> writes:
>
>>  # wget 
>> http://ftp.us.debian.org/debian/pool/main/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-2_all.deb
>>  # apt-get install gir1.2-glib-2.0 libdbus-glib-1-2 libgirepository-1.0-1 
>> libpango1.0-0 libuser1 python-dbus python-dbus-dev python-ethtool python-gi 
>> qemu-guest-agent usermode
>>  # dpkg -i ~/packages/ovirt-guest-agent_1.0.13.dfsg-2_all.deb
>
> Yes, right, you need a newer ovirt-guest-agent version, so install it
> from testing.
>
> I filed a Debian bug asking for a backport of the package for stable:
> https://bugs.debian.org/886661
>
>>  # udevadm trigger --subsystem-match="virtio-ports"
>>
>>  # systemctl restart ovirt-guest-agent.service
>
> Yes, alternatively you can reboot the VM, whatever is easier :-).
>
> However the package should do it itself, I think there is a bug in its
> installation script, so I filed another bug against the package:
> https://bugs.debian.org/886660
>
>>  Now the service is working.
>>  But I do not know if it's the right way :(
>
> Yes, it is.
>
> Regards,
> Milan

-- 
With best wishes,
Максимов Алексей

Email: aleksey.i.maksi...@yandex.ru


Re: [ovirt-users] ovirt-guest-agent.service has broken in Debian 8 virtual machines after updates hosts to 4.2

2018-01-08 Thread Алексей Максимов
No one has any thoughts about this?

05.01.2018, 20:41, "Алексей Максимов" <aleksey.i.maksi...@yandex.ru>:
> Hello Ed.
>
> I found another way to solve the problem.
> But I do not know if it's the right way :(
> So I decided to ask a question here. But none of the gurus answered.
>
> # mkdir ~/packages
> # cd ~/packages
> # wget 
> http://ftp.us.debian.org/debian/pool/main/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-2_all.deb
> # apt-get install gir1.2-glib-2.0 libdbus-glib-1-2 libgirepository-1.0-1 
> libpango1.0-0 libuser1 python-dbus python-dbus-dev python-ethtool python-gi 
> qemu-guest-agent usermode
> # dpkg -i ~/packages/ovirt-guest-agent_1.0.13.dfsg-2_all.deb
>
> # systemctl restart ovirt-guest-agent.service
> # systemctl status ovirt-guest-agent.service
>
> * ovirt-guest-agent.service - oVirt Guest Agent
>    Loaded: loaded (/lib/systemd/system/ovirt-guest-agent.service; enabled)
>    Active: failed (Result: exit-code) since Wed 2018-01-03 22:52:14 MSK; 1s 
> ago
>   Process: 23206 ExecStart=/usr/bin/python 
> /usr/share/ovirt-guest-agent/ovirt-guest-agent.py (code=exited, 
> status=1/FAILURE)
>   Process: 23203 ExecStartPre=/bin/chown ovirtagent:ovirtagent 
> /run/ovirt-guest-agent.pid (code=exited, status=0/SUCCESS)
>   Process: 23200 ExecStartPre=/bin/touch /run/ovirt-guest-agent.pid 
> (code=exited, status=0/SUCCESS)
>   Process: 23197 ExecStartPre=/sbin/modprobe virtio_console (code=exited, 
> status=0/SUCCESS)
>  Main PID: 23206 (code=exited, status=1/FAILURE)
>
> As we can see, the service does not start.
> In this case, the error in the log 
> (/var/log/ovirt-guest-agent/ovirt-guest-agent.log) will be different:
>
> MainThread::INFO::2018-01-03 
> 22:52:14,771::ovirt-guest-agent::59::root::Starting oVirt guest agent
> MainThread::ERROR::2018-01-03 
> 22:52:14,773::ovirt-guest-agent::141::root::Unhandled exception in oVirt 
> guest agent!
> Traceback (most recent call last):
>   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in 
> 
> agent.run(daemon, pidfile)
>   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
> self.agent = LinuxVdsAgent(config)
>   File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in 
> __init__
> AgentLogicBase.__init__(self, config)
>   File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in 
> __init__
> self.vio = VirtIoChannel(config.get("virtio", "device_prefix"))
>   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 162, in __init__
> self._stream = VirtIoStream(vport_name)
>   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 143, in __init__
> self._vport = os.open(vport_name, os.O_RDWR)
> OSError: [Errno 13] Permission denied: '/dev/virtio-ports/ovirt-guest-agent.0'
>
> As a workaround for this problem, I use this:
>
> # cat /etc/udev/rules.d/55-ovirt-guest-agent.rules
>
> SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", 
> GROUP="ovirtagent"
>
> refresh udev:
>
> # udevadm trigger --subsystem-match="virtio-ports"
>
> # systemctl restart ovirt-guest-agent.service
> # systemctl status ovirt-guest-agent.service
>
> * ovirt-guest-agent.service - oVirt Guest Agent
>    Loaded: loaded (/lib/systemd/system/ovirt-guest-agent.service; enabled)
>    Active: active (running) since Wed 2018-01-03 22:54:56 MSK; 6s ago
>   Process: 23252 ExecStartPre=/bin/chown ovirtagent:ovirtagent 
> /run/ovirt-guest-agent.pid (code=exited, status=0/SUCCESS)
>   Process: 23249 ExecStartPre=/bin/touch /run/ovirt-guest-agent.pid 
> (code=exited, status=0/SUCCESS)
>   Process: 23247 ExecStartPre=/sbin/modprobe virtio_console (code=exited, 
> status=0/SUCCESS)
>  Main PID: 23255 (python)
>    CGroup: /system.slice/ovirt-guest-agent.service
>    └─23255 /usr/bin/python 
> /usr/share/ovirt-guest-agent/ovirt-guest-agent.py
>
> Now the service is working.
> But I do not know if it's the right way :(
>
> 05.01.2018, 19:58, "Ed Stout" <edst...@gmail.com>:
>>  On 5 January 2018 at 10:32, Алексей Максимов
>>  <aleksey.i.maksi...@yandex.ru> wrote:
>>>   A similar problem is described here:
>>>
>>>   https://bugzilla.redhat.com/show_bug.cgi?id=1472293
>>>
>>>   But there is no solution.
>>
>>  There is, but it's a bit sparse in the bug report - this worked for me
>>  on Ubuntu, same problem:
>>
>>  ##
>>  So I changed value in /etc/ovirt-guest-agent.conf to look like:
>>  [virtio]
>>  # device = /dev/virtio-ports/com.redh

Re: [ovirt-users] ovirt-guest-agent.service has broken in Debian 8 virtual machines after updates hosts to 4.2

2018-01-05 Thread Алексей Максимов
Hello Ed.

I found another way to solve the problem.
But I do not know if it's the right way :(
So I decided to ask a question here. But none of the gurus answered.


# mkdir ~/packages
# cd ~/packages
# wget 
http://ftp.us.debian.org/debian/pool/main/o/ovirt-guest-agent/ovirt-guest-agent_1.0.13.dfsg-2_all.deb
# apt-get install gir1.2-glib-2.0 libdbus-glib-1-2 libgirepository-1.0-1 
libpango1.0-0 libuser1 python-dbus python-dbus-dev python-ethtool python-gi 
qemu-guest-agent usermode
# dpkg -i ~/packages/ovirt-guest-agent_1.0.13.dfsg-2_all.deb

# systemctl restart ovirt-guest-agent.service
# systemctl status ovirt-guest-agent.service

* ovirt-guest-agent.service - oVirt Guest Agent
   Loaded: loaded (/lib/systemd/system/ovirt-guest-agent.service; enabled)
   Active: failed (Result: exit-code) since Wed 2018-01-03 22:52:14 MSK; 1s ago
  Process: 23206 ExecStart=/usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py (code=exited, 
status=1/FAILURE)
  Process: 23203 ExecStartPre=/bin/chown ovirtagent:ovirtagent 
/run/ovirt-guest-agent.pid (code=exited, status=0/SUCCESS)
  Process: 23200 ExecStartPre=/bin/touch /run/ovirt-guest-agent.pid 
(code=exited, status=0/SUCCESS)
  Process: 23197 ExecStartPre=/sbin/modprobe virtio_console (code=exited, 
status=0/SUCCESS)
 Main PID: 23206 (code=exited, status=1/FAILURE)


As we can see, the service does not start.
In this case, the error in the log 
(/var/log/ovirt-guest-agent/ovirt-guest-agent.log) will be different:

MainThread::INFO::2018-01-03 
22:52:14,771::ovirt-guest-agent::59::root::Starting oVirt guest agent
MainThread::ERROR::2018-01-03 
22:52:14,773::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest 
agent!
Traceback (most recent call last):
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in 

agent.run(daemon, pidfile)
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
self.agent = LinuxVdsAgent(config)
  File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__
AgentLogicBase.__init__(self, config)
  File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__
self.vio = VirtIoChannel(config.get("virtio", "device_prefix"))
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 162, in __init__
self._stream = VirtIoStream(vport_name)
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 143, in __init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 13] Permission denied: '/dev/virtio-ports/ovirt-guest-agent.0'
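
The traceback shows that the agent, which runs as the ovirtagent user, cannot open the
virtio port. A quick check to confirm it is an ownership problem (the device name is the
one from the error above):

    # without a udev rule the channel device is typically owned by root:root,
    # so the ovirtagent user gets "Permission denied"
    ls -l /dev/virtio-ports/ovirt-guest-agent.0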

As a workaround for this problem, I use this:

# cat /etc/udev/rules.d/55-ovirt-guest-agent.rules

SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", 
GROUP="ovirtagent"

refresh udev:

# udevadm trigger --subsystem-match="virtio-ports"


# systemctl restart ovirt-guest-agent.service
# systemctl status ovirt-guest-agent.service

* ovirt-guest-agent.service - oVirt Guest Agent
   Loaded: loaded (/lib/systemd/system/ovirt-guest-agent.service; enabled)
   Active: active (running) since Wed 2018-01-03 22:54:56 MSK; 6s ago
  Process: 23252 ExecStartPre=/bin/chown ovirtagent:ovirtagent 
/run/ovirt-guest-agent.pid (code=exited, status=0/SUCCESS)
  Process: 23249 ExecStartPre=/bin/touch /run/ovirt-guest-agent.pid 
(code=exited, status=0/SUCCESS)
  Process: 23247 ExecStartPre=/sbin/modprobe virtio_console (code=exited, 
status=0/SUCCESS)
 Main PID: 23255 (python)
   CGroup: /system.slice/ovirt-guest-agent.service
   └─23255 /usr/bin/python 
/usr/share/ovirt-guest-agent/ovirt-guest-agent.py

Now the service is working.
But I do not know if it's the right way :(

05.01.2018, 19:58, "Ed Stout" <edst...@gmail.com>:
> On 5 January 2018 at 10:32, Алексей Максимов
> <aleksey.i.maksi...@yandex.ru> wrote:
>>  A similar problem is described here:
>>
>>  https://bugzilla.redhat.com/show_bug.cgi?id=1472293
>>
>>  But there is no solution.
>
> There is, but it's a bit sparse in the bug report - this worked for me
> on Ubuntu, same problem:
>
> ##
> So I changed value in /etc/ovirt-guest-agent.conf to look like:
> [virtio]
> # device = /dev/virtio-ports/com.redhat.rhevm.vdsm
> device = /dev/virtio-ports/ovirt-guest-agent.0
> ##
>
> Then...
>
> ##
>
> cat /etc/udev/rules.d/55-ovirt-guest-agent.rules
> SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent",
> GROUP="ovirtagent"
>
> udevadm trigger --subsystem-match="virtio-ports"
> ##
>
> After that, the service started for me.
>
>>  02.01.2018, 18:15, "Алексей Максимов" <aleksey.i.maksi...@yandex.ru>:
>>>  Hello, oVirt guru's
>>>
>>>  I just successfully updated my oVirt HE to 4.2.
>>>  

Re: [ovirt-users] ovirt-guest-agent.service has broken in Debian 8 virtual machines after updates hosts to 4.2

2018-01-05 Thread Алексей Максимов


A similar problem is described here:

https://bugzilla.redhat.com/show_bug.cgi?id=1472293

But there is no solution.

02.01.2018, 18:15, "Алексей Максимов" <aleksey.i.maksi...@yandex.ru>:
> Hello, oVirt guru's
>
> I just successfully updated my oVirt HE to 4.2.
> But after I upgraded all hosts and restarted the virtual machines, the
> ovirt-guest-agent.service stopped running in virtual machines with Debian
> Jessie.
>
> 
>
> # lsb_release -a
>
> No LSB modules are available.
> Distributor ID: Debian
> Description: Debian GNU/Linux 8.10 (jessie)
> Release: 8.10
> Codename: jessie
>
> 
>
> # dpkg -l | grep ovirt
>
> ii ovirt-guest-agent 1.0.10.2.dfsg-2+deb8u1 all daemon that resides within 
> guest virtual machines
>
> Note: This package installed from Debian Jessie official repo
>
> 
>
> # systemctl status ovirt-guest-agent.service
>
> ● ovirt-guest-agent.service - oVirt Guest Agent
>    Loaded: loaded (/lib/systemd/system/ovirt-guest-agent.service; disabled)
>    Active: failed (Result: exit-code) since Tue 2018-01-02 17:36:29 MSK; 
> 23min ago
>  Main PID: 3419 (code=exited, status=1/FAILURE)
>
> Jan 02 17:36:29 APP3 systemd[1]: Started oVirt Guest Agent.
> Jan 02 17:36:29 APP3 systemd[1]: ovirt-guest-agent.service: main process 
> exited, code=exited, status=1/FAILURE
> Jan 02 17:36:29 APP3 systemd[1]: Unit ovirt-guest-agent.service entered 
> failed state.
>
> 
>
> From /var/log/ovirt-guest-agent/ovirt-guest-agent.log:
>
> MainThread::INFO::2018-01-02 
> 17:36:29,764::ovirt-guest-agent::57::root::Starting oVirt guest agent
> MainThread::ERROR::2018-01-02 
> 17:36:29,768::ovirt-guest-agent::138::root::Unhandled exception in oVirt 
> guest agent!
> Traceback (most recent call last):
>   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in 
> 
> agent.run(daemon, pidfile)
>   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
> self.agent = LinuxVdsAgent(config)
>   File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 375, in 
> __init__
> AgentLogicBase.__init__(self, config)
>   File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in 
> __init__
> self.vio = VirtIoChannel(config.get("virtio", "device"))
>   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in __init__
> self._stream = VirtIoStream(vport_name)
>   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in __init__
> self._vport = os.open(vport_name, os.O_RDWR)
> OSError: [Errno 2] No such file or directory: 
> '/dev/virtio-ports/com.redhat.rhevm.vdsm'
>
> 
>
> Before updating to version 4.2 (from version 4.1.8) everything worked fine.
>
> Please help solve the problem

-- 
With best wishes,
Максимов Алексей

Email: aleksey.i.maksi...@yandex.ru


[ovirt-users] ovirt-guest-agent.service has broken in Debian 8 virtual machines after updates hosts to 4.2

2018-01-02 Thread Алексей Максимов
Hello, oVirt gurus

I just successfully updated my oVirt HE to 4.2.
But after I upgraded all hosts and restarted the virtual machines, the
ovirt-guest-agent.service stopped running in virtual machines with Debian
Jessie.



# lsb_release -a

No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 8.10 (jessie)
Release:        8.10
Codename:   jessie



# dpkg -l | grep ovirt

ii  ovirt-guest-agent    1.0.10.2.dfsg-2+deb8u1    all    daemon that resides within guest virtual machines


Note: This package installed from Debian Jessie official repo



# systemctl status ovirt-guest-agent.service

● ovirt-guest-agent.service - oVirt Guest Agent
   Loaded: loaded (/lib/systemd/system/ovirt-guest-agent.service; disabled)
   Active: failed (Result: exit-code) since Tue 2018-01-02 17:36:29 MSK; 23min 
ago
 Main PID: 3419 (code=exited, status=1/FAILURE)

Jan 02 17:36:29 APP3 systemd[1]: Started oVirt Guest Agent.
Jan 02 17:36:29 APP3 systemd[1]: ovirt-guest-agent.service: main process 
exited, code=exited, status=1/FAILURE
Jan 02 17:36:29 APP3 systemd[1]: Unit ovirt-guest-agent.service entered failed 
state.



From /var/log/ovirt-guest-agent/ovirt-guest-agent.log:

MainThread::INFO::2018-01-02 
17:36:29,764::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2018-01-02 
17:36:29,768::ovirt-guest-agent::138::root::Unhandled exception in oVirt guest 
agent!
Traceback (most recent call last):
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in 

agent.run(daemon, pidfile)
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
  File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 375, in __init__
AgentLogicBase.__init__(self, config)
  File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in __init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in __init__
self._stream = VirtIoStream(vport_name)
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in __init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory: 
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
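
A quick way to see which channel name the host actually exposes inside the guest (the
follow-ups elsewhere in this thread point out that after the 4.2 upgrade the channel is
ovirt-guest-agent.0 rather than com.redhat.rhevm.vdsm):

    ls -l /dev/virtio-ports/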



Before updating to version 4.2 (from version 4.1.8) everything worked fine.

Please help solve the problem


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-22 Thread Алексей Максимов
https://bugzilla.redhat.com/show_bug.cgi?id=1516494 22.11.2017, 17:47, "Benny Zlotnik" <bzlot...@redhat.com>:Hi, glad to hear it helped. https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engineThe component is BLL.Storageand the team is Storage Thanks On Wed, Nov 22, 2017 at 3:51 PM, Алексей Максимов <aleksey.i.maksi...@yandex.ru> wrote:Hello, Benny. I deleted the empty directory and the problem disappeared.Thank you for your help. PS:I don't know how to properly open a bug on https://bugzilla.redhat.com/Don't know which option to choose (https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt).Maybe you can open a bug and attach my logs? 20.11.2017, 13:08, "Benny Zlotnik" <bzlot...@redhat.com>:Yes, you can remove it On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <aleksey.i.maksi...@yandex.ru> wrote:I found an empty directory in the Export domain storage: # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6 total 16drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 .. I can just remove this directory? 19.11.2017, 18:51, "Benny Zlotnik" <bzlot...@redhat.com>:+ ovirt-users On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:Hi, There are a couple of issues here, can you please open a bug so we can track this properly? https://bugzilla.redhat.com/and attach all relevant logs  I went over the logs, are you sure the export domain was formatted properly? Couldn't find it in the engine.logLooking at the logs it seems VMs were found on the export domain (id=3a514c90-e574-4282-b1ee-779602e35f24) 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain] vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9', u'03c9e965-710d-4fc8-be06-583abbd1d7a9', u'07dab4f6-d677-4faa-9875-97bd6d601f49', u'0b94a559-b31a-475d-9599-36e0dbea579a', u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196', u'151a4e75-d67a-4603-8f52-abfb46cb74c1', u'177479f5-2ed8-4b6c-9120-ec067d1a1247', u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', u'1e72be16-f540-4cfd-b0e9-52b66220a98b', u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7', u'25fa96d1-6083-4daa-9755-026e632553d9', u'273ffd05-6f93-4e4a-aac9-149360b5f0b4', u'28188426-ae8b-4999-8e31-4c04fbba4dac', u'28e9d5f2-4312-4d0b-9af9-ec1287bae643', u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334', u'34d1150f-7899-44d9-b8cf-1c917822f624', u'383bbfc6-6841-4476-b108-a1878ed9ce43', u'388e372f-b0e8-408f-b21b-0a5c4a84c457', u'39396196-42eb-4a27-9a57-a3e0dad8a361', u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf', u'44e10588-8047-4734-81b3-6a98c229b637', u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86', u'47a83986-d3b8-4905-b017-090276e967f5', u'49d83471-a312-412e-b791-8ee0badccbb5', u'4b1b9360-a48a-425b-9a2e-19197b167c99', u'4d783e2a-2d81-435a-98c4-f7ed862e166b', u'51976b6e-d93f-477e-a22b-0fa84400ff84', u'56b77077-707c-4949-9ea9-3aca3ea912ec', u'56dc5c41-6caf-435f-8146-6503ea3eaab9', u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d', u'5873f804-b992-4559-aff5-797f97bfebf7', u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8', u'590d1adb-52e4-4d29-af44-c9aa5d328186', u'5c79f970-6e7b-4996-a2ce-1781c28bff79', u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', u'63749307-4486-4702-ade9-4324f5bfe80c', u'6555ac11-7b20-4074-9d71-f86bc10c01f9', u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728', u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', 
u'679c0445-512c-4988-8903-64c0c08b5fab', u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac7591794050', u'72a50ef0-945d-428a-a336-6447c4a70b99', u'751dfefc-9e18-4f26-bed6-db412cdb258c', u'7587db59-e840-41bc-96f3-b212b7b837a4', u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2', u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', u'7a7d814e-4586-40d5-9750-8896b00a6490', u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', u'7d781e21-6613-41f4-bcea-8b57417e1211', u'7da51499-d7db-49fd-88f6-bcac30e5dd86', u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6', u'85169fe8-8198-492f-b988-b8e24822fd01', u'87839926-8b84-482b-adec-5d99573edd9e', u'8a7eb414-71fa-4f91-a906-d70f95ccf995', u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b', u'8b73e593-8513-4a8e-b051-ce91765b22bd', u'8cbd5615-4206-4e4a-992d-8705b2f2aac2', u'92e9d966-c552-4cf9-b84a-21dda96f3f81', u'95209226-a9a5-4ada-8eed-a672d58ba72c', u'986ce2a5-9912-4069-bfa9-e28f7a17385d', u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c', u'9ff87197-d089-4b2d-8822-b0d6f6e67292', u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957', u'a46d5615-8d9f-4944-9334-2fca2b53c27e', u'a6a50244-366b-4b7c-b80f-04d7ce2d8912', u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd', u'b09e5783-6765-4514-a5a3-86e5e73b729b', u'b1ecfe29-7563-44a9-b814-0faefac5465b', u'baa542e1-492a-4b1b-9f54-e9566a4fe315', u'bb91f9f5-98df-45b1-b8ca-

Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-22 Thread Алексей Максимов
Hello, Benny. I deleted the empty directory and the problem disappeared.Thank you for your help. PS:I don't know how to properly open a bug on https://bugzilla.redhat.com/Don't know which option to choose (https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt).Maybe you can open a bug and attach my logs? 20.11.2017, 13:08, "Benny Zlotnik" <bzlot...@redhat.com>:Yes, you can remove it On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <aleksey.i.maksi...@yandex.ru> wrote:I found an empty directory in the Export domain storage: # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6 total 16drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 .. I can just remove this directory? 19.11.2017, 18:51, "Benny Zlotnik" <bzlot...@redhat.com>:+ ovirt-users On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:Hi, There are a couple of issues here, can you please open a bug so we can track this properly? https://bugzilla.redhat.com/and attach all relevant logs  I went over the logs, are you sure the export domain was formatted properly? Couldn't find it in the engine.logLooking at the logs it seems VMs were found on the export domain (id=3a514c90-e574-4282-b1ee-779602e35f24) 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain] vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9', u'03c9e965-710d-4fc8-be06-583abbd1d7a9', u'07dab4f6-d677-4faa-9875-97bd6d601f49', u'0b94a559-b31a-475d-9599-36e0dbea579a', u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196', u'151a4e75-d67a-4603-8f52-abfb46cb74c1', u'177479f5-2ed8-4b6c-9120-ec067d1a1247', u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', u'1e72be16-f540-4cfd-b0e9-52b66220a98b', u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7', u'25fa96d1-6083-4daa-9755-026e632553d9', u'273ffd05-6f93-4e4a-aac9-149360b5f0b4', u'28188426-ae8b-4999-8e31-4c04fbba4dac', u'28e9d5f2-4312-4d0b-9af9-ec1287bae643', u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334', u'34d1150f-7899-44d9-b8cf-1c917822f624', u'383bbfc6-6841-4476-b108-a1878ed9ce43', u'388e372f-b0e8-408f-b21b-0a5c4a84c457', u'39396196-42eb-4a27-9a57-a3e0dad8a361', u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf', u'44e10588-8047-4734-81b3-6a98c229b637', u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86', u'47a83986-d3b8-4905-b017-090276e967f5', u'49d83471-a312-412e-b791-8ee0badccbb5', u'4b1b9360-a48a-425b-9a2e-19197b167c99', u'4d783e2a-2d81-435a-98c4-f7ed862e166b', u'51976b6e-d93f-477e-a22b-0fa84400ff84', u'56b77077-707c-4949-9ea9-3aca3ea912ec', u'56dc5c41-6caf-435f-8146-6503ea3eaab9', u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d', u'5873f804-b992-4559-aff5-797f97bfebf7', u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8', u'590d1adb-52e4-4d29-af44-c9aa5d328186', u'5c79f970-6e7b-4996-a2ce-1781c28bff79', u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', u'63749307-4486-4702-ade9-4324f5bfe80c', u'6555ac11-7b20-4074-9d71-f86bc10c01f9', u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728', u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', u'679c0445-512c-4988-8903-64c0c08b5fab', u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac7591794050', u'72a50ef0-945d-428a-a336-6447c4a70b99', u'751dfefc-9e18-4f26-bed6-db412cdb258c', u'7587db59-e840-41bc-96f3-b212b7b837a4', u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2', 
u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', u'7a7d814e-4586-40d5-9750-8896b00a6490', u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', u'7d781e21-6613-41f4-bcea-8b57417e1211', u'7da51499-d7db-49fd-88f6-bcac30e5dd86', u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6', u'85169fe8-8198-492f-b988-b8e24822fd01', u'87839926-8b84-482b-adec-5d99573edd9e', u'8a7eb414-71fa-4f91-a906-d70f95ccf995', u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b', u'8b73e593-8513-4a8e-b051-ce91765b22bd', u'8cbd5615-4206-4e4a-992d-8705b2f2aac2', u'92e9d966-c552-4cf9-b84a-21dda96f3f81', u'95209226-a9a5-4ada-8eed-a672d58ba72c', u'986ce2a5-9912-4069-bfa9-e28f7a17385d', u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c', u'9ff87197-d089-4b2d-8822-b0d6f6e67292', u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957', u'a46d5615-8d9f-4944-9334-2fca2b53c27e', u'a6a50244-366b-4b7c-b80f-04d7ce2d8912', u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd', u'b09e5783-6765-4514-a5a3-86e5e73b729b', u'b1ecfe29-7563-44a9-b814-0faefac5465b', u'baa542e1-492a-4b1b-9f54-e9566a4fe315', u'bb91f9f5-98df-45b1-b8ca-9f67a92eef03', u'bd11f11e-be3d-4456-917c-f93ba9a19abe', u'bee3587e-50f4-44bc-a199-35b38a19ffc5', u'bf573d58-1f49-48a9-968d-039e0916c973', u'c01d466a-8ad8-4afe-b383-e365deebc6b8', u'c0be5c12-be26-47b7-ad26-3ec2469f1d3f', u'c31f4f53-c22b-40ff-8408-f36f591f55b5', u'c530e339-99bf-48a2-a63a-cfd2a4dba198', u'c8a610c8-72e5-4217-b4d9-130f85db1db7', u'ca0567e1-d445-4875-9

Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-19 Thread Алексей Максимов
f29f', u'eeb8a3c5-8995-40cb-91c6-3097f7bc8254', u'f2b20f6d-5a6a-4498-b305-db558a22af48', u'f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6', u'f8acf745-eee9-4509-8658-c5e569fbb4ef', u'fb9a7e78-fe58-4f79-a4e9-ee574200207f', u'fdf5f646-b25c-4000-8984-5e90a5b2c034'] (sd:906) f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6 - the offending VM is there. Can you see if there's anything in the directory? The path should be something like /rhev/data-center/mnt/./3a514c90-e574-4282-b1ee-779602e35f24/master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6You can try and remove it manually. On Sun, Nov 19, 2017 at 1:04 PM, Алексей Максимов <aleksey.i.maksi...@yandex.ru> wrote:Hello, Benny Logs attached. 19.11.2017, 13:42, "Benny Zlotnik" <bzlot...@redhat.com>:Hi, Please attach full engine and vdsm logs On Sun, Nov 19, 2017 at 12:26 PM, Алексей Максимов <aleksey.i.maksi...@yandex.ru> wrote: Hello, oVirt guru`s! oVirt Engine Version: 4.1.6.2-1.el7.centos Some time ago the problems started with the oVirt administrative web console.When I try to open the sup-tab "Template import" for Export domain on tab "Storage" I get the error in sub-tub "Alerts" VDSM command GetVmsInfoVDS failed: Missing OVF file from VM: (u'f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6',) All storages on tab "Storage" mark as down in web console.The SPM-role begins frantically be transmitted from one host to another.Screenshot attached. All virtual machines at the same time working without stopBut I can't get a list of VMS stored on Export domain storage. Recently this problem appeared and I deleted Export domain storage.I completely deleted the Export domain storage from oVirt, formatted it, and then attached again to the oVirtThe problem is repeated again. Please help to solve this problem. -- With best wishes,Aleksey.I.Maksimov___Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users   -- С наилучшими пожеланиями,Максимов АлексейEmail: aleksey.i.maksi...@yandex.ru   -- С наилучшими пожеланиями,Максимов АлексейEmail: aleksey.i.maksi...@yandex.ru ___


[ovirt-users] oVirt 4 with custom SSL certificate and SPICE HTML5 browser client -> WebSocket error: Can't connect to websocket on URL: wss://ovirt.engine.fqdn:6100/

2016-08-01 Thread Алексей Максимов
Hello oVirt gurus!

I have successfully replaced the oVirt 4 site SSL certificate according to the
instructions in the "Replacing oVirt SSL Certificate" section of the
"oVirt Administration Guide":
http://www.ovirt.org/documentation/admin-guide/administration-guide/

3 files have been replaced:

/etc/pki/ovirt-engine/certs/apache.cer
/etc/pki/ovirt-engine/keys/apache.key.nopass
/etc/pki/ovirt-engine/apache-ca.pem

Now the oVirt site is using my certificate and everything works fine, but when I
try to use the SPICE HTML5 browser client in Firefox or Chrome,
I see a gray screen and a message under the "Toggle messages output" button:

WebSocket error: Can't connect to websocket on URL:
wss://ovirt.engine.fqdn:6100/eyJ...0=[object Event]


Before replacing the certificates, the SPICE HTML5 browser client worked.
The native SPICE client works fine.
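
One way to compare what the HTTPS site and the websocket proxy actually present is to
look at the certificates they serve (a diagnostic sketch only; replace ovirt.engine.fqdn
with the real name):

    # certificate served by the engine web site (port 443)
    openssl s_client -connect ovirt.engine.fqdn:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
    # certificate served by the websocket proxy (port 6100)
    openssl s_client -connect ovirt.engine.fqdn:6100 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer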

Please tell me how to fix the SPICE HTML5 browser client.