[ovirt-users] Re: Info about Cinderlib integration testing

2019-03-06 Thread Gianluca Cecchi
On Wed, Mar 6, 2019 at 12:49 PM Benny Zlotnik  wrote:

> Also, which driver are you planning on trying?
>
> And there are some known issues we fixed in the upcoming 4.3.2,
> like setting correct permissions to /usr/share/ovirt-engine/cinderlib
> it should be owned by the ovirt user
>
> We'll be happy to receive bug reports
>
> On Wed, Mar 6, 2019, 13:44 Benny Zlotnik  wrote:
>
>> unfortunately we don't have proper packaging for cinderlib at the moment,
>> it needs to be installed via pip,
>> pip install cinderlib
>>
>> also you need to enable the config value ManagedBlockDomainSupported
>>
Thanks all for the info and the links.
I thought from the engine-setup output that it had already been integrated.
I'm going to test something with the upcoming 4.3.2, then.
As for the driver version, I think I can use the latest version
available, or what did you mean exactly?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PRY4OBATW6JSFIB3R6SJDJDVANEMTX2X/


[ovirt-users] Re: oVirt Performance (Horrific)

2019-03-06 Thread Krutika Dhananjay
So from the profile, it appears the XATTROPs and FINODELKs are way higher
than the number of WRITEs:


...
...
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.43     384.83 us      51.00 us    65375.00 us          13632    FXATTROP
      7.54   13535.70 us     225.00 us   210298.00 us           6816       WRITE
     45.99   28508.86 us       7.00 us  2591280.00 us          19751    FINODELK

...
...


We'd noticed something similar in our internal tests and found
inefficiencies in gluster's eager-lock implementation. This was fixed at
https://review.gluster.org/c/glusterfs/+/19503.
I need the two things I asked for in the previous mail to confirm whether
you're hitting the same issue.
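For reference, those can be gathered with ($AFFECTED_VOLUME_NAME is a
placeholder for your volume name):

gluster volume info $AFFECTED_VOLUME_NAME
glusterfs --version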

-Krutika

On Thu, Mar 7, 2019 at 12:24 PM Krutika Dhananjay 
wrote:

> Hi,
>
> Could you share the following pieces of information to begin with -
>
> 1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
> 2. glusterfs version you're running
>
> -Krutika
>
>
> On Sat, Mar 2, 2019 at 3:38 AM Drew R  wrote:
>
>> Saw some people asking for profile info.  So I had started a migration
>> from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep
>> gluster and it's been running a while for a 100GB file thin provisioned
>> with like 28GB actually used.  Here is the profile info.  I started the
>> profiler like 5 minutes ago. The migration had been running for like
>> 30minutes:
>>
>> gluster volume profile gv2 info
>> Brick: 10.30.30.122:/gluster_bricks/gv2/brick
>> -
>> Cumulative Stats:
>>    Block Size:              256b+                 512b+                1024b+
>>  No. of Reads:               1189                     8                    12
>> No. of Writes:                  4                  3245                   883
>>
>>    Block Size:             2048b+                4096b+                8192b+
>>  No. of Reads:               1020                     2
>> No. of Writes:         1087312228                124080
>>
>>    Block Size:            16384b+               32768b+               65536b+
>>  No. of Reads:                  0                     1                    52
>> No. of Writes:               5188                  3617                  5532
>>
>>    Block Size:           131072b+
>>  No. of Reads:              70191
>> No. of Writes:             634192
>>
>>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>>  ---------   -----------   -----------   -----------   ------------        ----
>>       0.00       0.00 us       0.00 us       0.00 us              2      FORGET
>>       0.00       0.00 us       0.00 us       0.00 us            202     RELEASE
>>       0.00       0.00 us       0.00 us       0.00 us           1297  RELEASEDIR
>>       0.00      14.50 us       9.00 us      20.00 us              4     READDIR
>>       0.00      38.00 us       9.00 us     120.00 us              7    GETXATTR
>>       0.00      66.00 us      34.00 us     128.00 us              6        OPEN
>>       0.00     137.25 us      52.00 us     195.00 us              4     SETATTR
>>       0.00      23.19 us      11.00 us      46.00 us             26     INODELK
>>       0.00      41.58 us      18.00 us      79.00 us             24     OPENDIR
>>       0.00     166.70 us      15.00 us     775.00 us             27    READDIRP
>>       0.01     135.29 us      12.00 us   11695.00 us            221      STATFS
>>       0.01     176.54 us      22.00 us   22944.00 us            364       FSTAT
>>       0.02     626.21 us      13.00 us   17308.00 us            168        STAT
>>       0.09     834.84 us       9.00 us   34337.00 us            607      LOOKUP
>>       0.73     146.18 us       6.00 us   52255.00 us          29329    FINODELK
>>       1.03     298.20 us      42.00 us   43711.00 us          20204    FXATTROP
>>      15.38    8903.40 us     213.00 us  213832.00 us          10102       WRITE
>>      39.14   26796.37 us     222.00 us  122696.00 us           8538        READ
>>      43.59   39536.79 us     259.00 us  183630.00 us           6446       FSYNC
>>
>> Duration: 15078 seconds
>>Data Read: 9207377205 bytes
>> Data Written: 86214017762 bytes
>>
>> Interval 2 Stats:
>>    Block Size:              256b+                 512b+                1024b+
>>  No. of Reads:                 17                     0                     0
>> No. of Writes:                  0                    43                     7
>>
>>    Block Size:             2048b+                4096b+                8192b+
>>  No. of Reads:                  0                     7                     0
>> No. of Writes:                 16                  1881                  1010
>>
>>    Block Size:            16384b+               32768b+               65536b+
>>  No. of Reads:                  0                     0                     6
>> No. of Writes:                305                   586                  2359
>>
>>    Block Size:           131072b+
>>  No. of Reads:               7162
>> No. of Writes:

[ovirt-users] Re: oVirt Performance (Horrific)

2019-03-06 Thread Krutika Dhananjay
Hi,

Could you share the following pieces of information to begin with -

1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
2. glusterfs version you're running

-Krutika


On Sat, Mar 2, 2019 at 3:38 AM Drew R  wrote:

> Saw some people asking for profile info.  So I had started a migration
> from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep
> gluster and it's been running a while for a 100GB file thin provisioned
> with like 28GB actually used.  Here is the profile info.  I started the
> profiler like 5 minutes ago. The migration had been running for like
> 30minutes:
>
> gluster volume profile gv2 info
> Brick: 10.30.30.122:/gluster_bricks/gv2/brick
> -
> Cumulative Stats:
>    Block Size:              256b+                 512b+                1024b+
>  No. of Reads:               1189                     8                    12
> No. of Writes:                  4                  3245                   883
>
>    Block Size:             2048b+                4096b+                8192b+
>  No. of Reads:               1020                     2
> No. of Writes:         1087312228                124080
>
>    Block Size:            16384b+               32768b+               65536b+
>  No. of Reads:                  0                     1                    52
> No. of Writes:               5188                  3617                  5532
>
>    Block Size:           131072b+
>  No. of Reads:              70191
> No. of Writes:             634192
>
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>  ---------   -----------   -----------   -----------   ------------        ----
>       0.00       0.00 us       0.00 us       0.00 us              2      FORGET
>       0.00       0.00 us       0.00 us       0.00 us            202     RELEASE
>       0.00       0.00 us       0.00 us       0.00 us           1297  RELEASEDIR
>       0.00      14.50 us       9.00 us      20.00 us              4     READDIR
>       0.00      38.00 us       9.00 us     120.00 us              7    GETXATTR
>       0.00      66.00 us      34.00 us     128.00 us              6        OPEN
>       0.00     137.25 us      52.00 us     195.00 us              4     SETATTR
>       0.00      23.19 us      11.00 us      46.00 us             26     INODELK
>       0.00      41.58 us      18.00 us      79.00 us             24     OPENDIR
>       0.00     166.70 us      15.00 us     775.00 us             27    READDIRP
>       0.01     135.29 us      12.00 us   11695.00 us            221      STATFS
>       0.01     176.54 us      22.00 us   22944.00 us            364       FSTAT
>       0.02     626.21 us      13.00 us   17308.00 us            168        STAT
>       0.09     834.84 us       9.00 us   34337.00 us            607      LOOKUP
>       0.73     146.18 us       6.00 us   52255.00 us          29329    FINODELK
>       1.03     298.20 us      42.00 us   43711.00 us          20204    FXATTROP
>      15.38    8903.40 us     213.00 us  213832.00 us          10102       WRITE
>      39.14   26796.37 us     222.00 us  122696.00 us           8538        READ
>      43.59   39536.79 us     259.00 us  183630.00 us           6446       FSYNC
>
> Duration: 15078 seconds
>Data Read: 9207377205 bytes
> Data Written: 86214017762 bytes
>
> Interval 2 Stats:
>    Block Size:              256b+                 512b+                1024b+
>  No. of Reads:                 17                     0                     0
> No. of Writes:                  0                    43                     7
>
>    Block Size:             2048b+                4096b+                8192b+
>  No. of Reads:                  0                     7                     0
> No. of Writes:                 16                  1881                  1010
>
>    Block Size:            16384b+               32768b+               65536b+
>  No. of Reads:                  0                     0                     6
> No. of Writes:                305                   586                  2359
>
>    Block Size:           131072b+
>  No. of Reads:               7162
> No. of Writes:                610
>
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>  ---------   -----------   -----------   -----------   ------------        ----
>       0.00       0.00 us       0.00 us       0.00 us              6     RELEASE
>       0.00       0.00 us       0.00 us       0.00 us             20  RELEASEDIR
>       0.00      14.50 us       9.00 us      20.00 us              4     READDIR
>       0.00      38.00 us       9.00 us     120.00 us              7    GETXATTR
>       0.00      66.00 us      34.00 us     128.00 us              6        OPEN
>       0.00     137.25 us      52.00 us     195.00 us              4     SETATTR
>       0.00      23.19 us      11.00 us      46.00 us             26     INODELK
>       0.00      40.05 us      18.00 us      79.00 us             20     OPENDIR
>       0.00     180.33 us      16.00 us     775.00 us             21    READDIRP
>       0.01     181.77 us      12.00 us   11695.00 us            149      STATFS
>       0.01     511.23 

[ovirt-users] Re: Two Hosts with Self Hosted Engine - HA / Failover & NFS

2019-03-06 Thread Strahil
I don't think you can achieve that with only 2 nodes, as you can't protect
yourself from split brain.
oVirt supports only GlusterFS replica 3 arbiter 1. If you create your own
GlusterFS setup, you can use glusterd2 with a "remote arbiter" in another
location. That will give you protection from split brain and, as it is remote,
the latency won't kill your write speed.

I recommend getting a VM in your environment (not hosted on either of the 2
hosts) or a small machine with an SSD, and using that as a pure arbiter.
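For example, a minimal sketch of creating such a volume (hostnames, volume
name and brick paths are placeholders):

gluster volume create myvol replica 3 arbiter 1 \
    host1:/gluster_bricks/myvol/brick \
    host2:/gluster_bricks/myvol/brick \
    arbiter-host:/gluster_bricks/myvol/brick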

Using one of the 2 nodes for the arbiter brick is not going to help, as when
that node fails, the cluster on the other node will stop working.
It's the same with hosting NFS on one of the machines.

As far as I know DRBD is not yet fully integrated, but you can give it a
try. Still, using only 2 nodes gives no protection from split brain.

Best Regards,
Strahil Nikolov

On Mar 7, 2019 01:28, sha...@lifestylepanel.com wrote:
>
> Is it possible to have only two physical hosts with NFS and be able to do VM 
> HA / Failover between these hosts? 
>
> Both hosts are identical with RAID Drive Arrays of 8TB. 
>
> If so, can anybody point me to any docs or examples on exactly how the 
> Storage setup is done so that NFS will replicate across the hosts? 
>
> If not what file system should I use to achieve this? 
>
> Thanks 
> Shane
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5H4NDGUBBFYWI65KFNVJVVQO3O5HNPN2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DZ7ZL7QLZQHUCPUTNJ3OYSOXHUM7YZKM/


[ovirt-users] Re: Two Hosts with Self Hosted Engine - HA / Failover & NFS

2019-03-06 Thread Jayme
Shane,

This may be possible, and I'm sure others will chime in, but I do think you'd
save yourself a lot of headaches if you were able to do a three-server
hyperconverged infrastructure (HCI) setup with GlusterFS in either replica 3
or replica 3 arbiter 1. You will get a very good HA solution out of a 3-node
HCI build. It's difficult to avoid split brain with only two nodes.

On Wed, Mar 6, 2019 at 7:29 PM  wrote:

> Is it possible to have only two physical hosts with NFS and be able to do
> VM HA / Failover between these hosts?
>
> Both hosts are identical with RAID Drive Arrays of 8TB.
>
> If so, can anybody point me to any docs or examples on exactly how the
> Storage setup is done so that NFS will replicate across the hosts?
>
> If not what file system should I use to achieve this?
>
> Thanks
> Shane
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5H4NDGUBBFYWI65KFNVJVVQO3O5HNPN2/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CNTUCKKSZFUZCIOIJJRDGGNNMEPEXCCB/


[ovirt-users] Two Hosts with Self Hosted Engine - HA / Failover & NFS

2019-03-06 Thread shanep
Is it possible to have only two physical hosts with NFS and be able to do VM HA 
/ Failover between these hosts?

Both hosts are identical with RAID Drive Arrays of 8TB.

If so, can anybody point me to any docs or examples on exactly how the Storage 
setup is done so that NFS will replicate across the hosts?

If not what file system should I use to achieve this?

Thanks
Shane
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5H4NDGUBBFYWI65KFNVJVVQO3O5HNPN2/


[ovirt-users] Re: Infiniband Usage / Support

2019-03-06 Thread Andrey Rusakov
Hi,

looking for help...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GOIHBWFJPPSJJRCBMTRBGMSZSKDOIXZI/


[ovirt-users] Re: Quota Actual Consumption

2019-03-06 Thread Andrey Rusakov
Hi,

looking for help...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I253SUYFTDRBRNACBAELZJD7UVOMP5F7/


[ovirt-users] Install Hypervisor via PXE boot

2019-03-06 Thread wodel youchi
Hi,

Is it possible to install the oVirt Hypervisor using a PXE boot install?

I did a simple dhcpd+tftp+httpd setup and configured a test with CentOS 7;
it works for both legacy BIOS and EFI boot.

When I tried with the Hypervisor ISO, the image booted but I didn't find any
packages for installation. Digging a little, I found the ks.cfg, which
contains this:

cd /tmp
rpm2cpio /run/install/repo/Packages/redhat-virtualization-host-image-update*|cpio -ivd
squashfs=$(find|grep squashfs|grep -v meta)
ln -s $squashfs /tmp/squashfs
%end
liveimg --url=file:///tmp/squashfs

I modified my grub.cfg like this:
menuentry 'Install oVirtH 4.2' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi  /networkboot/ovhyper4.2/vmlinuz inst.ks=http://192.168.1.50/pub/oh42/ks.cfg inst.stage2=http://192.168.1.50/pub/oh42/
    initrdefi /networkboot/ovhyper4.2/initrd.img
}

This time I got: Fatal error: file:///tmp/squashfs not found.

Regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OZ7SHIJDHGT2DOZBHV3KAZF5ZYRVP6OS/


[ovirt-users] ipxe-roms-qemu question

2019-03-06 Thread laco.h...@gmail.com

Hi all,

In the past we used a customized iPXE (to allow network boot with
10G cards); now we have finally updated our hypervisors to the
latest ipxe-roms-qemu.
Of course the checksum now differs, and during live migration libvirtd
throws this error:


Mar  4 11:37:14 hypevisor-01 libvirtd: 2019-03-04 10:37:14.084+: 
15862: error : qemuMigrationJobCheckStatus:1313 : operation failed: 
migration out job: unexpectedly failed
Mar  4 11:37:15 hypevisor-01 libvirtd: 2019-03-04T10:37:13.941040Z 
qemu-kvm: Length mismatch: :00:03.0/virtio-net-pci.rom: 0x2 in 
!= 0x4: Invalid argument
Mar  4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941090Z 
qemu-kvm: error while loading state for instance 0x0 of device 'ram'
Mar  4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941530Z 
qemu-kvm: load of migration failed: Invalid argument



Is there an easy command we can use to identify which guests are still
using the old .rom and must be power-cycled?
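A possible starting point (an untested sketch) would be to compare the QEMU
process start times with the package install time; guests whose processes
predate the update are presumably still running with the old ROM:

rpm -q --last ipxe-roms-qemu
ps -o lstart,pid,cmd -C qemu-kvm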


Thank you in advance
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YWGV6PZLO7OFMHLE4F6KBHP777TV5MFX/


[ovirt-users] Re: OVN Deployment

2019-03-06 Thread Simone Tiraboschi
On Wed, Mar 6, 2019 at 3:44 PM Bryan Sockel 
wrote:

> Hi,
>
> I am looking to implement OVN within my setup, but could use some
> guidance on the implementation.  As part of the implementation i am
> planning on moving from engine being installed on a physical server to
> running as a VM with in my environment.
>
> I do not need to retain any historical data.  The plan would be to import
> the vm's into a new deployment of oVirt after getting everything setup.
> Currently i have a single host i plan on using for the initial deployment,
> but as vm's are moved over, i will be adding additional hosts.
>
> With the deployment of oVirt 4.3.x the ovn provider is installed
> correctly.  Is there a problem with installing the OVN Controller on the
> Hosted Engine?
>

It's installed by default.
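A quick way to verify on a host (a sketch, assuming the standard package
names):

rpm -q ovirt-provider-ovn-driver
ovs-vsctl get Open_vSwitch . external_ids:ovn-remote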


>
> Has anyone found a good guide for setting up and deploying OVN in an
> oVirt hosted-engine environment?
>
>
> Thank You,
>
> Bryan Sockel
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CPBLH5E6KTUIOHE6I4GQN25S3LD5W5VP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4DMTEA6GY73HIBYDKR235YDXCE6LOHQS/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-06 Thread Simone Tiraboschi
On Wed, Mar 6, 2019 at 3:09 PM Strahil Nikolov 
wrote:

> Hi Simone,
>
> thanks for your reply.
>
> >Are you really sure that the issue was on the ping?
> >on storage errors the broker restart itself and while the broker is
> restarting >the agent cannot ask the broker to trigger the gateway monitor
> (the ping one) and >so that error message.
>
> It seemed so in that moment, but I'm not so sure , right now :)
>
> >Which kind of storage are you using?
> >can you please attach /var/log/ovirt-hosted-engine-ha/broker.log ?
>
> I'm using glusterfs v5 from oVirt 4.3.1 with a FUSE mount.
> Please , have a look in the attached logs.
>

Nothing seems that strange there, apart from that error.
Can you please try with ovirt-ha-agent and ovirt-ha-broker in debug mode?
You have to set level=DEBUG in the [logger_root] section
in /etc/ovirt-hosted-engine-ha/agent-log.conf
and /etc/ovirt-hosted-engine-ha/broker-log.conf and restart the two
services.
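For example (a sketch; keep the other keys in the section as they are):

# in /etc/ovirt-hosted-engine-ha/agent-log.conf and broker-log.conf
[logger_root]
level=DEBUG
...

systemctl restart ovirt-ha-agent ovirt-ha-broker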


>
> Best Regards,
> Strahil Nikolov
>
> В сряда, 6 март 2019 г., 9:53:20 ч. Гринуич+2, Simone Tiraboschi <
> stira...@redhat.com> написа:
>
>
>
>
> On Wed, Mar 6, 2019 at 6:13 AM Strahil  wrote:
>
> Hi guys,
>
> After updating to 4.3.1 I had an issue where the ovirt-ha-broker was
> complaining that it couldn't ping the gateway.
>
>
> Are you really sure that the issue was on the ping?
> on storage errors the broker restart itself and while the broker is
> restarting the agent cannot ask the broker to trigger the gateway monitor
> (the ping one) and so that error message.
>
>
> As I have seen that before - I stopped ovirt-ha-agent, ovirt-ha-broker,
> vdsmd, supervdsmd and sanlock on the nodes and reinitialized the lockspace.
>
> I guess I didn't do it properly, as now I receive:
>
> ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Failed extracting VM OVF from the OVF_STORE volume, falling back to initial
> vm.conf
>
> Any hints how to fix this ? Of course a redeploy is possible, but I prefer
> to recover from that.
>
>
> Which kind of storage are you using?
> can you please attach /var/log/ovirt-hosted-engine-ha/broker.log ?
>
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OU3FKLEPH7AHT2LO2IYZ47RJHRA72C3Z/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BNV7AVUBLOV2UDVBTYN23ZEZ2Q4TJYHV/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S3YOLMXMNXPT4B32Y4CYPNQRQXWA2UO3/


[ovirt-users] OVN Deployment

2019-03-06 Thread Bryan Sockel
Hi,


I am looking to implement OVN within my setup, but could use some guidance 
on the implementation.  As part of the implementation I am planning on 
moving from the engine being installed on a physical server to running it as 
a VM within my environment.


I do not need to retain any historical data.  The plan would be to import 
the VMs into a new deployment of oVirt after getting everything set up.  
Currently I have a single host I plan on using for the initial deployment, 
but as VMs are moved over, I will be adding additional hosts.


With the deployment of oVirt 4.3.x the OVN provider is installed correctly.  
Is there a problem with installing the OVN Controller on the Hosted Engine?


Has anyone found a good guide for setting up and deploying OVN in an oVirt 
hosted-engine environment?



Thank You,

Bryan Sockel
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CPBLH5E6KTUIOHE6I4GQN25S3LD5W5VP/


[ovirt-users] Re: Hosted engine not starting after 4.3 Upgrade - cannot find OVF_STORE

2019-03-06 Thread Shawn Southern
Thank you!

The ownership of the volume file had changed to root:root.  I changed it back 
to vdsm:kvm and the hosted engine started.

For anyone else who runs into this, the file was in:

/rhev/data-center/mnt/glusterSD/ovirtnode-02:_vmstore/79376c46-b80c-4c44-bbb1-80c0714a4b52/images/48ee766b-185d-4928-a046-b048d65af2a6
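A sketch of the check and fix from within that directory (the volume UUID is
the one from the vdsm.log errors below; adjust to your environment):

ls -l 687e9c0d-e988-4f76-89ff-931685acdf76*
chown vdsm:kvm 687e9c0d-e988-4f76-89ff-931685acdf76*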

The errors in vdsm.log that pointed to this:
2019-03-06 08:16:24,470-0500 INFO  (jsonrpc/4) [vdsm.api] START 
getVolumeInfo(sdUUID=u'79376c46-b80c-4c44-bbb1-80c0714a4b52', 
spUUID=u'----', 
imgUUID=u'48ee766b-185d-4928-a046-b048d65af2a6', 
volUUID=u'687e9c0d-e988-4f76-89ff-931685acdf76', options=None) from=::1,37228, 
task_id=8170eec2-b3f7-488c-adda-3f1d9b1d0c57 (api:48)
2019-03-06 08:16:24,472-0500 INFO  (jsonrpc/4) [vdsm.api] FINISH getVolumeInfo 
error=Volume does not exist: (u'687e9c0d-e988-4f76-89ff-931685acdf76',) 
from=::1,37228, task_id=8170eec2-b3f7-488c-adda-3f1d9b1d0c57 (api:52)
2019-03-06 08:16:24,472-0500 INFO  (jsonrpc/4) [storage.TaskManager.Task] 
(Task='8170eec2-b3f7-488c-adda-3f1d9b1d0c57') aborting: Task is aborted: 
"Volume does not exist: (u'687e9c0d-e988-4f76-89ff-931685acdf76',)" - code 201 
(task:1181)
2019-03-06 08:16:24,472-0500 INFO  (jsonrpc/4) [storage.Dispatcher] FINISH 
getVolumeInfo error=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',) (dispatcher:81)

From: Jayme 
Sent: March 6, 2019 6:51 AM
To: Shawn Southern 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Hosted engine not starting after 4.3 Upgrade - 
cannot find OVF_STORE

Not sure if this is the same bug I hit, but check ownership of the VM images. 
There's a bug in the 4.3 upgrade that changes ownership to root and causes VMs 
to not start until you change it back to vdsm.

On Wed, Mar 6, 2019 at 4:57 AM Shawn Southern
<shawn.south...@entegrus.com> wrote:
After running 'hosted-engine --vm-start', the status of the hosted engine VM is:

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirtnode-01
Host ID: 3
Engine status  : {"reason": "bad vm status", "health": 
"bad", "vm": "down_unexpected", "detail": "Down"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 7e3db850
local_conf_timestamp   : 3509
Host timestamp : 3508
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3508 (Tue Mar  5 16:03:30 2019)
host-id=3
score=0
vm_conf_refresh_time=3509 (Tue Mar  5 16:03:31 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Wed Dec 31 20:05:37 1969


The /var/log/libvirt/qemu/HostedEngine.log has no entries since the hosted 
engine VM was rebooted.

/var/log/ovirt-hosted-engine-ha/agent.log:
MainThread::ERROR::2019-03-05 
16:07:31,916::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:31,916::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:31,971::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::ERROR::2019-03-05 
16:07:42,304::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:42,305::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 

[ovirt-users] Re: How to Copy-Paste without QXL ?

2019-03-06 Thread Victor Toso
Hi,

On Tue, Mar 05, 2019 at 05:01:30PM +0100, jeanbaptiste.coup...@nfrance.com 
wrote:
> Hello Victor,
> 
> Thanks for answer.
> 
> Attached, output of remote-viewer, with --debug --spice-debug. During this
> debug session I've tried some time to paste content (via CTRL+SHIFT+V)
> 
> Regards,
> Jean-Baptiste,
> 
> -Message d'origine-
> De : Victor Toso  
> Envoyé : mardi 5 mars 2019 15:39
> À : jeanbapti...@nfrance.com
> Cc : users@ovirt.org
> Objet : Re: [ovirt-users] How to Copy-Paste without QXL ?
> 
> Hi Jean,
> 
> On Tue, Mar 05, 2019 at 02:20:59PM -, jeanbapti...@nfrance.com wrote:
> > Hello Guys,
> > 
> > I am trying to make copy and paste work from a virt-viewer client to a
> > Linux VM (CentOS 6).
> > I have installed spice-vdagent (the service runs). The guest is configured
> > as QXL + Spice. I have also tested:
> > - Enable SPICE clipboard copy and paste
> > - Enable VirtIO serial console
> > 
> > But paste (CTRL+SHIFT+V?) does not work.
> > 
> > Can this function work without the QXL driver?
> 
> Copy and paste, like drag and drop, are features that rely on
> spice-vdagent, not QXL.
> 
> I assume that spice-vdagent and spice-vdagentd are running without errors in
> your CentOS 6 box? If that's the case, you can share some logs from the
> client remote-viewer or virt-viewer --debug --spice-debug.
> 
> Cheers,
> Victor

> [nfrance@astreinte Téléchargements]$ remote-viewer --debug --spice-debug
> (remote-viewer:4108): virt-viewer-DEBUG: Opening display to 
> file:///tmp/mozilla_nfrance0/console.vv
> (remote-viewer:4108): virt-viewer-DEBUG: Guest (null) has unsupported file 
> display type
> (remote-viewer:4108): virt-viewer-DEBUG: Opening display to 
> file:///home/nfrance/T%C3%A9l%C3%A9chargements/console(2).vv
> (remote-viewer:4108): virt-viewer-DEBUG: Guest (null) has unsupported file 
> display type
> (remote-viewer:4108): virt-viewer-DEBUG: Opening display to 
> file:///home/nfrance/T%C3%A9l%C3%A9chargements/console.vv
> (remote-viewer:4108): virt-viewer-DEBUG: Guest (null) has a spice display
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:286 New session (compiled 
> from package spice-gtk 0.33)

What is your client OS? spice-gtk 0.33 was released on Oct 7,
2016... You might benefit from trying something newer (the latest,
0.36, was released on Jan 11, 2019).
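On the client you can check with, e.g. (the package name may differ per
distribution):

remote-viewer --version
rpm -q spice-gtk3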

> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:290 Supported channels: 
> main, display, inputs, cursor, playback, record, smartcard, usbredir, webdav
> (remote-viewer:4108): GSpice-DEBUG: usb-device-manager.c:523 auto-connect 
> filter set to 0x03,-1,-1,-1,0|-1,-1,-1,-1,1
> (remote-viewer:4108): virt-viewer-DEBUG: Start fetching oVirt main entry point
> (remote-viewer:4108): virt-viewer-DEBUG: After open connection callback fd=-1
> (remote-viewer:4108): virt-viewer-DEBUG: Opening connection to display at 
> file:///home/nfrance/T%C3%A9l%C3%A9chargements/console.vv
> (remote-viewer:4108): GSpice-DEBUG: usb-device-manager.c:523 auto-connect 
> filter set to -1,-1,-1,-1,0
> (remote-viewer:4108): virt-viewer-DEBUG: fullscreen display 0: 0
> (remote-viewer:4108): virt-viewer-DEBUG: app is not in full screen
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:1743 no migration in 
> progress
> (remote-viewer:4108): GSpice-DEBUG: spice-channel.c:137 main-1:0: 
> spice_channel_constructed
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:2246 main-1:0: new main 
> channel, switching
> (remote-viewer:4108): GSpice-DEBUG: spice-gtk-session.c:1099 Changing main 
> channel from (nil) to 0x15bb19200
> (remote-viewer:4108): virt-viewer-DEBUG: New spice channel 0x15bb19200 
> SpiceMainChannel 0
> (remote-viewer:4108): virt-viewer-DEBUG: notebook show status 0x15b442230
> (remote-viewer:4108): GSpice-DEBUG: usb-device-manager.c:1008 device added 
> 8087:07dc (0x15b5a57b0)
> (remote-viewer:4108): GSpice-DEBUG: usb-device-manager.c:1008 device added 
> 0b97:7772 (0x15b60c390)
> (remote-viewer:4108): GSpice-DEBUG: usb-device-manager.c:1008 device added 
> 04f2:b3b1 (0x15b2bfdc0)
> (remote-viewer:4108): GSpice-DEBUG: usb-device-manager.c:1008 device added 
> 1199:9063 (0x15b59ec20)
> (remote-viewer:4108): GSpice-DEBUG: spice-channel.c:2610 main-1:0: Open 
> coroutine starting 0x15bb19200
> (remote-viewer:4108): GSpice-DEBUG: spice-channel.c:2451 main-1:0: Started 
> background coroutine 0x15bb18860
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:2192 main-1:0: Using TLS, 
> port 5901
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:2125 open host 
> 172.20.0.7:5901
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:2047 main-1:0: connecting 
> 0x7f8e65c50a60...
> (remote-viewer:4108): GSpice-DEBUG: spice-session.c:2031 main-1:0: connect 
> ready
> (remote-viewer:4108): GSpice-DEBUG: spice-channel.c:2379 main-1:0: Load CA, 
> file: (null), data: 0x15bfe0a00
> (remote-viewer:4108): Spice-DEBUG: ssl_verify.c:400:verify_subject: subjects 
> match
> (remote-viewer:4108): GSpice-DEBUG: spice-channel.c:1302 main-1:0: channel 
> type 1 id 0 num common 



[ovirt-users] Re: Hosted engine not starting after 4.3 Upgrade - cannot find OVF_STORE

2019-03-06 Thread Jayme
Not sure if this is the same bug I hit, but check ownership of the VM
images. There's a bug in the 4.3 upgrade that changes ownership to root and
causes VMs to not start until you change it back to vdsm.

On Wed, Mar 6, 2019 at 4:57 AM Shawn Southern 
wrote:

> After running 'hosted-engine --vm-start', the status of the hosted engine
> VM is:
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : ovirtnode-01
> Host ID: 3
> Engine status  : {"reason": "bad vm status", "health":
> "bad", "vm": "down_unexpected", "detail": "Down"}
> Score  : 0
> stopped: False
> Local maintenance  : False
> crc32  : 7e3db850
> local_conf_timestamp   : 3509
> Host timestamp : 3508
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=3508 (Tue Mar  5 16:03:30 2019)
> host-id=3
> score=0
> vm_conf_refresh_time=3509 (Tue Mar  5 16:03:31 2019)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUnexpectedlyDown
> stopped=False
> timeout=Wed Dec 31 20:05:37 1969
>
>
> The /var/log/libvirt/qemu/HostedEngine.log has no entries since the hosted
> engine VM was rebooted.
>
> /var/log/ovirt-hosted-engine-ha/agent.log:
> MainThread::ERROR::2019-03-05
> 16:07:31,916::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
> {'storagepoolID': '----',
> 'storagedomainID': '79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID':
> u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID':
> u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
> (code=201, message=Volume does not exist:
> (u'687e9c0d-e988-4f76-89ff-931685acdf76',))
> MainThread::ERROR::2019-03-05
> 16:07:31,916::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
> Please ensure you already added your first data domain for regular VMs
> MainThread::INFO::2019-03-05
> 16:07:31,971::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
> Current state EngineUnexpectedlyDown (score: 0)
> MainThread::ERROR::2019-03-05
> 16:07:42,304::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
> {'storagepoolID': '----',
> 'storagedomainID': '79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID':
> u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID':
> u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
> (code=201, message=Volume does not exist:
> (u'687e9c0d-e988-4f76-89ff-931685acdf76',))
> MainThread::ERROR::2019-03-05
> 16:07:42,305::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
> Please ensure you already added your first data domain for regular VMs
> MainThread::INFO::2019-03-05
> 16:07:42,365::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
> Current state EngineUnexpectedlyDown (score: 0)
> MainThread::ERROR::2019-03-05
> 16:07:51,791::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
> {'storagepoolID': '----',
> 'storagedomainID': '79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID':
> u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID':
> u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
> (code=201, message=Volume does not exist:
> (u'687e9c0d-e988-4f76-89ff-931685acdf76',))
> MainThread::ERROR::2019-03-05
> 16:07:51,792::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
> Please ensure you already added your first data domain for regular VMs
> MainThread::INFO::2019-03-05
> 16:07:51,850::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
> Current state EngineUnexpectedlyDown (score: 0)
> MainThread::INFO::2019-03-05
> 16:08:01,868::states::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine down, local host does not have best score
> MainThread::ERROR::2019-03-05
> 

[ovirt-users] Re: Info about Cinderlib integration testing

2019-03-06 Thread Benny Zlotnik
Also, which driver are you planning on trying?

And there are some known issues we fixed in the upcoming 4.3.2,
like setting correct permissions on /usr/share/ovirt-engine/cinderlib;
it should be owned by the ovirt user.
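e.g., as a workaround until 4.3.2 (a sketch, assuming the stock ovirt user
and group):

chown -R ovirt:ovirt /usr/share/ovirt-engine/cinderlib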

We'll be happy to receive bug reports

On Wed, Mar 6, 2019, 13:44 Benny Zlotnik  wrote:

> unfortunately we don't have proper packaging for cinderlib at the moment,
> it needs to be installed via pip,
> pip install cinderlib
>
> also you need to enable the config value ManagedBlockDomainSupported
>
> On Wed, Mar 6, 2019, 13:24 Gianluca Cecchi 
> wrote:
>
>> Hello,
>> I have updated an environment from 4.2.8 to 4.3.1.
>> During setup I selected:
>>
>>   --== PRODUCT OPTIONS ==--
>>
>>   Set up Cinderlib integration
>>   (Currently in tech preview. For more info -
>>
>> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>> )
>>   (Yes, No) [No]: Yes
>> . . .
>>   --== DATABASE CONFIGURATION ==--
>>
>>   Where is the ovirt cinderlib database located? (Local, Remote)
>> [Local]:
>>   Setup can configure the local postgresql server automatically
>> for the CinderLib to run. This may conflict with existing applications.
>>   Would you like Setup to automatically configure postgresql and
>> create CinderLib database, or prefer to perform that manually? (Automatic,
>> Manual) [Automatic]:
>> . . .
>>   --== CONFIGURATION PREVIEW ==--
>> . . .
>>   CinderLib database secured connection   : False
>>   CinderLib database user name: ovirt_cinderlib
>>   CinderLib database name : ovirt_cinderlib
>>   CinderLib database host : localhost
>>   CinderLib database port : 5432
>>   CinderLib database host name validation : False
>>   Set up Cinderlib integration: True
>>   Configure local CinderLib database  : True
>>
>> at the end I upgraded cluster and dc compatibility version to 4.3.
>> When I go in the UI and add storage domain, in "Domain Function" field I
>> don't see the "ManagedBlockStorage" between the available ones
>>
>> I see that the RDBMS has been created, but empty... is it the expected
>> result?
>>
>> -bash-4.2$ psql ovirt_cinderlib
>> psql (9.2.24, server 10.6)
>> WARNING: psql version 9.2, server version 10.0.
>>  Some psql features might not work.
>> Type "help" for help.
>>
>> ovirt_cinderlib=# \d
>> No relations found.
>> ovirt_cinderlib=#
>>
>> Any hint about enabling/testing cinderlib integration with 4.3.1?
>> Current packages on engine:
>>
>> [root@ovmgr1 ~]# rpm -qa | grep -i cinder
>> openstack-java-cinder-model-3.2.5-1.el7.noarch
>> ovirt-engine-setup-plugin-cinderlib-4.3.1.1-1.el7.noarch
>> openstack-java-cinder-client-3.2.5-1.el7.noarch
>> [root@ovmgr1 ~]#
>>
>> and "yum search cinder" command doesn't give any more packages than those
>> already installed
>> Thanks in advance,
>> Gianluca
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZNDOD3FNAO3JII3UL7H4APLJVPWXSVQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MBM5VG3O5HX3UENQ7T6664LNOJBA2DJM/


[ovirt-users] Re: Info about Cinderlib integration testing

2019-03-06 Thread Eyal Shenitzky
Hey Gianluca,

The process of adding a cinderlib DB is similar to the engine DB.

The cinderlib DB will be used by the cinderlib process, so you will not see
anything there until you use this feature.

The "Managed block storage" domain function will be available only if you
configured the cinderlib DB in the engine setup and manually configured
the engine to support cinderlib.

You can use the following links to learn more about the cinderlib
integration:

   - Managed block storage feature page -
   
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
   - Managed block storage deep dive session -
   https://www.youtube.com/watch?v=F3JttBkjsX8

Note that there are more manual steps that need to be done in order to use
it at this stage.

On Wed, Mar 6, 2019 at 1:24 PM Gianluca Cecchi 
wrote:

> Hello,
> I have updated an environment from 4.2.8 to 4.3.1.
> During setup I selected:
>
>   --== PRODUCT OPTIONS ==--
>
>   Set up Cinderlib integration
>   (Currently in tech preview. For more info -
>
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> )
>   (Yes, No) [No]: Yes
> . . .
>   --== DATABASE CONFIGURATION ==--
>
>   Where is the ovirt cinderlib database located? (Local, Remote)
> [Local]:
>   Setup can configure the local postgresql server automatically
> for the CinderLib to run. This may conflict with existing applications.
>   Would you like Setup to automatically configure postgresql and
> create CinderLib database, or prefer to perform that manually? (Automatic,
> Manual) [Automatic]:
> . . .
>   --== CONFIGURATION PREVIEW ==--
> . . .
>   CinderLib database secured connection   : False
>   CinderLib database user name: ovirt_cinderlib
>   CinderLib database name : ovirt_cinderlib
>   CinderLib database host : localhost
>   CinderLib database port : 5432
>   CinderLib database host name validation : False
>   Set up Cinderlib integration: True
>   Configure local CinderLib database  : True
>
> at the end I upgraded cluster and dc compatibility version to 4.3.
> When I go in the UI and add storage domain, in "Domain Function" field I
> don't see the "ManagedBlockStorage" between the available ones
>
> I see that the RDBMS has been created, but empty... is it the expected
> result?
>
> -bash-4.2$ psql ovirt_cinderlib
> psql (9.2.24, server 10.6)
> WARNING: psql version 9.2, server version 10.0.
>  Some psql features might not work.
> Type "help" for help.
>
> ovirt_cinderlib=# \d
> No relations found.
> ovirt_cinderlib=#
>
> Any hint about enabling/testing cinderlib integration with 4.3.1?
> Current packages on engine:
>
> [root@ovmgr1 ~]# rpm -qa | grep -i cinder
> openstack-java-cinder-model-3.2.5-1.el7.noarch
> ovirt-engine-setup-plugin-cinderlib-4.3.1.1-1.el7.noarch
> openstack-java-cinder-client-3.2.5-1.el7.noarch
> [root@ovmgr1 ~]#
>
> and "yum search cinder" command doesn't give any more packages than those
> already installed
> Thanks in advance,
> Gianluca
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZNDOD3FNAO3JII3UL7H4APLJVPWXSVQ/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6FHYAYDVC7NMYWQBRS7B4LE7Y5YO5VE/


[ovirt-users] Re: Info about Cinderlib integration testing

2019-03-06 Thread Benny Zlotnik
Unfortunately we don't have proper packaging for cinderlib at the moment;
it needs to be installed via pip:
pip install cinderlib

You also need to enable the config value ManagedBlockDomainSupported.
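e.g., a sketch using engine-config on the engine machine (the exact key
handling may differ per version):

engine-config -s ManagedBlockDomainSupported=true
systemctl restart ovirt-engine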

On Wed, Mar 6, 2019, 13:24 Gianluca Cecchi 
wrote:

> Hello,
> I have updated an environment from 4.2.8 to 4.3.1.
> During setup I selected:
>
>   --== PRODUCT OPTIONS ==--
>
>   Set up Cinderlib integration
>   (Currently in tech preview. For more info -
>
> https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
> )
>   (Yes, No) [No]: Yes
> . . .
>   --== DATABASE CONFIGURATION ==--
>
>   Where is the ovirt cinderlib database located? (Local, Remote)
> [Local]:
>   Setup can configure the local postgresql server automatically
> for the CinderLib to run. This may conflict with existing applications.
>   Would you like Setup to automatically configure postgresql and
> create CinderLib database, or prefer to perform that manually? (Automatic,
> Manual) [Automatic]:
> . . .
>   --== CONFIGURATION PREVIEW ==--
> . . .
>   CinderLib database secured connection   : False
>   CinderLib database user name: ovirt_cinderlib
>   CinderLib database name : ovirt_cinderlib
>   CinderLib database host : localhost
>   CinderLib database port : 5432
>   CinderLib database host name validation : False
>   Set up Cinderlib integration: True
>   Configure local CinderLib database  : True
>
> at the end I upgraded cluster and dc compatibility version to 4.3.
> When I go in the UI and add storage domain, in "Domain Function" field I
> don't see the "ManagedBlockStorage" between the available ones
>
> I see that the RDBMS has been created, but empty... is it the expected
> result?
>
> -bash-4.2$ psql ovirt_cinderlib
> psql (9.2.24, server 10.6)
> WARNING: psql version 9.2, server version 10.0.
>  Some psql features might not work.
> Type "help" for help.
>
> ovirt_cinderlib=# \d
> No relations found.
> ovirt_cinderlib=#
>
> Any hint about enabling/testing cinderlib integration with 4.3.1?
> Current packages on engine:
>
> [root@ovmgr1 ~]# rpm -qa | grep -i cinder
> openstack-java-cinder-model-3.2.5-1.el7.noarch
> ovirt-engine-setup-plugin-cinderlib-4.3.1.1-1.el7.noarch
> openstack-java-cinder-client-3.2.5-1.el7.noarch
> [root@ovmgr1 ~]#
>
> and "yum search cinder" command doesn't give any more packages than those
> already installed
> Thanks in advance,
> Gianluca
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZNDOD3FNAO3JII3UL7H4APLJVPWXSVQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGKEVQSEASA4OFP7GAZ6CT34FTWXEGYV/


[ovirt-users] Re: alertMessage, [Warning! Low confirmed free space on gluster volume M2Stick1]

2019-03-06 Thread Robert O'Kane

forgot the Gluster Versions:

Hypervisors:

glusterfs-3.12.15-1.el7.x86_64
glusterfs-api-3.12.15-1.el7.x86_64
glusterfs-cli-3.12.15-1.el7.x86_64
glusterfs-client-xlators-3.12.15-1.el7.x86_64
glusterfs-events-3.12.15-1.el7.x86_64
glusterfs-fuse-3.12.15-1.el7.x86_64
glusterfs-geo-replication-3.12.15-1.el7.x86_64
glusterfs-gnfs-3.12.15-1.el7.x86_64
glusterfs-libs-3.12.15-1.el7.x86_64
glusterfs-rdma-3.12.15-1.el7.x86_64
glusterfs-server-3.12.15-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
python2-gluster-3.12.15-1.el7.x86_64
vdsm-gluster-4.20.46-1.el7.x86_64

engine:

glusterfs-3.12.15-1.el7.x86_64
glusterfs-api-3.12.15-1.el7.x86_64
glusterfs-cli-3.12.15-1.el7.x86_64
glusterfs-client-xlators-3.12.15-1.el7.x86_64
glusterfs-libs-3.12.15-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64







On 03/06/2019 09:22 AM, Robert O'Kane wrote:

Hello,

With my first Gluster storage domain made from within oVirt, I am getting some 
annoying warnings.

On the Hypervisor(s) engine.log :

2019-03-05 13:07:45,281+01 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = Hausesel3, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='d7db584e-03e3-4a37-abc7-73012a9f5ba8', volumeName='M2Stick1'}), log id: 74482de6
2019-03-05 13:07:46,814+01 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler10) [6d40c5d0] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[27f8ed93-c857-41ae-af16-e1af9f0b62d4=GLUSTER]', sharedLocks=''}'
2019-03-05 13:07:46,823+01 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@868edb00, log id: 74482de6



I find no other correlated messages in the Gluster logs.  Where else should I 
look?

It seems to work very well; it's just these warnings that worry me, due to the
"Failed to acquire lock" messages.
This is one of 3 Gluster storage domains. The other 2 were hand-made, have
existed since oVirt 3.5, and show no such messages.


1x standalone engine
6x Hypervisors  in 2 clusters.

One other special condition:

I am in the process of moving my VMs to a second cluster (same data center)
with a different Gluster network defined (new 10Gb cards).
All hypervisors see all networks, but since there is only one SPM, the SPM is
never a "gluster peer" of all domains due to the "only one Gluster network per
cluster" definition. Is this the problem/situation?
There is another hand-made domain in the new cluster, but it does not have any
problems. The only difference between the two is that the new domain was
created via the oVirt web interface.

Cheers,

Robert O'Kane



engine:

libgovirt-0.3.4-1.el7.x86_64
libvirt-bash-completion-4.5.0-10.el7_6.4.x86_64
libvirt-client-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-interface-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-network-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-secret-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
libvirt-glib-1.0.0-1.el7.x86_64
libvirt-libs-4.5.0-10.el7_6.4.x86_64
libvirt-python-4.5.0-1.el7.x86_64
ovirt-ansible-cluster-upgrade-1.1.10-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.6-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.2-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.10-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.3-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.0-1.el7.noarch
ovirt-ansible-v2v-conversion-host-1.9.0-1.el7.noarch
ovirt-ansible-vm-infra-1.1.12-1.el7.noarch
ovirt-cockpit-sso-0.0.4-1.el7.noarch
ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch
ovirt-engine-backend-4.2.8.2-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dashboard-1.2.4-1.el7.noarch
ovirt-engine-dbscripts-4.2.8.2-1.el7.noarch

[ovirt-users] Info about Cinderlib integration testing

2019-03-06 Thread Gianluca Cecchi
Hello,
I have updated an environment from 4.2.8 to 4.3.1.
During setup I selected:

  --== PRODUCT OPTIONS ==--

  Set up Cinderlib integration
  (Currently in tech preview. For more info -

https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
)
  (Yes, No) [No]: Yes
. . .
  --== DATABASE CONFIGURATION ==--

  Where is the ovirt cinderlib database located? (Local, Remote)
[Local]:
  Setup can configure the local postgresql server automatically for
the CinderLib to run. This may conflict with existing applications.
  Would you like Setup to automatically configure postgresql and
create CinderLib database, or prefer to perform that manually? (Automatic,
Manual) [Automatic]:
. . .
  --== CONFIGURATION PREVIEW ==--
. . .
  CinderLib database secured connection   : False
  CinderLib database user name: ovirt_cinderlib
  CinderLib database name : ovirt_cinderlib
  CinderLib database host : localhost
  CinderLib database port : 5432
  CinderLib database host name validation : False
  Set up Cinderlib integration: True
  Configure local CinderLib database  : True

at the end I upgraded cluster and dc compatibility version to 4.3.
When I go into the UI and add a storage domain, I don't see
"ManagedBlockStorage" among the available options in the "Domain Function"
field.

I see that the RDBMS has been created, but empty... is this the expected
result?

-bash-4.2$ psql ovirt_cinderlib
psql (9.2.24, server 10.6)
WARNING: psql version 9.2, server version 10.0.
 Some psql features might not work.
Type "help" for help.

ovirt_cinderlib=# \d
No relations found.
ovirt_cinderlib=#

Any hint about enabling/testing cinderlib integration with 4.3.1?
Current packages on engine:

[root@ovmgr1 ~]# rpm -qa | grep -i cinder
openstack-java-cinder-model-3.2.5-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.1.1-1.el7.noarch
openstack-java-cinder-client-3.2.5-1.el7.noarch
[root@ovmgr1 ~]#

and "yum search cinder" command doesn't give any more packages than those
already installed
Thanks in advance,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZNDOD3FNAO3JII3UL7H4APLJVPWXSVQ/


[ovirt-users] Re: virt-viewer centos7.6

2019-03-06 Thread Michal Skrivanek


> On 5 Mar 2019, at 18:35, p.stanifo...@leedsbeckett.ac.uk wrote:
> 
> Hello, is there a newer version than 5 of virt-viewer for CentOS?

That’s a CentOS question really.
But apparently not.
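You can confirm what the distribution currently offers with plain yum:

yum list available virt-viewer --showduplicates   # every build the enabled repos provide
rpm -q virt-viewer                                # what is installed right now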
 
Any specific issue you’re trying to fix?

Thanks,
michal

> 
> Thanks,
>   Paul S.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7IUB6RBX6R5ESIZTRDFKAYULBR23IFMK/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TJODTVC6SE23ITA6YHCMATCNU35QXKTR/


[ovirt-users] Install oVirt using the Cockpit wizard

2019-03-06 Thread max zhang
I want to try to install oVirt 4.3.1, following the oVirt manual, but I
encountered this error:
[root@localhost ~]# sudo subscription-manager repos 
--enable="rhel-7-server-rpms"
System certificates corrupted. Please reregister.

And I have no idea how to deal with it.
Thanks.
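P.S. From what I read, the usual recovery is to clean the local certificate
data and register again (a sketch, assuming a RHEL 7 host with RHSM
credentials at hand; on plain CentOS the rhel-7-server-rpms repo does not
exist and this step can be skipped):

sudo subscription-manager clean            # remove corrupted local identity/entitlement data
sudo subscription-manager register         # re-register the system
sudo subscription-manager attach --auto    # re-attach a subscription
sudo subscription-manager repos --enable="rhel-7-server-rpms"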
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTZPUHFD733Z6W32IZAKYCBYWFVBA53H/


[ovirt-users] Hosted engine not starting after 4.3 Upgrade - cannot find OVF_STORE

2019-03-06 Thread Shawn Southern
After running 'hosted-engine --vm-start', the status of the hosted engine VM is:

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirtnode-01
Host ID: 3
Engine status  : {"reason": "bad vm status", "health": 
"bad", "vm": "down_unexpected", "detail": "Down"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 7e3db850
local_conf_timestamp   : 3509
Host timestamp : 3508
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3508 (Tue Mar  5 16:03:30 2019)
host-id=3
score=0
vm_conf_refresh_time=3509 (Tue Mar  5 16:03:31 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Wed Dec 31 20:05:37 1969


The /var/log/libvirt/qemu/HostedEngine.log has no entries since the hosted 
engine VM was rebooted.
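One sanity check, assuming a file-based (NFS/Gluster) storage domain mounted
under /rhev/data-center/mnt, is whether the image directory the agent keeps
asking about (UUIDs from the agent.log below) exists on the host at all:

ls -l /rhev/data-center/mnt/*/79376c46-b80c-4c44-bbb1-80c0714a4b52/images/48ee766b-185d-4928-a046-b048d65af2a6/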

/var/log/ovirt-hosted-engine-ha/agent.log:
MainThread::ERROR::2019-03-05 
16:07:31,916::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:31,916::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:31,971::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::ERROR::2019-03-05 
16:07:42,304::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:42,305::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:42,365::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::ERROR::2019-03-05 
16:07:51,791::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
MainThread::ERROR::2019-03-05 
16:07:51,792::config_ovf::84::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
Please ensure you already added your first data domain for regular VMs
MainThread::INFO::2019-03-05 
16:07:51,850::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2019-03-05 
16:08:01,868::states::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine down, local host does not have best score
MainThread::ERROR::2019-03-05 
16:08:02,196::config_ovf::42::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store)
 Failed scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'79376c46-b80c-4c44-bbb1-80c0714a4b52', 'volumeID': 
u'687e9c0d-e988-4f76-89ff-931685acdf76', 'imageID': 
u'48ee766b-185d-4928-a046-b048d65af2a6'} failed:
(code=201, message=Volume does not exist: 
(u'687e9c0d-e988-4f76-89ff-931685acdf76',))
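The same failing lookup can be reproduced by hand with vdsm-client on the
host (the storagepoolID is shown abbreviated in the log, so it is left as a
placeholder below):

vdsm-client Volume getInfo \
    storagepoolID=<SP_UUID_FROM_LOG> \
    storagedomainID=79376c46-b80c-4c44-bbb1-80c0714a4b52 \
    imageID=48ee766b-185d-4928-a046-b048d65af2a6 \
    volumeID=687e9c0d-e988-4f76-89ff-931685acdf76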

[ovirt-users] alertMessage, [Warning! Low confirmed free space on gluster volume M2Stick1]

2019-03-06 Thread Robert O'Kane

Hello,

With my first Gluster storage domain created from within oVirt, I am getting
some annoying warnings.

On the Hypervisor(s) engine.log :

2019-03-05 13:07:45,281+01 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] START, 
GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = Hausesel3, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='d7db584e-03e3-4a37-abc7-73012a9f5ba8', 
volumeName='M2Stick1'}), log id: 74482de6
2019-03-05 13:07:46,814+01 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler10) [6d40c5d0] Failed to acquire lock and wait lock 
'EngineLock:{exclusiveLocks='[27f8ed93-c857-41ae-af16-e1af9f0b62d4=GLUSTER]', sharedLocks=''}'
2019-03-05 13:07:46,823+01 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] 
FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@868edb00, log id: 
74482de6



I find no other correlated messages in the Gluster logs.  Where else should I 
look?
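One place I can check, assuming the engine derives the low-space warning from
per-brick capacity, is what gluster itself reports for the volume (the brick
mount path below is a guess):

gluster volume status M2Stick1 detail   # free/total space per brick
df -h /gluster_bricks/*                 # cross-check at the filesystem level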

It seems to work very well; the warnings only worry me because of the
"Failed to acquire lock" messages.
This is one of 3 Gluster storage domains. The other 2 were hand made, have
existed since oVirt 3.5, and show no such messages.


1x standalone engine
6x Hypervisors  in 2 clusters.

One other special condition:

I am in the process of moving my VMs to a second cluster (same data center)
with a different Gluster network defined (new 10Gb cards).
All hypervisors see all networks, but since there is only one SPM, the SPM is
never a gluster peer of all domains, because of the "only one Gluster network
per cluster" definition. Is this the problem here?
There is another hand-made domain in the new cluster that does not have any
problems. The only difference between the two is that the new domain was
created via the oVirt web interface.
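A quick way to check the peer situation from the SPM host itself (plain
gluster CLI, nothing oVirt-specific assumed):

gluster peer status            # is this host a peer of the volume's pool?
gluster volume info M2Stick1   # which hosts/bricks the volume expects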

Cheers,

Robert O'Kane



engine:

libgovirt-0.3.4-1.el7.x86_64
libvirt-bash-completion-4.5.0-10.el7_6.4.x86_64
libvirt-client-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-interface-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-network-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-secret-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
libvirt-glib-1.0.0-1.el7.x86_64
libvirt-libs-4.5.0-10.el7_6.4.x86_64
libvirt-python-4.5.0-1.el7.x86_64
ovirt-ansible-cluster-upgrade-1.1.10-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.6-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.2-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.10-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.3-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.0-1.el7.noarch
ovirt-ansible-v2v-conversion-host-1.9.0-1.el7.noarch
ovirt-ansible-vm-infra-1.1.12-1.el7.noarch
ovirt-cockpit-sso-0.0.4-1.el7.noarch
ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch
ovirt-engine-backend-4.2.8.2-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dashboard-1.2.4-1.el7.noarch
ovirt-engine-dbscripts-4.2.8.2-1.el7.noarch
ovirt-engine-dwh-4.2.4.3-1.el7.noarch
ovirt-engine-dwh-setup-4.2.4.3-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-1.3.8-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.8-1.el7.noarch
ovirt-engine-extensions-api-impl-4.2.8.2-1.el7.noarch
ovirt-engine-lib-4.2.8.2-1.el7.noarch
ovirt-engine-metrics-1.1.8.1-1.el7.noarch
ovirt-engine-restapi-4.2.8.2-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-engine-setup-4.2.8.2-1.el7.noarch
ovirt-engine-setup-base-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.2.8.2-1.el7.noarch
ovirt-engine-tools-4.2.8.2-1.el7.noarch
ovirt-engine-tools-backup-4.2.8.2-1.el7.noarch