[ovirt-users] Re: ovirt-guest-agent for CentOS 8

2020-03-23 Thread Brandon Johnson
Is there a plan to implement the SSO functionality from ovirt-ga elsewhere, or to 
break out that function for Linux guests using a desktop interface such as GNOME?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FOV3DS4AUPHZHJ4A4TODVTVVSBQHGMO6/


[ovirt-users] upload image using python api

2020-03-23 Thread David David
Hi,

I can't upload a disk image with this script:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

This error message appears when I try to upload an image:

# python upload_disk.py --engine-url https://alias-e.localdomain \
  --username admin@internal --disk-format raw --sd-name iscsi-test-7 -c \
  ca.pem /home/linux1.raw
Checking image...
Image format: raw
Disk format: raw
Disk content type: data
Disk provisioned size: 42949672960
Disk initial size: 42949672960
Disk name: linux1.raw
Connecting...
Password:
Creating disk...
Creating transfer session...
Uploading image...
Traceback (most recent call last):
  File "upload_disk.py", line 288, in <module>
    with ui.ProgressBar() as pb:
TypeError: __init__() takes at least 2 arguments (1 given)

Software versions:
ovirt-engine 4.3.8.2-1
python-ovirt-engine-sdk4-4.3.2-2
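For reference, one way to rule out a script/SDK version mismatch: the script above came from the SDK's master branch while the installed SDK is 4.3.2, so the example may call a newer ProgressBar API than the installed packages provide. The tag name below is an assumption; list the available tags first.

```shell
# Fetch the SDK examples that match the installed python-ovirt-engine-sdk4
git clone https://github.com/oVirt/ovirt-engine-sdk.git
cd ovirt-engine-sdk
git tag --list '4.3*'        # find the tag matching the installed SDK
git checkout 4.3.2           # hypothetical tag name; pick one from the list
python sdk/examples/upload_disk.py --help
```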


[ovirt-users] Re: storage allocation of template

2020-03-23 Thread yam yam
Oh, sorry. It seems that a hard link is generated.


[ovirt-users] Re: Speed Issues

2020-03-23 Thread Strahil Nikolov
On March 24, 2020 12:08:08 AM GMT+02:00, Jayme  wrote:
>[quoted message trimmed]

Hey Chris,

You have some options:

1. To speed up reads in HCI, you can use the option "cluster.choose-local: on".
2. You can adjust the server and client event-threads.
3. You can use NFS-Ganesha (which connects to all servers via libgfapi) as an 
NFS server. In that case you have to use some clustering layer such as CTDB or 
Pacemaker. Note: disable cluster.choose-local if you use this one.
4. You can try the built-in Gluster NFS, although it's deprecated (NFS-Ganesha 
is fully supported).
5. Create a gluster profile during the tests. I have seen numerous improperly 
selected tests, so test with a real-world workload; synthetic tests are not 
good.
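For point 5, a minimal sketch using Gluster's built-in profiler (the volume name "data" is a placeholder):

```shell
gluster volume profile data start     # enable profiling on volume "data"
# ... run the real-world workload here ...
gluster volume profile data info      # per-brick latency and FOP statistics
gluster volume profile data stop
```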

Best Regards,
Strahil Nikolov


[ovirt-users] storage allocation of template

2020-03-23 Thread yam yam
Hello,

I want to know whether a base disk image is physically copied whenever a VM 
is created from a template.

I've just created a VM based on a template with one disk and checked the VM's 
virtual disk images. There were (1) a base image, apparently copied from the 
template, and (2) a new delta image.

Here, image (1) was identical to the template's, and I know that image is 
used in read-only mode. So I am wondering why the base (disk) image is 
inefficiently copied instead of directly referencing the template's image as 
a backing file.
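For what it's worth, whether a disk actually references the template's image as a backing file can be inspected directly on the storage with qemu-img (the path below is a placeholder for the volume's location on the storage domain):

```shell
# Prints the image's details and, if present, its whole backing-file chain
qemu-img info --backing-chain /rhev/data-center/mnt/<sd>/images/<img-id>/<vol-id>
```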


[ovirt-users] safe to have perf and dstat on ovirt node?

2020-03-23 Thread Gianluca Cecchi
Hello,
I would need to have on ovirt node these three packages:

- ipmitool
- perf
- dstat

Comparing latest RHVH (4.3.8) and latest ovirt-node-ng (4.3.9) I see that:
- ipmitool is present on both
- perf is present as an installable package in RHVH and not in ovirt-node-ng
- dstat is not available in any of them

Why the difference regarding perf?
Is it safe to install perf and dstat on ovirt-node-ng, taking them from
CentOS updates?
This is for a lab and the packages would be used to compute some metrics
about performance of the node.
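To be concrete, what I have in mind is simply the following (assuming the package names are unchanged in the CentOS repos):

```shell
yum install perf dstat   # on the ovirt-node-ng host
rpm -q perf dstat        # verify both packages got installed
```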

Thanks,
Gianluca


[ovirt-users] Re: Speed Issues

2020-03-23 Thread Jayme
I too struggle with speed issues in hci. Latency is a big problem with
writes for me especially when dealing with small file workloads. How are
you testing exactly?

Look into enabling libgfapi and try some comparisons with that. People have
been saying it's much faster, but it's not a default option and has a few
bugs. Red Hat devs do not appear to be giving its implementation any
priority, unfortunately.
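For anyone who wants to experiment: as far as I know, libgfapi is toggled engine-side roughly like this (cluster level 4.3 shown as an example; verify against the docs before enabling in production):

```shell
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine
# VMs started after this should use gfapi instead of the FUSE mount
```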

I’ve been considering switching to nfs storage because I’m seeing much
better performance in testing with it. I have some nvme drives on the way
and am curious how they would perform in hci but I’m thinking the issue is
not a disk bottleneck (that appears very obvious in your case as well)



On Mon, Mar 23, 2020 at 6:44 PM Christian Reiss 
wrote:

> [quoted message trimmed]


[ovirt-users] Speed Issues

2020-03-23 Thread Christian Reiss

Hey folks,

A gluster-related question: I have SSDs in a RAID that can do 2 GB/s writes 
and reads (actually above that, but meh) in a 3-way HCI cluster connected 
over 10 Gbit, yet things are pretty slow inside Gluster.


I have these settings:

Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.shd-max-threads: 8
features.shard: on
features.shard-block-size: 64MB
server.event-threads: 8
user.cifs: off
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.eager-lock: enable
performance.low-prio-threads: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: true
client.event-threads: 16
performance.strict-o-direct: on
network.remote-dio: enable
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
cluster.readdir-optimize: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
cluster.entry-self-heal: on
cluster.data-self-heal-algorithm: full
features.uss: enable
features.show-snapshot-directory: on
features.barrier: disable
auto-delete: enable
snap-activate-on-create: enable

Writing inside /gluster_bricks yields those 2 GB/s writes, and reading does 
the same.


Reading inside the /rhev/data-center/mnt/glusterSD/ dir goes down to 
366 MB/s, while writes plummet to 200 MB/s.


Summed up: writing into the SSD RAID in the lvm/xfs gluster brick 
directory is fast; writing into the mounted gluster dir is horribly slow.


The above can be seen and repeated on all 3 servers. The network can do 
full 10gbit (tested with, among others: rsync, iperf3).
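For a like-for-like comparison between brick and mount, I ran something along these lines (the parameters are only an illustrative baseline, and the mount path is a placeholder):

```shell
# Sequential direct-I/O write test; run once with --directory pointing at the
# brick path and once at the gluster mount, same parameters both times
fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 \
    --ioengine=libaio --directory=/rhev/data-center/mnt/glusterSD/<host>:_<vol>
```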


Anyone have an idea what's missing or going on here?

Thanks folks,
as always stay safe and healthy!

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss


[ovirt-users] Re: Moving Hosted Engine

2020-03-23 Thread Joseph Goldman

Hi Anton,

 I believe so - but I believe you can do a hosted-engine deploy with 
the backup file specified so it auto-restores with it.


 I need to try it on a test system; I haven't done much DR testing yet. 
My builds have been rushed into production due to time constraints, and 
although I back up and ship off-site (so I have it all), I actually haven't 
done a full breakdown and re-restore yet :/


 I'll be interested to hear how it goes, if you don't mind.
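A rough sketch of that flow, as I understand it (file names are placeholders):

```shell
# On the current engine VM: take a full engine backup
engine-backup --mode=backup --scope=all \
  --file=engine-backup.tar.gz --log=engine-backup.log

# On a host: redeploy the hosted engine onto the new storage domain,
# restoring from the backup in one step
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz
```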

Thanks,
Joe

On 2020-03-23 3:32 PM, Anton Louw wrote:



Hi Joseph,

Thanks for the reply. So in short, the process will then be to 1) take 
a backup of the Hosted Engine, 2) do a clean-up of the Hosted Engine, 
and then lastly redeploy on one of the nodes using hosted-engine 
deploy? After that, will I do a restore?


Thanks


*Anton Louw*
*Cloud Engineer: Storage and Virtualization* at *Vox*

*T:* 087 805  | *D:* 087 805 1572
*M:* N/A
*E:* anton.l...@voxtelecom.co.za 
*A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
www.vox.co.za 





*From:*Joseph Goldman 
*Sent:* 20 March 2020 14:23
*To:* users@ovirt.org
*Subject:* [ovirt-users] Re: Moving Hosted Engine

It is my understanding that yes, taking a full backup and then redeploying
into a new storage domain using the backup is the correct way - I don't
think you'd need a new physical server for it though, unless you mean you
are running on bare metal.

On 2020-03-20 11:04 PM, anton.l...@voxtelecom.co.za wrote:

> Hi All,
>
> I am in serious need of some help. We currently have our Hosted Engine 
> running on a Storage Domain, but that Storage Domain is connected to a 
> SAN that needs to be decommissioned. What would I need to do in order to 
> get my Hosted Engine running on a different Storage Domain?
>
> Will I need to build a new CentOS server, download and install the oVirt 
> setup, and then restore the backup that I took of my original Hosted 
> Engine, using "engine-backup --mode=backup"?
>
> I tried looking around at backup solutions that would restore the whole 
> VM to a different Storage Domain, but I can't seem to find anything.
>
> Any advice would be greatly appreciated.

[ovirt-users] Re: Can't connect storage domain when using VLAN

2020-03-23 Thread briandumont
Strahil,

Thank you for the note.  I have decided to use iSCSI for my storage 
configuration, so I don't need help with this specific issue any longer.

However, I am having a very similar issue with my migration network.

I will post that separately, as I have some information.

Appreciate the help,
Brian


[ovirt-users] bonding and SR-IOV

2020-03-23 Thread Nathanaël Blanchet

Hello,

I successfully attached a VF to a VM on a single-PF NIC, following 
https://access.redhat.com/articles/3215851.


I want the VM with the VF to be migratable, but the NIC hotplug/replug 
takes too long. So I tested mode 1 bonding (active-backup) with a virtio 
NIC, following this:


https://www.ovirt.org/develop/release-management/features/network/liveMigrationSupportForSRIOV.html

On a single PF, the resilience tests with migration are correct, with 1 
or 2 seconds of downtime.


But now I want to do the same on a typical host with LACP PFs:

Considering vlan13, I must bond a virtio NIC on the brv13/brv13 profile 
with the SR-IOV/brv13 profile as the primary interface.


Everything works when the primary interface is the brv13/brv13 profile 
(it works as a traditional bridge config), but not when changing to the 
SR-IOV/brv13 profile.


I know in this case that the SR-IOV profile is connected to an LACP 
interface, but does it mean that I have to configure the VF vNIC in the 
VM to be part of the LACP (bond1) as a single member, and in that case 
do an active-backup bond0 with two members, bond1 and the brv13 NIC?
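For reference, the in-guest active-backup bond can be sketched with nmcli (the interface names below are placeholders for the VF and the virtio NIC):

```shell
# Bond preferring the SR-IOV VF; the virtio NIC is the failover leg
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,primary=ens2f0v0,miimon=100"
nmcli con add type ethernet ifname ens2f0v0 master bond0
nmcli con add type ethernet ifname eth1 master bond0
```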


Hope to be clear enough.

Thanks for your help.


--
Nathanaël Blanchet

Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
