[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread Parth Dhanjal
Hey!
I think this should work -

gluster volume add-brick test-volume replica 3 newServer1:/brick1 newServer2:/brick1
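
For completeness, here is a minimal, hedged sketch of the surrounding steps, assuming
the bricks on the two new hosts are already prepared and mounted as described later
in this thread (hostnames and brick paths are placeholders):

gluster peer probe newServer1
gluster peer probe newServer2
gluster volume add-brick test-volume replica 3 newServer1:/brick1 newServer2:/brick1
# trigger a full sync of the existing data onto the new bricks, then watch it drain
gluster volume heal test-volume full
gluster volume heal test-volume info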


On Wed, Oct 28, 2020 at 4:07 AM marcel d'heureuse 
wrote:

> Hi Strahil,
>
> where can I find some documentation for the conversion to replica? Does this
> also work for the engine brick?
>
> br
> marcel
>
> Am 27. Oktober 2020 16:40:59 MEZ schrieb Strahil Nikolov via Users <
> users@ovirt.org>:
>>
>> Hello Gobinda,
>>
>> I know that gluster can easily convert a distributed volume to a replica volume,
>> so why is it not possible to first convert to replica and then add the nodes
>> as HCI?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> В вторник, 27 октомври 2020 г., 08:20:56 Гринуич+2, Gobinda Das 
>>  написа:
>>
>>
>>
>>
>>
>> Hello Marcel,
>>  As a note, you can't expand your single gluster node cluster to 3 nodes;
>> you can only add compute nodes.
>> If you want to add compute nodes, then you do not need any glusterfs packages
>> installed. The ovirt packages alone are enough to add a host as a compute
>> node.
>>
>> On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal  wrote:
>>
>>> Hey Marcel!
>>>
>>> You have to install the required glusterfs packages and then deploy the 
>>> gluster setup on the 2 new hosts. After creating the required LVs, VGs, 
>>> thinpools, mount points and bricks, you'll have to expand the 
>>> gluster-cluster from the current host using add-brick functionality from 
>>> gluster. After this you can add the 2 new hosts to your existing 
>>> ovirt-engine.
>>>
>>> On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse  
>>> wrote:
>>>
 Hi,

 I got a problem with my oVirt installation. Normally we deliver oVirt as a
 single node installation, and we told our service guys that if the internal
 client wants more redundancy, we need two more servers added to the single
 node installation. I thought that no one would order two new servers.

 Now I have the problem of getting the system running.

 The first issue is that this environment has no internet access, so I can't
 install software via yum updates.
 The oVirt installation is running from an oVirt Node 4.3.9 boot USB stick. All
 three servers have the same software installed.
 On the single node I have installed the hosted engine package (1.1 GB) to
 deploy the self-hosted engine without internet. That works.

 Gluster, oVirt and the self-hosted engine are running on server 01.

 What should I do first?

 Deploy GlusterFS first and then add the two new hosts to the single node
 installation?
 Or should I deploy a new oVirt system on the two new hosts and later add
 the cleaned host to the new oVirt system?

 I have not found any items in this mailing list that give me an idea of
 what I should do now.


 Br
 Marcel


[ovirt-users] Re: vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy

2020-10-27 Thread lifuqi...@sunyainfo.com
Hi, Strahil,
Thank you for your reply.
I've tried setting the host to maintenance first, and then the host reboots
immediately. What does vdsm do when a host is set to maintenance? Thank you
Best Regards
Mark Lee

From: Strahil Nikolov via Users
Date: 2020-10-27 23:44
To: users; lifuqi...@sunyainfo.com
Subject: [ovirt-users] Re: vdsm with NFS storage reboot or shutdown more than 
15 minutes. with error failed to unmount 
/rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
When you set a host to maintenance from the oVirt API/UI, one of the tasks is to
unmount any shared storage (including the NFS you got). Then rebooting should
work like a charm.
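
For reference, the same can be scripted against the engine's REST API - a minimal,
hedged sketch (the engine URL, credentials and host id are placeholders):

# put the host into maintenance; the engine then unmounts its shared storage
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -X POST -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/hosts/<host-id>/deactivate'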
 
Why did you reboot without putting the node in maintenance ?
 
P.S.: Do not confuse rebooting with fencing - the latter kills the node 
ungracefully in order to safely start HA VMs on another node.
 
Best Regards,
Strahil Nikolov
 
 
 
 
 
 
В вторник, 27 октомври 2020 г., 10:27:01 Гринуич+2, lifuqi...@sunyainfo.com 
 написа: 
 
 
 
 
 
 
 
Hi everyone:
Description of problem:
 
When I exec the "reboot" or "shutdown -h 0" cmd on a vdsm server, the vdsm server
takes more than 30 minutes to reboot or shut down. The screen shows '[FAILED] Failed
unmounting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'.
Other messages that may be useful: [] watchdog: watchdog0: watchdog did not stop!
[]systemd-shutdown[5594]: Failed to unmount 
/rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
[]systemd-shutdown[5595]: Failed to remount '/' read-only: Device or resource 
busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
dracut Warning: Killing all remaining processes
dracut Warning: Killing all remaining processes
 
Version-Release number of selected component (if applicable):
Software Version:4.2.8.2-1.el7
OS: CentOS Linux release 7.5.1804 (Core)
How reproducible:
100%
Steps to Reproduce:
1. My test environment is one oVirt engine (172.17.81.17) with 4 vdsm servers.
Exec the "reboot" cmd on one of the vdsm servers (172.17.99.105); the server
takes more than 30 minutes to reboot.
ovirt-engine: 172.17.81.17/16
vdsm: 172.17.99.105/16
nfs server: 172.17.81.14/16
Actual results:
As above, the server takes more than 30 minutes to reboot.
Expected results:
The server reboots in a short time.
What I have done:
I captured packets on the NFS server while vdsm was rebooting, and I found vdsm
kept sending NFS packets to the NFS server in a loop. Below are some log entries
from when I rebooted vdsm 172.17.99.105 at 2020-10-26 22:12:34. Some conclusions:
1. vdsm.log says: 2020-10-26 22:12:34,461+0800 ERROR (check/loop)
[storage.Monitor] Error checking path
/rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/metadata
2. sanlock.log says: 2020-10-26 22:13:05 1454 [3301]: s1 delta_renew read
timeout 10 sec offset 0
/rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/ids
3. There are no other messages relevant to this issue. The logs are in the
attachment. I would appreciate it if anyone can help me. Thank you.
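
A hedged way to see what still holds the mount busy before shutting down (the mount
path is taken from this report; sanlock and ioprocess are likely candidates):

fuser -vm /rhev/data-center/mnt/172.18.81.14:_home_nfs_data
lsof +f -- /rhev/data-center/mnt/172.18.81.14:_home_nfs_data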


[ovirt-users] Re: Migrated disk from NFS to iSCSI - Unable to Boot

2020-10-27 Thread Wesley Stewart
I think I figured this one out.  Looks like the disk format changed when
moving to block storage.  The VM template could not cope with this change.
I deleted the VM, and attached the disk to a new VM, and everything worked
fine.
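
For anyone hitting the same thing: a quick, hedged way to confirm what format a disk
ended up in after a move (the path is a placeholder; on block storage the image is
an LV rather than a file):

qemu-img info /path/to/disk/image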


On Tue, Oct 27, 2020 at 8:41 AM Wesley Stewart  wrote:

> This is a new one.
>
> I migrated from an NFS share to an iSCSI share on a small single node
> oVirt system (Currently running 4.3.10).
>
> After migrating a disk (Virtual Machine -> Disk -> Move), I was unable to
> boot to it.  The console tells me "No bootable device".  This is a Centos7
> guest.
>
> I booted into a CentOS7 ISO and tried a few things...
>
> fdisk -l shows me a 40GB disk (/dev/sda).
> fsck -f tells me "bad magic number in superblock"
>
> lvdisplay and pvdisplay show nothing.  Even if I can't boot to the drive I
> would love to recover a couple of documents from here if possible.  Does
> anyone have any suggestions?  I am running out of ideas.
>


[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread marcel d'heureuse
Hi Strahil,

where can I find some documentation for the conversion to replica? Does this also
work for the engine brick?

br
marcel

Am 27. Oktober 2020 16:40:59 MEZ schrieb Strahil Nikolov via Users 
:
>Hello Gobinda,
>
>I know that gluster can easily convert a distributed volume to a replica
>volume, so why is it not possible to first convert to replica and then
>add the nodes as HCI?
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>
>В вторник, 27 октомври 2020 г., 08:20:56 Гринуич+2, Gobinda Das
> написа: 
>
>
>
>
>
>Hello Marcel,
> As a note, you can't expand your single gluster node cluster to 3
>nodes; you can only add compute nodes.
>If you want to add compute nodes, then you do not need any glusterfs
>packages installed. The ovirt packages alone are enough to add a host as
>a compute node.
>
>On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal 
>wrote:
>> Hey Marcel!
>> 
>> You have to install the required glusterfs packages and then deploy
>the gluster setup on the 2 new hosts. After creating the required LVs,
>VGs, thinpools, mount points and bricks, you'll have to expand the
>gluster-cluster from the current host using add-brick functionality
>from gluster. After this you can add the 2 new hosts to your existing
>ovirt-engine.
>> 
>> On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse
> wrote:
>>> Hi,
>>> 
>>> I got a problem with my oVirt installation. Normally we deliver oVirt as a
>>> single node installation, and we told our service guys that if the internal
>>> client wants more redundancy, we need two more servers added to the single
>>> node installation. I thought that no one would order two new servers.
>>> 
>>> Now I have the problem of getting the system running.
>>> 
>>> The first issue is that this environment has no internet access, so I can't
>>> install software via yum updates.
>>> The oVirt installation is running from an oVirt Node 4.3.9 boot USB stick. All
>>> three servers have the same software installed.
>>> On the single node I have installed the hosted engine package (1.1 GB) to
>>> deploy the self-hosted engine without internet. That works.
>>> 
>>> Gluster, oVirt and the self-hosted engine are running on server 01.
>>> 
>>> What should I do first?
>>> 
>>> Deploy GlusterFS first and then add the two new hosts to the single node
>>> installation?
>>> Or should I deploy a new oVirt system on the two new hosts and later add
>>> the cleaned host to the new oVirt system?
>>> 
>>> I have not found any items in this mailing list that give me an idea of
>>> what I should do now.
>>> 
>>> 
>>> Br
>>> Marcel
>-- 
>
>
>Thanks,
>Gobinda


[ovirt-users] oVirt 4.4 Upgrade issue with pki for libvirt-vnc

2020-10-27 Thread lee . hanel
Greetings,

After reverting the ovirt_disk module to the ovirt_disk_28, I'm able to get
past that step; however, now I'm running into a new issue.

When it tries to start the VM after moving it from local storage to the hosted
storage, I get the following errors:

2020-10-27 21:42:17,334+ ERROR (vm/9562a74e) [virt.vm] 
(vmId='9562a74e-2e6c-433b-ac0a-75a2acc7398d') The vm start process failed 
(vm:872)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 802, in 
_startUnderlyingVm
self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2615, in _run
dom.createWithFlags(flags)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in 
wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1265, in 
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', 
dom=self)
libvirt.libvirtError: internal error: process exited while connecting to 
monitor: 2020-10-27T21:42:16.133517Z qemu-kvm: -object 
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
 Cannot load certificate '/etc/pki/vdsm/libvirt-vnc/server-cert.pem' & key 
'/etc/pki/vdsm/libvirt-vnc/server-key.pem': Error while reading file.
2020-10-27 21:42:17,335+ INFO  (vm/9562a74e) [virt.vm] 
(vmId='9562a74e-2e6c-433b-ac0a-75a2acc7398d') Changed state to Down: internal 
error: process exited while connecting to monitor: 2020-10-27T21:42:16.133517Z 
qemu-kvm: -object 
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
 Cannot load certificate '/etc/pki/vdsm/libvirt-vnc/server-cert.pem' & key 
'/etc/pki/vdsm/libvirt-vnc/server-key.pem': Error while reading file. (code=1) 
(vm:1636)

The permissions on the files appear to be correct.

https://bugzilla.redhat.com/show_bug.cgi?id=1634742 appears similar, but I took
the added precaution of completely removing the vdsm packages, /etc/pki/vdsm,
and /etc/libvirt.

Anyone have any additional troubleshooting steps?
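
A few hedged checks that might narrow down the "Error while reading file" (assuming
qemu reads those files directly, so ownership and SELinux context matter as much as
the mode bits):

ls -lZ /etc/pki/vdsm/libvirt-vnc/
openssl x509 -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem -noout -dates -subject
openssl rsa -in /etc/pki/vdsm/libvirt-vnc/server-key.pem -check -noout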


[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread suporte
No, it is not a replica gluster; it is just one brick, one volume, one single
storage server.

This is what I get: 

# gluster volume set data group virt 
volume set: failed: Cannot set cluster.shd-max-threads for a non-replicate 
volume. 

Since I don't have a replica volume, I cannot optimize it for oVirt?
So is it better to use iSCSI or NFS for a single storage server?

Thanks 

José 
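
If it helps, one hedged workaround is to skip the 'virt' group (which includes
replicate-only options such as cluster.shd-max-threads) and set the applicable
options individually on the non-replicate volume - a sketch using the option list
quoted later in this thread:

gluster volume set data performance.quick-read off
gluster volume set data performance.read-ahead off
gluster volume set data performance.io-cache off
gluster volume set data performance.low-prio-threads 32
gluster volume set data network.remote-dio enable
gluster volume set data features.shard on
gluster volume set data client.event-threads 4
gluster volume set data server.event-threads 4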


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 19:20:21 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

I was left with the impression that you got 2 bricks (2 raids) on a single 
server. 

As I understand it, you cannot get a 'replica 3 arbiter 1' or a 'replica 3'
volume, right?


When you create your volume, you need to use the UI's dropdown "Optimize for
virt" or the command 'gluster volume set  group virt' in order to
implement the optimal settings for virtualization. Of course, some tunings can
be made, like how many I/O threads to use and some caching - but even the
defaults are OK.

Have you checked anything from the list I sent you?


Best Regards, 
Strahil Nikolov 






В вторник, 27 октомври 2020 г., 19:31:15 Гринуич+2, supo...@logicworks.pt 
 написа: 





Well, each volume has only one brick. Each volume has one dedicated server.
Each server's storage has the disks in RAID 10.
I made the gluster volume like this:
gluster volume create data transport tcp gfs.domain.com:/home/brick1
And then created a data domain in oVirt: gfs.domain.com:/data

is this correct? 

Thanks 

José 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 17:09:26 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Why don't you use the devices of the 2 bricks in a single striped LV or raid0 
md device ? 

Distributed volumes spread the files among the bricks and the performance is 
limited to the brick's device speed. 

Best Regards, 
Strahil Nikolov 






В вторник, 27 октомври 2020 г., 18:09:47 Гринуич+2,  
написа: 





Yes, the VM's disks are on the distributed volume 

They were created like this: 
gluster volume create data transport tcp gfs2.domain.com:/home/brick1 


 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 15:53:57 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Are the VM's disks located on the distributed Volume ? 

Best Regards, 
Strahil Nikolov 






В вторник, 27 октомври 2020 г., 17:21:49 Гринуич+2, supo...@logicworks.pt 
 написа: 





The engine version is 4.3.9.4-1.el7 
I have 2 simple glusterfs, a volume with one brick gfs1 with an old version of 
gluster (3.7.6) and a volume with one brick gfs2 with version 6.8 
2 hosts, node3 is up to date, node 2 is not up to date: 
RHEL - 7 – 7.1908.0.el7.centos 
CentOS Linux 7 (Core) 
3.10.0 – 1062.18.1.el7.x86_64 
2.12.0 – 33.1.el7_7.4 
libvirt-4.5.0-23.el7_7.6 
vdsm-4.30.43-1.el7 
0.14.0 – 7.el7 
glusterfs-6.8-1.el7 
librbd1-10.2.5-4.el7 
openvswitch-2.11.0-4.el7 

gfs1 has 3 gigabit NICs in mode 802.3ad
gfs2 has 4 gigabit NICs in mode 802.3ad

A VM performs better on node3/gfs1:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s

than on node2/gfs2:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s

For a Windows machine it is the same. It helps if you have the Windows drivers up to
date.
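
If it helps, gluster's built-in profiler can show where the time goes - a minimal,
hedged sketch using the volume name from this thread:

gluster volume profile data start
# run the dd test (or the real workload), then inspect per-brick latency stats
gluster volume profile data info
gluster volume profile data stop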

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

It seems that your e-mail went to the spam. 

I would start by isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM on that storage domain and verify performance ? 
- Can you create/migrate same OS-type VM and check performance? 
- What about running a VM with different version of Windows or even better -> 
Linux ? 


Also I would check the systems from bottom up. 
Check the following: 
- Are you using Latest Firmware/OS updates for the Hosts and Engine ? 
- What is your cmdline : 
cat /proc/cmdline 
- Tuned profile 
- Are you using a HW controller for the Gluster bricks ? Health of controller 
and disks ? 
- Is the FS aligned properly to the HW controller (stripe unit and stripe width 
)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID
 

- FS Fragmentation ? Is your FS full ? 

- Gluster volume status ? Client count ? (for example: gluster volume status 
all client-list) 
- Gluster version, cluster.op-version and cluster.max-op-version ?

[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread Strahil Nikolov via Users
I was left with the impression that you got 2 bricks (2 raids) on a single 
server.

As I understand it, you cannot get a 'replica 3 arbiter 1' or a 'replica 3'
volume, right?


When you create your volume, you need to use the UI's dropdown "Optimize for
virt" or the command 'gluster volume set  group virt' in order to
implement the optimal settings for virtualization. Of course, some tunings can
be made, like how many I/O threads to use and some caching - but even the
defaults are OK.

Have you checked anything from the list I sent you?


Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 19:31:15 Гринуич+2, supo...@logicworks.pt 
 написа: 





Well, each volume has only one brick. Each volume has one dedicated server.
Each server's storage has the disks in RAID 10.
I made the gluster volume like this:
gluster volume create data transport tcp gfs.domain.com:/home/brick1
And then created a data domain in oVirt: gfs.domain.com:/data

is this correct?

Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 17:09:26
Assunto: Re: [ovirt-users] Improve glusterfs performance

Why don't you use the devices of the 2 bricks in a single striped LV or raid0 
md device ?

Distributed volumes spread the files among the bricks and the performance is 
limited to the brick's device speed.

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 18:09:47 Гринуич+2,  
написа: 





Yes, the VM's disks are on the distributed volume

They were created like this: 
gluster volume create data transport tcp gfs2.domain.com:/home/brick1



De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 15:53:57
Assunto: Re: [ovirt-users] Improve glusterfs performance

Are the VM's disks located on the distributed Volume ?

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 17:21:49 Гринуич+2, supo...@logicworks.pt 
 написа: 





The engine version is 4.3.9.4-1.el7
I have 2 simple glusterfs, a volume with one brick gfs1 with an old version of 
gluster (3.7.6) and a volume with one brick gfs2 with version 6.8
2 hosts, node3 is up to date, node 2 is not up to date:
RHEL - 7 – 7.1908.0.el7.centos
CentOS Linux 7 (Core)
3.10.0 – 1062.18.1.el7.x86_64
2.12.0 – 33.1.el7_7.4
libvirt-4.5.0-23.el7_7.6
vdsm-4.30.43-1.el7
0.14.0 – 7.el7
glusterfs-6.8-1.el7
librbd1-10.2.5-4.el7
openvswitch-2.11.0-4.el7

gfs1 has 3 gigabit NICs in mode 802.3ad
gfs2 has 4 gigabit NICs in mode 802.3ad

A VM performs better on node3/gfs1:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s

than on node2/gfs2:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s

For a Windows machine it is the same. It helps if you have the Windows drivers up to
date.


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12
Assunto: Re: [ovirt-users] Improve glusterfs performance

It seems that your e-mail went to the spam.

I would start by isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM on that storage domain and verify performance ?
- Can you create/migrate same OS-type VM and check performance?
- What about running a VM with different version of Windows or even better -> 
Linux ?


Also I would check the systems from bottom up.
Check the following:
- Are you using Latest Firmware/OS updates for the Hosts and Engine ?
- What is your cmdline :
cat /proc/cmdline
- Tuned profile
- Are you using a HW controller for the Gluster bricks ? Health of controller 
and disks ?
- Is the FS aligned properly to the HW controller (stripe unit and stripe width 
)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID

- FS Fragmentation ? Is your FS full ?

- Gluster volume status ? Client count ? (for example: gluster volume status 
all client-list)
- Gluster version, cluster.op-version and cluster.max-op-version ?

- If your network is slower than the bricks' backend (disks faster than 
network) , you can set cluster.choose-local to 'yes'

- Any errors and warnings in the gluster logs ?


Best Regards,
Strahil Nikolov





В четвъртък, 22 октомври 2020 г., 13:59:04 Гринуич+3,  
написа: 





Hello,

For example, a Windows machine runs too slow; usually the disk is always at 100%.
Are these the 'group virt' settings?:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
features.shard=on
user.cifs=off
client.event-threads=4
server.event-threads=4

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
The default value is the one found via "gluster volume set help | grep -A 3
cluster.min-free-disk". In my case it is 10%.
The best option is to use a single brick on either a software or hardware RAID of
disks equal in performance and size. The sizes should be the same, or gluster will
write proportionally based on the storage ratio between the bricks (when using a
distributed volume instead of a RAID).

For example, if hdd1 is 500G and hdd2 is 1T, gluster will try to put double the
data (thus double the load) on hdd2. Yet spinning disks have nearly the same
IOPS ratings, and then hdd2 will slow down your volume.


For example, I got 3 x 500GB SATA disks in a raid0 per (data) node, so I can
store slow and not-so-important data.

Most optimal is to use a HW raid10 with 10 disks (2-3TB each disk), but this
leads to increased costs and reduced storage availability. If you can afford
SSDs/NVMes - go for it; you will need some tuning, but it will perform way
better.

Your distributed volume is still OK, but as in this case, performance cannot be
guaranteed, and you will eventually suffer from various issues that could be
avoided with proper design of the oVirt DC.

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 19:20:31 Гринуич+2, supo...@logicworks.pt 
 написа: 





So, in oVirt, if I want a single storage without high availability, what is the
best solution?
Can I use gluster replica 1 for a single storage?

By the way, cluster.min-free-disk is set to 1%, not the 10% default value. And I
still cannot remove the disk.
I think I will destroy the volume and configure it again.

Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 17:11:14
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

Nope,

officially oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1
(which is actually a single-brick distributed) volumes.
If you have issues related to the Gluster volume - like this case - the
community support will be "best effort".

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 18:24:07 Гринуич+2,  
написа: 





But there are also distributed volume types, right?
Replicated is for when you want high availability, which is not the case here.

Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 15:49:22
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

You have exactly 90% used space.
The Gluster's default protection value is exactly 10%:


Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files

I would recommend temporarily dropping that value till you do the cleanup.

gluster volume set data cluster.min-free-disk 5%

To restore the default values of any option:
gluster volume reset data cluster.min-free-disk


P.S.: Keep in mind that if you have sparse disks for your VMs , they will be 
unpaused and start eating the last space you got left ... so be quick :)

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :)

Best Regards,
Strahil Nikolov








В вторник, 27 октомври 2020 г., 11:02:10 Гринуич+2, supo...@logicworks.pt 
 написа: 





This is a simple installation with one storage and 3 hosts. One volume with 2
bricks; the second brick was added to get more space to try to remove the disk, but
without success.

]# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/md127   50G  2.5G   48G   5% /
devtmpfs    7.7G 0  7.7G   0% /dev
tmpfs   7.7G   16K  7.7G   1% /dev/shm
tmpfs   7.7G   98M  7.6G   2% /run
tmpfs   7.7G 0  7.7G   0% /sys/fs/cgroup
/dev/md126 1016M  194M  822M  20% /boot
/dev/md125  411G  407G  4.1G 100% /home
gfs2.domain.com:/data 461G  414G   48G  90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data
tmpfs   1.6G 0  1.6G   0% /run/user/0


# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs2.domain.com:/home/brick1    49154 0  Y   8908
Brick gfs2.domain.com:/brickx 49155 0  Y   8931

Task Status of Volume data
--
There are no active volume tasks

# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 

[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread suporte
Well, each volume has only one brick. Each volume has one dedicated server.
Each server's storage has the disks in RAID 10.
I made the gluster volume like this:
gluster volume create data transport tcp gfs.domain.com:/home/brick1
And then created a data domain in oVirt: gfs.domain.com:/data

is this correct? 

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 17:09:26 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Why don't you use the devices of the 2 bricks in a single striped LV or raid0 
md device ? 

Distributed volumes spread the files among the bricks and the performance is 
limited to the brick's device speed. 

Best Regards, 
Strahil Nikolov 






В вторник, 27 октомври 2020 г., 18:09:47 Гринуич+2,  
написа: 





Yes, the VM's disks are on the distributed volume 

They were created like this: 
gluster volume create data transport tcp gfs2.domain.com:/home/brick1 


 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 15:53:57 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Are the VM's disks located on the distributed Volume ? 

Best Regards, 
Strahil Nikolov 






В вторник, 27 октомври 2020 г., 17:21:49 Гринуич+2, supo...@logicworks.pt 
 написа: 





The engine version is 4.3.9.4-1.el7 
I have 2 simple glusterfs, a volume with one brick gfs1 with an old version of 
gluster (3.7.6) and a volume with one brick gfs2 with version 6.8 
2 hosts, node3 is up to date, node 2 is not up to date: 
RHEL - 7 – 7.1908.0.el7.centos 
CentOS Linux 7 (Core) 
3.10.0 – 1062.18.1.el7.x86_64 
2.12.0 – 33.1.el7_7.4 
libvirt-4.5.0-23.el7_7.6 
vdsm-4.30.43-1.el7 
0.14.0 – 7.el7 
glusterfs-6.8-1.el7 
librbd1-10.2.5-4.el7 
openvswitch-2.11.0-4.el7 

gfs1 has 3 gigabit NICs in mode 802.3ad
gfs2 has 4 gigabit NICs in mode 802.3ad

A VM performs better on node3/gfs1:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s

than on node2/gfs2:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s

For a Windows machine it is the same. It helps if you have the Windows drivers up to
date.

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

It seems that your e-mail went to the spam. 

I would start by isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM on that storage domain and verify performance ? 
- Can you create/migrate same OS-type VM and check performance? 
- What about running a VM with different version of Windows or even better -> 
Linux ? 


Also I would check the systems from bottom up. 
Check the following: 
- Are you using Latest Firmware/OS updates for the Hosts and Engine ? 
- What is your cmdline : 
cat /proc/cmdline 
- Tuned profile 
- Are you using a HW controller for the Gluster bricks ? Health of controller 
and disks ? 
- Is the FS aligned properly to the HW controller (stripe unit and stripe width 
)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID
 

- FS Fragmentation ? Is your FS full ? 

- Gluster volume status ? Client count ? (for example: gluster volume status 
all client-list) 
- Gluster version, cluster.op-version and cluster.max-op-version ? 

- If your network is slower than the bricks' backend (disks faster than 
network) , you can set cluster.choose-local to 'yes' 

- Any errors and warnings in the gluster logs ? 


Best Regards, 
Strahil Nikolov 





В четвъртък, 22 октомври 2020 г., 13:59:04 Гринуич+3,  
написа: 





Hello, 

For example, a Windows machine runs too slow; usually the disk is always at 100%.
Are these the 'group virt' settings?:
performance.quick-read=off 
performance.read-ahead=off 
performance.io-cache=off 
performance.low-prio-threads=32 
network.remote-dio=enable 
features.shard=on 
user.cifs=off 
client.event-threads=4 
server.event-threads=4 
performance.client-io-threads=on 


 
De: "Strahil Nikolov"  
Para: "users" , supo...@logicworks.pt 
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Usually, oVirt uses the 'virt' group of settings. 

What are your symptoms?

Best Regards, 
Strahil Nikolov 






В сряда, 21 октомври 2020 г., 16:44:50 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello, 

Can anyone help me with how I can improve the performance of glusterfs to work
with oVirt?

Thanks 

-- 
 
Jose Ferradeira 
http://www.logicworks.pt 

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
So, in oVirt, if I want a single storage without high availability, what is the
best solution?
Can I use gluster replica 1 for a single storage?

By the way, cluster.min-free-disk is set to 1%, not the 10% default value. And I
still cannot remove the disk.
I think I will destroy the volume and configure it again.

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 17:11:14 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

Nope,

officially oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1
(which is actually a single-brick distributed) volumes.
If you have issues related to the Gluster volume - like this case - the
community support will be "best effort".

Best Regards, 
Strahil Nikolov 






В вторник, 27 октомври 2020 г., 18:24:07 Гринуич+2,  
написа: 





But there are also distributed volume types, right?
Replicated is for when you want high availability, which is not the case here.

Thanks 

José 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 15:49:22 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

You have exactly 90% used space. 
The Gluster's default protection value is exactly 10%: 


Option: cluster.min-free-disk 
Default Value: 10% 
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files 

I would recommend temporarily dropping that value till you do the cleanup.

gluster volume set data cluster.min-free-disk 5% 

To restore the default values of any option: 
gluster volume reset data cluster.min-free-disk 


P.S.: Keep in mind that if you have sparse disks for your VMs , they will be 
unpaused and start eating the last space you got left ... so be quick :) 

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :) 

Best Regards, 
Strahil Nikolov 








В вторник, 27 октомври 2020 г., 11:02:10 Гринуич+2, supo...@logicworks.pt 
 написа: 





This is a simple installation with one storage and 3 hosts. One volume with 2
bricks; the second brick was added to get more space to try to remove the disk, but
without success.

]# df -h 
Filesystem Size Used Avail Use% Mounted on 
/dev/md127 50G 2.5G 48G 5% / 
devtmpfs 7.7G 0 7.7G 0% /dev 
tmpfs 7.7G 16K 7.7G 1% /dev/shm 
tmpfs 7.7G 98M 7.6G 2% /run 
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup 
/dev/md126 1016M 194M 822M 20% /boot 
/dev/md125 411G 407G 4.1G 100% /home 
gfs2.domain.com:/data 461G 414G 48G 90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data 
tmpfs 1.6G 0 1.6G 0% /run/user/0 


# gluster volume status data 
Status of volume: data 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick gfs2.domain.com:/home/brick1 49154 0 Y 8908 
Brick gfs2.domain.com:/brickx 49155 0 Y 8931 

Task Status of Volume data 
-- 
There are no active volume tasks 

# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gfs2.domain.com:/home/brick1 
Brick2: gfs2.domain.com:/brickx 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 1:00:08 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

So what is the output of "df" against:
- all bricks in the volume (all nodes) 
- on the mount point in /rhev/mnt/ 

Usually adding a new brick (per host) in replica 3 volume should provide you 
more space. 
Also what is the status of the volume: 

gluster volume status  
gluster volume info  


Best Regards, 
Strahil Nikolov 






В четвъртък, 15 октомври 2020 г., 16:55:27 Гринуич+3,  
написа: 





Hello, 

I just added a second brick to the volume. Now I have 10% free, but I still cannot
delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) 

Any idea? 
Thanks 

José 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

Any option to extend the Gluster Volume ? 

Other approaches are quite destructive. I guess you can obtain the VM's xml
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.

[ovirt-users] Re: oVirt 4.4 Upgrade issue?

2020-10-27 Thread lee . hanel
Asaf,

That worked. Now on to new problems. :)


Thanks,
Lee


[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
Nope,

officially oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1
(which is actually a single-brick distributed) volumes.
If you have issues related to the Gluster volume - like this case - the
community support will be "best effort".

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 18:24:07 Гринуич+2,  
написа: 





But there are also distributed volume types, right?
Replicated is for when you want high availability, which is not the case here.

Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 15:49:22
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

You have exactly 90% used space.
The Gluster's default protection value is exactly 10%:


Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files

I would recommend temporarily dropping that value till you do the cleanup.

gluster volume set data cluster.min-free-disk 5%

To restore the default values of any option:
gluster volume reset data cluster.min-free-disk


P.S.: Keep in mind that if you have sparse disks for your VMs , they will be 
unpaused and start eating the last space you got left ... so be quick :)

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :)

Best Regards,
Strahil Nikolov








В вторник, 27 октомври 2020 г., 11:02:10 Гринуич+2, supo...@logicworks.pt 
 написа: 





This is a simple installation with one storage and 3 hosts. One volume with 2
bricks; the second brick was added to get more space to try to remove the disk, but
without success.

]# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/md127   50G  2.5G   48G   5% /
devtmpfs    7.7G 0  7.7G   0% /dev
tmpfs   7.7G   16K  7.7G   1% /dev/shm
tmpfs   7.7G   98M  7.6G   2% /run
tmpfs   7.7G 0  7.7G   0% /sys/fs/cgroup
/dev/md126 1016M  194M  822M  20% /boot
/dev/md125  411G  407G  4.1G 100% /home
gfs2.domain.com:/data 461G  414G   48G  90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data
tmpfs   1.6G 0  1.6G   0% /run/user/0


# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs2.domain.com:/home/brick1    49154 0  Y   8908
Brick gfs2.domain.com:/brickx 49155 0  Y   8931

Task Status of Volume data
--
There are no active volume tasks

# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfs2.domain.com:/home/brick1
Brick2: gfs2.domain.com:/brickx
Options Reconfigured:
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 1:00:08
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

So what is the output of "df" against:
- all bricks in the volume (all nodes)
- on the mount point in /rhev/mnt/

Usually adding a new brick (per host) in replica 3 volume should provide you 
more space.
Also what is the status of the volume:

gluster volume status 
gluster volume info  


Best Regards,
Strahil Nikolov






В четвъртък, 15 октомври 2020 г., 16:55:27 Гринуич+3,  
написа: 





Hello,

I just added a second brick to the volume. Now I have 10% free, but I still cannot
delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',)

Any idea?
Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

Any option to extend the Gluster Volume ?

Other approaches are quite destructive. I guess you can obtain the VM's xml
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml  > /some/path/.xml

Once you get the VM running on a pure-KVM host, you can go to oVirt and try to
wipe the VM from the UI.

[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread Strahil Nikolov via Users
Why don't you use the devices of the 2 bricks in a single striped LV or raid0 
md device ?

Distributed volumes spread the files among the bricks and the performance is 
limited to the brick's device speed.

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 18:09:47 Гринуич+2,  
написа: 





Yes, the VM's disks are on the distributed volume

They were created like this: 
gluster volume create data transport tcp gfs2.domain.com:/home/brick1



De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 15:53:57
Assunto: Re: [ovirt-users] Improve glusterfs performance

Are the VM's disks located on the distributed Volume ?

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 17:21:49 Гринуич+2, supo...@logicworks.pt 
 написа: 





The engine version is 4.3.9.4-1.el7
I have 2 simple glusterfs, a volume with one brick gfs1 with an old version of 
gluster (3.7.6) and a volume with one brick gfs2 with version 6.8
2 hosts, node3 is up to date, node 2 is not up to date:
RHEL - 7 – 7.1908.0.el7.centos
CentOS Linux 7 (Core)
3.10.0 – 1062.18.1.el7.x86_64
2.12.0 – 33.1.el7_7.4
libvirt-4.5.0-23.el7_7.6
vdsm-4.30.43-1.el7
0.14.0 – 7.el7
glusterfs-6.8-1.el7
librbd1-10.2.5-4.el7
openvswitch-2.11.0-4.el7

gfs1 has 3 gigabit NICs in mode 802.3ad
gfs2 has 4 gigabit NICs in mode 802.3ad

A VM performs better on node3/gfs1:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s

than on node2/gfs2:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s

For a Windows machine it is the same. It helps if you have the Windows drivers up to
date.


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12
Assunto: Re: [ovirt-users] Improve glusterfs performance

It seems that your e-mail went to the spam.

I would start by isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM on that storage domain and verify performance ?
- Can you create/migrate same OS-type VM and check performance?
- What about running a VM with different version of Windows or even better -> 
Linux ?


Also I would check the systems from bottom up.
Check the following:
- Are you using Latest Firmware/OS updates for the Hosts and Engine ?
- What is your cmdline :
cat /proc/cmdline
- Tuned profile
- Are you using a HW controller for the Gluster bricks ? Health of controller 
and disks ?
- Is the FS aligned properly to the HW controller (stripe unit and stripe width 
)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID

- FS Fragmentation ? Is your FS full ?

- Gluster volume status ? Client count ? (for example: gluster volume status 
all client-list)
- Gluster version, cluster.op-version and cluster.max-op-version ?

- If your network is slower than the bricks' backend (disks faster than 
network) , you can set cluster.choose-local to 'yes'

- Any errors and warnings in the gluster logs ?


Best Regards,
Strahil Nikolov





В четвъртък, 22 октомври 2020 г., 13:59:04 Гринуич+3,  
написа: 





Hello,

For example, a Windows machine runs too slow; usually the disk is always at 100%.
Are these the 'group virt' settings?:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
features.shard=on
user.cifs=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on



De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14
Assunto: Re: [ovirt-users] Improve glusterfs performance

Usually, oVirt uses the 'virt' group of settings.

What are your symptoms?

Best Regards,
Strahil Nikolov






В сряда, 21 октомври 2020 г., 16:44:50 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello,

Can anyone help me with how I can improve the performance of glusterfs to work
with oVirt?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt

[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
But there are also distributed volume types, right?
Replicated is for when you want high availability, which is not the case here.

Thanks 

José 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 15:49:22 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

You have exactly 90% used space. 
The Gluster's default protection value is exactly 10%: 


Option: cluster.min-free-disk 
Default Value: 10% 
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files 

I would recommend temporarily dropping that value till you do the cleanup.

gluster volume set data cluster.min-free-disk 5% 

To restore the default values of any option: 
gluster volume reset data cluster.min-free-disk 


P.S.: Keep in mind that if you have sparse disks for your VMs , they will be 
unpaused and start eating the last space you got left ... so be quick :) 

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :) 

Best Regards, 
Strahil Nikolov 








В вторник, 27 октомври 2020 г., 11:02:10 Гринуич+2, supo...@logicworks.pt 
 написа: 





This is a simple installation with one storage and 3 hosts. One volume with 2
bricks; the second brick was added to get more space to try to remove the disk, but
without success.

]# df -h 
Filesystem Size Used Avail Use% Mounted on 
/dev/md127 50G 2.5G 48G 5% / 
devtmpfs 7.7G 0 7.7G 0% /dev 
tmpfs 7.7G 16K 7.7G 1% /dev/shm 
tmpfs 7.7G 98M 7.6G 2% /run 
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup 
/dev/md126 1016M 194M 822M 20% /boot 
/dev/md125 411G 407G 4.1G 100% /home 
gfs2.domain.com:/data 461G 414G 48G 90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data 
tmpfs 1.6G 0 1.6G 0% /run/user/0 


# gluster volume status data 
Status of volume: data 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick gfs2.domain.com:/home/brick1 49154 0 Y 8908 
Brick gfs2.domain.com:/brickx 49155 0 Y 8931 

Task Status of Volume data 
-- 
There are no active volume tasks 

# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gfs2.domain.com:/home/brick1 
Brick2: gfs2.domain.com:/brickx 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 1:00:08 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

So what is the output of "df" against:
- all bricks in the volume (all nodes) 
- on the mount point in /rhev/mnt/ 

Usually adding a new brick (per host) in replica 3 volume should provide you 
more space. 
Also what is the status of the volume: 

gluster volume status  
gluster volume info  


Best Regards, 
Strahil Nikolov 






В четвъртък, 15 октомври 2020 г., 16:55:27 Гринуич+3,  
написа: 





Hello, 

I just added a second brick to the volume. Now I have 10% free, but I still cannot
delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) 

Any idea? 
Thanks 

José 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

Any option to extend the Gluster Volume ? 

Other approaches are quite destructive. I guess you can obtain the VM's xml
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml  > /some/path/.xml 

Once you get the VM running on a pure-KVM host, you can go to oVirt and try to
wipe the VM from the UI.


Usually that 10% reserve is just in case something like this happens, but
Gluster doesn't check it every second (or the overhead would be crazy).

Maybe you can extend the Gluster volume temporarily, till you manage to move the
VM away to a bigger storage. Then you can reduce the volume back to its original
size.
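
A hedged sketch of that temporary grow-then-shrink, assuming a spare disk mounted at
a placeholder path on the same host:

gluster volume add-brick data gfs2.domain.com:/bricks/brick_tmp
# ... free up space, then migrate data off the temporary brick and drop it
gluster volume remove-brick data gfs2.domain.com:/bricks/brick_tmp start
gluster volume remove-brick data gfs2.domain.com:/bricks/brick_tmp status
gluster volume remove-brick data gfs2.domain.com:/bricks/brick_tmp commit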

Best Regards, 
Strahil Nikolov 



В вторник, 22 септември 2020 г., 14:53:53 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello Strahil, 

I just set cluster.min-free-disk to 1%: 
# gluster volume info data 

Volume Name: data 
Type: 

[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread suporte
Yes, the VM's disks are on the distributed volume 

They were created like this: 
gluster volume create data transport tcp gfs2.domain.com:/home/brick1 



De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 15:53:57 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Are the VM's disks located on the distributed Volume ? 

Best Regards, 
Strahil Nikolov 






On Tuesday, 27 October 2020 at 17:21:49 GMT+2, supo...@logicworks.pt 
wrote: 





The engine version is 4.3.9.4-1.el7 
I have 2 simple glusterfs setups: a volume with one brick on gfs1, with an old 
version of gluster (3.7.6), and a volume with one brick on gfs2, with version 6.8. 
2 hosts; node3 is up to date, node2 is not up to date: 
RHEL - 7 – 7.1908.0.el7.centos 
CentOS Linux 7 (Core) 
3.10.0 – 1062.18.1.el7.x86_64 
2.12.0 – 33.1.el7_7.4 
libvirt-4.5.0-23.el7_7.6 
vdsm-4.30.43-1.el7 
0.14.0 – 7.el7 
glusterfs-6.8-1.el7 
librbd1-10.2.5-4.el7 
openvswitch-2.11.0-4.el7 

gfs1 has 3 gigabit NICs in mode 802.3ad 
gfs2 has 4 gigabit NICs in mode 802.3ad 

A VM performs better on node3/gfs1: 

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync 
1+0 records in 
1+0 records out 
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s 

on node2/gfs2: 

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync 
1+0 records in 
1+0 records out 
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s 

For Windows machines it is the same. It helps if the Windows drivers are up to 
date. 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

It seems that your e-mail went to spam. 

I would start by isolating the issue: 
1. Is this a VM-specific issue or a wider issue? 
- Can you move another VM to that storage domain and verify performance? 
- Can you create/migrate a VM with the same OS type and check performance? 
- What about running a VM with a different version of Windows, or even better -> 
Linux? 


Also I would check the systems from the bottom up. 
Check the following: 
- Are you using the latest firmware/OS updates for the hosts and engine? 
- What is your cmdline: 
cat /proc/cmdline 
- Tuned profile 
- Are you using a HW controller for the Gluster bricks? Health of the controller 
and disks? 
- Is the FS aligned properly to the HW controller (stripe unit and stripe 
width)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID
 

- FS fragmentation? Is your FS full? 

- Gluster volume status? Client count? (for example: gluster volume status 
all client-list) 
- Gluster version, cluster.op-version and cluster.max-op-version? 

- If your network is slower than the bricks' backend (disks faster than 
network), you can set cluster.choose-local to 'yes' (see the sketch after 
this list) 

- Any errors and warnings in the gluster logs? 
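
For the cluster.choose-local suggestion above, a minimal sketch (volume name 
taken from this thread; it can be reset the same way as any other option): 

# gluster volume set data cluster.choose-local on 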


Best Regards, 
Strahil Nikolov 





On Thursday, 22 October 2020 at 13:59:04 GMT+3,  wrote: 





Hello, 

For example, a Windows machine runs too slow; the disk is usually always at 100%. 
Is this the 'virt' group of settings? (a sketch for applying the whole group 
follows the list): 
performance.quick-read=off 
performance.read-ahead=off 
performance.io-cache=off 
performance.low-prio-threads=32 
network.remote-dio=enable 
features.shard=on 
user.cifs=off 
client.event-threads=4 
server.event-threads=4 
performance.client-io-threads=on 
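
Those options correspond to gluster's bundled 'virt' group file, which can be 
applied in one shot instead of setting each option by hand (a sketch; volume 
name assumed): 

# gluster volume set data group virt 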


 
De: "Strahil Nikolov"  
Para: "users" , supo...@logicworks.pt 
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Usually, oVirt uses the 'virt' group of settings. 

What are your symptoms? 

Best Regards, 
Strahil Nikolov 






On Wednesday, 21 October 2020 at 16:44:50 GMT+3, supo...@logicworks.pt 
wrote: 





Hello, 

Can anyone help me with how I can improve the performance of glusterfs to work 
with oVirt? 

Thanks 

-- 
 
Jose Ferradeira 
http://www.logicworks.pt 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPU4QFTMUMMFGUA4PYG6624KLSHVLNX4/
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ULVQAVXUSFDBB26OFM2J4A5AYGYNQU45/


[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread Strahil Nikolov via Users
Are the VM's disks located on the distributed volume?

Best Regards,
Strahil Nikolov






On Tuesday, 27 October 2020 at 17:21:49 GMT+2, supo...@logicworks.pt 
wrote: 





The engine version is 4.3.9.4-1.el7
I have 2 simple glusterfs setups: a volume with one brick on gfs1, with an old 
version of gluster (3.7.6), and a volume with one brick on gfs2, with version 6.8.
2 hosts; node3 is up to date, node2 is not up to date:
RHEL - 7 – 7.1908.0.el7.centos
CentOS Linux 7 (Core)
3.10.0 – 1062.18.1.el7.x86_64
2.12.0 – 33.1.el7_7.4
libvirt-4.5.0-23.el7_7.6
vdsm-4.30.43-1.el7
0.14.0 – 7.el7
glusterfs-6.8-1.el7
librbd1-10.2.5-4.el7
openvswitch-2.11.0-4.el7

gfs1 has 3 gigabit NICs in mode 802.3ad
gfs2 has 4 gigabit NICs in mode 802.3ad

A VM performs better on node3/gfs1:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s

on node2/gfs2:

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s

For Windows machines it is the same. It helps if the Windows drivers are up to date.


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12
Assunto: Re: [ovirt-users] Improve glusterfs performance

It seems that your e-mail went to spam.

I would start by isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM to that storage domain and verify performance?
- Can you create/migrate a VM with the same OS type and check performance?
- What about running a VM with a different version of Windows, or even better -> 
Linux?


Also I would check the systems from the bottom up.
Check the following:
- Are you using the latest firmware/OS updates for the hosts and engine?
- What is your cmdline:
cat /proc/cmdline
- Tuned profile
- Are you using a HW controller for the Gluster bricks? Health of the controller 
and disks?
- Is the FS aligned properly to the HW controller (stripe unit and stripe 
width)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID

- FS fragmentation? Is your FS full?

- Gluster volume status? Client count? (for example: gluster volume status 
all client-list)
- Gluster version, cluster.op-version and cluster.max-op-version?

- If your network is slower than the bricks' backend (disks faster than 
network), you can set cluster.choose-local to 'yes'

- Any errors and warnings in the gluster logs?


Best Regards,
Strahil Nikolov





On Thursday, 22 October 2020 at 13:59:04 GMT+3,  wrote: 





Hello,

For example, a Windows machine runs too slow; the disk is usually always at 100%.
Is this the 'virt' group of settings?:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
features.shard=on
user.cifs=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on



De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14
Assunto: Re: [ovirt-users] Improve glusterfs performance

Usually, oVirt uses the 'virt' group of settings.

What are your symptoms?

Best Regards,
Strahil Nikolov






On Wednesday, 21 October 2020 at 16:44:50 GMT+3, supo...@logicworks.pt 
wrote: 





Hello,

Can anyone help me with how I can improve the performance of glusterfs to work 
with oVirt?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPU4QFTMUMMFGUA4PYG6624KLSHVLNX4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MP4JMRX3RUCXW45GVKIBMGAOLON4NKL2/


[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
You have exactly 90% used space.
Gluster's default protection value is exactly 10%:


Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files

I would recommend temporarily dropping that value till you do the cleanup.

gluster volume set data cluster.min-free-disk 5%

To restore the default values of any option:
gluster volume reset data cluster.min-free-disk
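
To read back the value currently in effect, a quick sketch using the same 
volume name:

# gluster volume get data cluster.min-free-disk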


P.S.: Keep in mind that if you have sparse disks for your VMs, they will be 
unpaused and start eating the last space you have left ... so be quick :)

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :)

Best Regards,
Strahil Nikolov








On Tuesday, 27 October 2020 at 11:02:10 GMT+2, supo...@logicworks.pt 
wrote: 





This is a simple installation with one storage, 3 hosts. One volume with 2 
bricks; the second brick was added to get more space so I could try to remove 
the disk, but without success.

]# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/md127   50G  2.5G   48G   5% /
devtmpfs    7.7G 0  7.7G   0% /dev
tmpfs   7.7G   16K  7.7G   1% /dev/shm
tmpfs   7.7G   98M  7.6G   2% /run
tmpfs   7.7G 0  7.7G   0% /sys/fs/cgroup
/dev/md126 1016M  194M  822M  20% /boot
/dev/md125  411G  407G  4.1G 100% /home
gfs2.domain.com:/data 461G  414G   48G  90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data
tmpfs   1.6G 0  1.6G   0% /run/user/0


# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs2.domain.com:/home/brick1    49154 0  Y   8908
Brick gfs2.domain.com:/brickx 49155 0  Y   8931

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfs2.domain.com:/home/brick1
Brick2: gfs2.domain.com:/brickx
Options Reconfigured:
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 1:00:08
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

So what is the output of "df" against:
- all bricks in the volume (all nodes)
- the mount point in /rhev/mnt/

Usually adding a new brick (per host) in a replica 3 volume should provide you 
more space.
Also, what is the status of the volume:

gluster volume status <volname>
gluster volume info <volname>


Best Regards,
Strahil Nikolov






On Thursday, 15 October 2020 at 16:55:27 GMT+3,  wrote: 





Hello,

I just added a second brick to the volume. Now I have 10% free, but I still 
cannot delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',)

Any idea?
Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

Any option to extend the Gluster volume?

Other approaches are quite destructive. I guess you can obtain the VM's xml 
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml <VM_NAME> > /some/path/<VM_NAME>.xml

Once you have got the VM running on a pure-KVM host, you can go to oVirt and 
try to wipe the VM from the UI. 


Usually that 10% reserve is just in case something like this happens, 
but Gluster doesn't check it every second (or the overhead would be crazy).

Maybe you can extend the Gluster volume temporarily, till you manage to move 
the VM away to a bigger storage. Then you can reduce the volume back to its 
original size.

Best Regards,
Strahil Nikolov



On Tuesday, 22 September 2020 at 14:53:53 GMT+3, supo...@logicworks.pt 
wrote: 





Hello Strahil,

I just set cluster.min-free-disk to 1%:
# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started

[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread Strahil Nikolov via Users
Hello Gobinda,

I know that gluster can easily convert distributed volume to replica volume, so 
why it is not possible to first convert to replica and then add the nodes as 
HCI ?

Best Regards,
Strahil Nikolov






В вторник, 27 октомври 2020 г., 08:20:56 Гринуич+2, Gobinda Das 
 написа: 





Hello Marcel,
 For a note, you can't expand your single gluster node cluster to 3 nodes.Only 
you can add compute nodes.
If you want to add compute nodes then you do not need any glusterfs packages to 
be installed. Only ovirt packages are enough to add host as a compute node.

On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal  wrote:
> Hey Marcel!
> 
> You have to install the required glusterfs packages and then deploy the 
> gluster setup on the 2 new hosts. After creating the required LVs, VGs, 
> thinpools, mount points and bricks, you'll have to expand the gluster-cluster 
> from the current host using add-brick functionality from gluster. After this 
> you can add the 2 new hosts to your existing ovirt-engine.
> 
> On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse  wrote:
>> Hi,
>> 
>> I got a problem with my Ovirt installation. Normally we deliver Ovirt as
>> single node installation and we told our service guys if the internal
>> client will have more redundancy we need two more server and add this to
>> the single node installation. i thought that no one would order two new
>> servers.
>> 
>> Now I have the problem to get the system running.
>> 
>> First item is, that this environment has no internet access. So I can't
>> install software by update with yum.
>> The Ovirt installation is running on Ovirt node 4.3.9 boot USB Stick. All
>> three servers have the same software installed.
>> On the single node I have installed the hosted Engine package 1,1 GB to
>> deploy the self-hosted engine without internet. That works.
>> 
>> Gluster, Ovirt, Self-Hosted engine are running on the server 01.
>> 
>> What should I do first?
>> 
>> Deploy the Glusterfs first and then add the two new hosts to the single
>> node installation?
>> Or should I deploy a new Ovirt System to the two new hosts and add later
>> the cleaned host to the new Ovirt System?
>> 
>> I have not found any items in this mailing list which gives me an idea
>> what I should do now.
>> 
>> 
>> Br
>> Marcel
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNV5WNKTGUSU5DB2CFR67FROXMMDCPCD/
>> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VXJVTPD24CQWTKS3QGINEKGT3NXVCP5/
> 
> 


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DVHGMMOK3HYHA6DZLAOGSPDBIMRO5FWT/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B6INNLTZH7FMC5M4HLKN7YKZCZ3Q77E5/


[ovirt-users] Re: Improve glusterfs performance

2020-10-27 Thread suporte
The engine version is 4.3.9.4-1.el7 
I have 2 simple glusterfs setups: a volume with one brick on gfs1, with an old 
version of gluster (3.7.6), and a volume with one brick on gfs2, with version 6.8. 
2 hosts; node3 is up to date, node2 is not up to date: 
RHEL - 7 – 7.1908.0.el7.centos 
CentOS Linux 7 (Core) 
3.10.0 – 1062.18.1.el7.x86_64 
2.12.0 – 33.1.el7_7.4 
libvirt-4.5.0-23.el7_7.6 
vdsm-4.30.43-1.el7 
0.14.0 – 7.el7 
glusterfs-6.8-1.el7 
librbd1-10.2.5-4.el7 
openvswitch-2.11.0-4.el7 

gfs1 has 3 gigabit NICs in mode 802.3ad 
gfs2 has 4 gigabit NICs in mode 802.3ad 

A VM performs better on node3/gfs1: 

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync 
1+0 records in 
1+0 records out 
1073741824 bytes (1.1 GB) copied, 12.3365 s, 87.0 MB/s 

on node2/gfs2: 

# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync 
1+0 records in 
1+0 records out 
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 51.0159 s, 21.0 MB/s 


For Windows machines it is the same. It helps if the Windows drivers are up to 
date. 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Domingo, 25 De Outubro de 2020 19:58:12 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

It seems that your e-mail went to spam. 

I would start by isolating the issue: 
1. Is this a VM-specific issue or a wider issue? 
- Can you move another VM to that storage domain and verify performance? 
- Can you create/migrate a VM with the same OS type and check performance? 
- What about running a VM with a different version of Windows, or even better -> 
Linux? 


Also I would check the systems from the bottom up. 
Check the following: 
- Are you using the latest firmware/OS updates for the hosts and engine? 
- What is your cmdline: 
cat /proc/cmdline 
- Tuned profile 
- Are you using a HW controller for the Gluster bricks? Health of the controller 
and disks? 
- Is the FS aligned properly to the HW controller (stripe unit and stripe 
width)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID
 

- FS fragmentation? Is your FS full? 

- Gluster volume status? Client count? (for example: gluster volume status 
all client-list) 
- Gluster version, cluster.op-version and cluster.max-op-version? 

- If your network is slower than the bricks' backend (disks faster than 
network), you can set cluster.choose-local to 'yes' 

- Any errors and warnings in the gluster logs? 


Best Regards, 
Strahil Nikolov 





On Thursday, 22 October 2020 at 13:59:04 GMT+3,  wrote: 





Hello, 

For example, a Windows machine runs too slow; the disk is usually always at 100%. 
Is this the 'virt' group of settings?: 
performance.quick-read=off 
performance.read-ahead=off 
performance.io-cache=off 
performance.low-prio-threads=32 
network.remote-dio=enable 
features.shard=on 
user.cifs=off 
client.event-threads=4 
server.event-threads=4 
performance.client-io-threads=on 


 
De: "Strahil Nikolov"  
Para: "users" , supo...@logicworks.pt 
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14 
Assunto: Re: [ovirt-users] Improve glusterfs performance 

Usually, oVirt uses the 'virt' group of settings. 

What are your symptoms? 

Best Regards, 
Strahil Nikolov 






On Wednesday, 21 October 2020 at 16:44:50 GMT+3, supo...@logicworks.pt 
wrote: 





Hello, 

Can anyone help me with how I can improve the performance of glusterfs to work 
with oVirt? 

Thanks 

-- 
 
Jose Ferradeira 
http://www.logicworks.pt 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/privacy-policy.html 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPU4QFTMUMMFGUA4PYG6624KLSHVLNX4/
 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F7YGTFMKHN75WHKU5GBSP5RZURW4YSBF/


[ovirt-users] Migrated disk from NFS to iSCSI - Unable to Boot

2020-10-27 Thread Wesley Stewart
This is a new one.

I migrated from an NFS share to an iSCSI share on a small single node oVirt
system (Currently running 4.3.10).

After migrating a disk (Virtual Machine -> Disk -> Move), I was unable to
boot to it.  The console tells me "No bootable device".  This is a CentOS 7
guest.

I booted into a CentOS 7 ISO and tried a few things...

fdisk -l shows me a 40GB disk (/dev/sda).
fsck -f tells me "bad magic number in superblock"

lvdisplay and pvdisplay show nothing.  Even if I can't boot to the drive I
would love to recover a couple of documents from here if possible.  Does
anyone have any suggestions?  I am running out of ideas.
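
A first diagnostic pass from the rescue environment might look like this (a
sketch; /dev/sda as reported above):

# fdisk -l /dev/sda       # confirm the disk and its partition table
# partprobe /dev/sda      # re-read the partition table
# pvscan --cache          # rescan for LVM physical volumes
# vgscan && vgchange -ay  # activate any volume groups that turn up
# lvs                     # list logical volumes, if any were found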
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/53HYTD32FP62FVALY6E5NNEDYG2XYOS5/


[ovirt-users] vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy

2020-10-27 Thread lifuqi...@sunyainfo.com

Hi everyone:
I met a problem, as follows:

Description of problem:

When I exec the "reboot" or "shutdown -h 0" cmd on a vdsm server, the vdsm server 
takes more than 30 minutes to reboot or shut down. The screen shows '[FAILED] Failed 
unmounting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'. 

Other messages that may be useful:
These messages were shown on the screen.
[] watchdog: watchdog0: watchdog did not stop! []systemd-shutdown[5594]: Failed 
to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or 
resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
[]systemd-shutdown[5595]: Failed to remount '/' read-only: Device or resource 
busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
dracut Warning: Killing all remaining processes
dracut Warning: Killing all remaining processes

Version-Release number of selected component (if applicable):
Software Version: 4.2.8.2-1.el7
OS: CentOS Linux release 7.5.1804 (Core)

How reproducible:
100%

Steps to Reproduce:
1. My test environment is one oVirt engine (172.17.81.17) with 4 vdsm servers; 
exec the "reboot" cmd on one of the vdsm servers (172.17.99.105), and the server 
takes more than 30 minutes to reboot.
ovirt-engine: 172.17.81.17/16
vdsm: 172.17.99.105/16
nfs server: 172.17.81.14/16

Actual results:
As above, the server takes more than 30 minutes to reboot.

Expected results:
The server reboots in a short time.
What I have done:
I captured packets on the nfs server while vdsm was rebooting, and found the vdsm 
server keeps sending nfs packets to the nfs server in a loop. There are some log 
files from when I rebooted vdsm 172.17.99.105 at 2020-10-26 22:12:34. Some 
conclusions:
1. vdsm.log said: 2020-10-26 22:12:34,461+0800 ERROR (check/loop) 
[storage.Monitor] Error checking path 
/rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/metadata
2. sanlock.log said: 2020-10-26 22:13:05 1454 [3301]: s1 delta_renew read 
timeout 10 sec offset 0 
/rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/ids
3. There is no other message relevant to this issue. The logs are in the 
attachment. I would really appreciate it if anyone can help me. Thank you.

Yours sincerely,
Mark Lee
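
A common workaround is to release the NFS mount by hand before rebooting (a 
sketch, assuming the mount point from the error above; putting the host into 
Maintenance from the engine first avoids vdsm re-mounting it):

# umount -l /rhev/data-center/mnt/172.18.81.14:_home_nfs_data
# reboot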
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42ZSGQSQ55FAIW6D7TMQHIDBBBO4EEK6/


[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread marcel d'heureuse
Morning,

I am starting to generate the physical bricks and to create an iso brick which is 
also available on the single node. 

The iso brick I create as redundant, and I would like to move all data from the 
single-install iso to the redundant iso. Then I delete the single node iso, make 
a new one, and add it to the redundant iso. 

I think this will work, but I am not sure if the engine volume will accept 
this.

br
marcel

On 27 October 2020 07:18:47 CET, Gobinda Das wrote:
>Hello Marcel,
> For a note, you can't expand your single gluster node cluster to 3
>nodes.Only you can add compute nodes.
>If you want to add compute nodes then you do not need any glusterfs
>packages to be installed. Only ovirt packages are enough to add host as
>a
>compute node.
>
>On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal 
>wrote:
>
>> Hey Marcel!
>>
>> You have to install the required glusterfs packages and then deploy
>the
>> gluster setup on the 2 new hosts. After creating the required LVs,
>VGs,
>> thinpools, mount points and bricks, you'll have to expand the
>> gluster-cluster from the current host using add-brick functionality
>from
>> gluster. After this you can add the 2 new hosts to your existing
>> ovirt-engine.
>>
>> On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse
>
>> wrote:
>>
>>> Hi,
>>>
>>> I got a problem with my Ovirt installation. Normally we deliver
>Ovirt as
>>> single node installation and we told our service guys if the
>internal
>>> client will have more redundancy we need two more server and add
>this to
>>> the single node installation. i thought that no one would order two
>new
>>> servers.
>>>
>>> Now I have the problem to get the system running.
>>>
>>> First item is, that this environment has no internet access. So I
>can't
>>> install software by update with yum.
>>> The Ovirt installation is running on Ovirt node 4.3.9 boot USB
>Stick. All
>>> three servers have the same software installed.
>>> On the single node I have installed the hosted Engine package 1,1 GB
>to
>>> deploy the self-hosted engine without internet. That works.
>>>
>>> Gluster, Ovirt, Self-Hosted engine are running on the server 01.
>>>
>>> What should I do first?
>>>
>>> Deploy the Glusterfs first and then add the two new hosts to the
>single
>>> node installation?
>>> Or should I deploy a new Ovirt System to the two new hosts and add
>later
>>> the cleaned host to the new Ovirt System?
>>>
>>> I have not found any items in this mailing list which gives me an
>idea
>>> what I should do now.
>>>
>>>
>>> Br
>>> Marcel
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>>
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNV5WNKTGUSU5DB2CFR67FROXMMDCPCD/
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>>
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VXJVTPD24CQWTKS3QGINEKGT3NXVCP5/
>>
>
>
>-- 
>
>
>Thanks,
>Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZCCSWOKJVNA6JSQMEDRFC4EGFCAKQSB/


[ovirt-users] update host to 4.4: manual ovn config necessary?

2020-10-27 Thread Gianluca Cecchi
Hello,
I have updated an external engine from 4.3 to 4.4 and the OVN configuration
seems to have been retained:

[root@ovmgr1 ovirt-engine]# ovn-nbctl show
switch fc2fc4e8-ff71-4ec3-ba03-536a870cd483
(ovirt-ovn192-1e252228-ade7-47c8-acda-5209be358fcf)
switch 101d686d-7930-4176-b41a-b306d7c30a1a
(ovirt-ovn17217-4bb1d1a7-020d-4843-9ac7-dc4204b528e5)
port c1ec60a4-b4f3-4cb5-8985-43c086156e83
addresses: ["00:1a:4a:19:01:89 dynamic"]
port 174b69f8-00ed-4e25-96fc-7db11ea8a8b9
addresses: ["00:1a:4a:19:01:59 dynamic"]
port ccbd6188-78eb-437b-9df9-9929e272974b
addresses: ["00:1a:4a:19:01:88 dynamic"]
port 7e96ca70-c9e3-4efe-9ac5-e56c18476437
addresses: ["00:1a:4a:19:01:83 dynamic"]
port d2c2d9f1-8fc3-4f17-9ada-76fe3a168e65
addresses: ["00:1a:4a:19:01:5e dynamic"]
port 4d13d63e-5ff3-41c1-9b6b-feac343b514b
addresses: ["00:1a:4a:19:01:60 dynamic"]
port 66359e79-56c4-47e0-8196-2241706329f6
addresses: ["00:1a:4a:19:01:68 dynamic"]
switch 87012fa6-ffaa-4fb0-bd91-b3eb7c0a2fc1
(ovirt-ovn193-d43a7928-0dc8-49d3-8755-5d766dff821a)
port 2ae7391b-4297-4247-a315-99312f6392e6
addresses: ["00:1a:4a:19:01:51 dynamic"]
switch 9e77163a-c4e4-4abf-a554-0388e6b5e4ce
(ovirt-ovn172-4ac7ba24-aad5-432d-b1d2-672eaeea7d63)
[root@ovmgr1 ovirt-engine]#

Then I updated one of the 3 Linux hosts (not node ng): removed it from the
web admin GUI, installed CentOS 8.2 OS from scratch, configured repos and
then added the new host (with the same name) in the engine, and I was able to
connect to storage (iSCSI) and start VMs in general on the host.
Coming to the OVN part, it seems it has not been configured on the upgraded
host. Is that expected?

Eg on engine I only see chassis for the 2 hosts still in 4.3:

[root@ovmgr1 ovirt-engine]# ovn-sbctl show
Chassis "b8872ab5-4606-4a79-b77d-9d956a18d349"
hostname: "ov301.mydomain"
Encap geneve
ip: "10.4.192.34"
options: {csum="true"}
Port_Binding "174b69f8-00ed-4e25-96fc-7db11ea8a8b9"
Port_Binding "66359e79-56c4-47e0-8196-2241706329f6"
Chassis "ddecf0da-4708-4f93-958b-6af365a5eeca"
hostname: "ov300.mydomain"
Encap geneve
ip: "10.4.192.33"
options: {csum="true"}
Port_Binding "ccbd6188-78eb-437b-9df9-9929e272974b"
[root@ovmgr1 ovirt-engine]#

What to do to add the upgraded 4.4 host? Can they live together for the OVN
part?
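
For reference, re-registering a host's OVN chassis can be attempted manually 
with vdsm-tool (a sketch; the OVN central is normally the engine's address 
and the tunnel IP is the host's own local IP, both placeholders here):

# vdsm-tool ovn-config <engine-ip> <host-tunnel-ip>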

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HWXNXE4BXWGKDU7SJ325ZEYJRYXSPWML/


[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-27 Thread info
The xyz.co.za domain is pointing (a redirect) to the server IP; maybe that is the 
problem.

 

journalctl -u cockpit

-- Logs begin at Tue 2020-10-27 11:59:03 SAST, end at Tue 2020-10-27 12:06:33 
SAST. --

Oct 27 12:01:03 node01.xyz.co.za systemd[1]: Starting Cockpit Web Service...

Oct 27 12:01:03 node01.xyz.co.za systemd[1]: Started Cockpit Web Service.

Oct 27 12:01:03 node01.xyz.co.za cockpit-tls[2118]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.

Oct 27 12:05:05 node01.xyz.co.za systemd[1]: Starting Cockpit Web Service...

Oct 27 12:05:05 node01.xyz.co.za systemd[1]: Started Cockpit Web Service.

Oct 27 12:05:05 node01.xyz.co.za cockpit-tls[2410]: cockpit-tls: 
gnutls_handshake failed: Decryption has failed.
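
One way to check which certificate cockpit is actually serving on port 9090 (a 
sketch; hostname taken from the log above):

# openssl s_client -connect node01.xyz.co.za:9090 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates

Cockpit reads its certificate from /etc/cockpit/ws-certs.d/, so a stale or 
mismatched certificate there would fit the handshake errors above.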

 

Yours Sincerely,

 

Henni 

 

From: i...@worldhostess.com  
Sent: Tuesday, 27 October 2020 16:45
To: 'Yedidyah Bar David' 
Cc: 'Edward Berger' ; 'users' 
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

Installing oVirt as a self-hosted engine using the Cockpit web interface, and 
AGAIN cockpit starts with http, without the default self-signed cert 
(https).

 

Any suggestions, I think this is my first problem. 

 

Yours Sincerely,

 

Henni 

 

From: Yedidyah Bar David <d...@redhat.com> 
Sent: Monday, 26 October 2020 17:57
To: i...@worldhostess.com 
Cc: Edward Berger <edwber...@gmail.com>; users 
<users@ovirt.org>
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

On Mon, Oct 26, 2020 at 11:47 AM i...@worldhostess.com wrote:

Quick question: after installing oVirt, before "hosted-engine --deploy", is the 
website http? Not https.

 

You mean the cockpit interface? On port 9090? That's not part of oVirt, it's in 
every el8 machine. Should be https, by default with a self-signed cert.

 

 

Yours Sincerely,

 

Henni 

 

From: Yedidyah Bar David <d...@redhat.com> 
Sent: Sunday, 25 October 2020 16:46
To: Simone Tiraboschi <stira...@redhat.com>
Cc: i...@worldhostess.com; Edward Berger 
<edwber...@gmail.com>; users <users@ovirt.org>
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

On Sat, Oct 24, 2020 at 12:31 PM Simone Tiraboschi <stira...@redhat.com> wrote:

Hi Henni,

your issue is just here:

2020-10-16 11:20:59,445+0200 DEBUG var changed: host "localhost" var 
"hostname_resolution_output" type "" value: "{
"changed": true,
"cmd": "getent ahosts node01.xyz.co.za   | grep 
STREAM",
"delta": "0:00:00.004671",
"end": "2020-10-16 11:20:59.179399",
"failed": false,
"rc": 0,
"start": "2020-10-16 11:20:59.174728",
"stderr": "",
"stderr_lines": [],
"stdout": "156.38.192.226  STREAM node01.xyz.co.za 
 ",
"stdout_lines": [
"156.38.192.226  STREAM node01.xyz.co.za  "
]
}"

 

but then...

 

2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_vm_ip_addr" type "" 
value: ""156.38.192.226""
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_vm_ip_prefix" type "" value: "29"
...
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_cloud_init_host_name" type "" value: ""engine01""
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_cloud_init_domain_name" type "" value: ""xyz.co.za 
 ""

So,

your host is named node01.xyz.co.za and it resolves 
to 156.38.192.226,

then you are trying to create a VM named engine01.xyz.co.za 
and you are trying to configure it with a statically 
set IPv4 address which is still 156.38.192.226.
This is enough to explain all the subsequent networking issues.


Please try again using two distinct IP addresses for the node and the engine VM.

 

Thanks, Simone!

 


ciao,

Simone

 

 

On Sat, Oct 24, 2020 at 8:35 AM i...@worldhostess.com wrote:

File 1

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201024072802-qzvr7t.log

 

2020-10-24 08:10:14,990+0200 ERROR otopi.plugins.gr_he_common.core.misc 
misc._terminate:167 Hosted Engine deployment failed: please check the logs for 
the issue, fix accordingly or re-deploy from scratch.

2020-10-24 08:10:14,990+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND \ 

 

Yours Sincerely,

 

Henni 

 

From: i...@worldhostess.com 
Sent: Saturday, 24 October 2020 14:03
To: 'Yedidyah Bar David' <d...@redhat.com>
Cc: 'Edward Berger' <edwber...@gmail.com>; 
'users' <users@ovirt.org>
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

Can anyone 

[ovirt-users] when configuring multi path logical network selection area is empty, hence not able to configure multipathing

2020-10-27 Thread dhanaraj.ramesh--- via Users
Hi team,

I have a 4 node cluster where on each node I configured 2 dedicated 10 gig NICs, 
each with a dedicated subnet (NIC 1 = 10.10.10.0/24, NIC 2 = 10.10.20.0/24), 
and on the array side I configured 2 targets in the 10.10.10.0/24 subnet & another 
2 targets in the 10.10.20.0/24 subnet. Without any errors I could check in all four 
paths and mount the iscsi luns on all 4 nodes. However, when I try to 
configure multipathing at the Data Center level, I can see all the paths but not 
the logical networks; the selection area stays empty, although I configured logical 
network labels for both NICs with the dedicated names ISCSI1 & ISCSI2. These 
logical names are visible and green at the host network level, no errors; they are 
just L2 IP config. 

Am I missing something here? What else should I do to enable multipathing? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5VLH7CG5N5CTATV3SQBSKMOW33VQUJOU/


[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread suporte
This is a simple installation with one storage, 3 hosts. One volume with 2 
bricks; the second brick was added to get more space so I could try to remove 
the disk, but without success. 

]# df -h 
Filesystem Size Used Avail Use% Mounted on 
/dev/md127 50G 2.5G 48G 5% / 
devtmpfs 7.7G 0 7.7G 0% /dev 
tmpfs 7.7G 16K 7.7G 1% /dev/shm 
tmpfs 7.7G 98M 7.6G 2% /run 
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup 
/dev/md126 1016M 194M 822M 20% /boot 
/dev/md125 411G 407G 4.1G 100% /home 
gfs2.domain.com:/data 461G 414G 48G 90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data 
tmpfs 1.6G 0 1.6G 0% /run/user/0 


# gluster volume status data 
Status of volume: data 
Gluster process TCP Port RDMA Port Online Pid 
------------------------------------------------------------------------------
Brick gfs2.domain.com:/home/brick1 49154 0 Y 8908 
Brick gfs2.domain.com:/brickx 49155 0 Y 8931 

Task Status of Volume data 
------------------------------------------------------------------------------
There are no active volume tasks 

# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gfs2.domain.com:/home/brick1 
Brick2: gfs2.domain.com:/brickx 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on 


De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 27 De Outubro de 2020 1:00:08 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

So what is the output of "df" against: 
- all bricks in the volume (all nodes) 
- the mount point in /rhev/mnt/ 

Usually adding a new brick (per host) in a replica 3 volume should provide you 
more space. 
Also, what is the status of the volume: 

gluster volume status <volname> 
gluster volume info <volname> 


Best Regards, 
Strahil Nikolov 






On Thursday, 15 October 2020 at 16:55:27 GMT+3,  wrote: 





Hello, 

I just added a second brick to the volume. Now I have 10% free, but I still 
cannot delete the disk. Still the same message: 

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',) 

Any idea? 
Thanks 

José 

 
De: "Strahil Nikolov"  
Para: supo...@logicworks.pt 
Cc: "users"  
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27 
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full 

Any option to extend the Gluster volume? 

Other approaches are quite destructive. I guess you can obtain the VM's xml 
via virsh and then copy the disks to another pure-KVM host. 
Then you can start the VM while you are recovering from the situation. 

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml <VM_NAME> > /some/path/<VM_NAME>.xml 

Once you have got the VM running on a pure-KVM host, you can go to oVirt and 
try to wipe the VM from the UI. 


Usually that 10% reserve is just in case something like this happens, 
but Gluster doesn't check it every second (or the overhead would be crazy). 

Maybe you can extend the Gluster volume temporarily, till you manage to move 
the VM away to a bigger storage. Then you can reduce the volume back to its 
original size. 

Best Regards, 
Strahil Nikolov 



On Tuesday, 22 September 2020 at 14:53:53 GMT+3, supo...@logicworks.pt 
wrote: 





Hello Strahil, 

I just set cluster.min-free-disk to 1%: 
# gluster volume info data 

Volume Name: data 
Type: Distribute 
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: node2.domain.com:/home/brick1 
Options Reconfigured: 
cluster.min-free-disk: 1% 
cluster.data-self-heal-algorithm: full 
performance.low-prio-threads: 32 
features.shard-block-size: 512MB 
features.shard: on 
storage.owner-gid: 36 
storage.owner-uid: 36 
transport.address-family: inet 
nfs.disable: on 

But I still get the same error: Error while executing action: Cannot move Virtual 
Disk. Low disk space on Storage Domain. 
I restarted the glusterfs volume, 
but I can not do anything with the VM disk. 


I know that filling the bricks is very bad; we lost access to the VM. I think 
there should be a mechanism to prevent stopping the VM: 
we should continue to have access to the VM to free some space. 

If you have a VM with a thin-provisioned disk and the VM fills the entire disk, 
you get the same problem. 

Any idea? 

Thanks 

José 



 
De: "Strahil Nikolov"  
Para: "users" , supo...@logicworks.pt 
Enviadas: Segunda-feira, 21 De Setembro de 2020 21:28:10 
Assunto: Re: [ovirt-users] Gluster Domain Storage full 

Usually gluster has a 10% reserve defined in 

[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-27 Thread info
Installing oVirt as a self-hosted engine using the Cockpit web interface, and 
AGAIN cockpit starts with http, without the default self-signed cert 
(https).

 

Any suggestions, I think this is my first problem. 

 

Yours Sincerely,

 

Henni 

 

From: Yedidyah Bar David  
Sent: Monday, 26 October 2020 17:57
To: i...@worldhostess.com
Cc: Edward Berger ; users 
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

On Mon, Oct 26, 2020 at 11:47 AM i...@worldhostess.com wrote:

Quick question: after installing oVirt, before "hosted-engine --deploy", is the 
website http? Not https.

 

You mean the cockpit interface? On port 9090? That's not part of oVirt, it's in 
every el8 machine. Should be https, by default with a self-signed cert.

 

 

Yours Sincerely,

 

Henni 

 

From: Yedidyah Bar David <d...@redhat.com> 
Sent: Sunday, 25 October 2020 16:46
To: Simone Tiraboschi <stira...@redhat.com>
Cc: i...@worldhostess.com; Edward Berger 
<edwber...@gmail.com>; users <users@ovirt.org>
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

On Sat, Oct 24, 2020 at 12:31 PM Simone Tiraboschi <stira...@redhat.com> wrote:

Hi Henni,

your issue is just here:

2020-10-16 11:20:59,445+0200 DEBUG var changed: host "localhost" var 
"hostname_resolution_output" type "" value: "{
"changed": true,
"cmd": "getent ahosts node01.xyz.co.za   | grep 
STREAM",
"delta": "0:00:00.004671",
"end": "2020-10-16 11:20:59.179399",
"failed": false,
"rc": 0,
"start": "2020-10-16 11:20:59.174728",
"stderr": "",
"stderr_lines": [],
"stdout": "156.38.192.226  STREAM node01.xyz.co.za 
 ",
"stdout_lines": [
"156.38.192.226  STREAM node01.xyz.co.za  "
]
}"

 

but then...

 

2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_vm_ip_addr" type "" 
value: ""156.38.192.226""
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_vm_ip_prefix" type "" value: "29"
...
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_cloud_init_host_name" type "" value: ""engine01""
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var 
"he_cloud_init_domain_name" type "" value: ""xyz.co.za 
 ""

So,

your host is named node01.xyz.co.za and it resolves 
to 156.38.192.226,

then you are trying to create a VM named engine01.xyz.co.za 
and you are trying to configure it with a statically 
set IPv4 address which is still 156.38.192.226.
This is enough to explain all the subsequent networking issues.


Please try again using two distinct IP addresses for the node and the engine VM.

 

Thanks, Simone!

 


ciao,

Simone

 

 

On Sat, Oct 24, 2020 at 8:35 AM i...@worldhostess.com wrote:

File 1

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201024072802-qzvr7t.log

 

2020-10-24 08:10:14,990+0200 ERROR otopi.plugins.gr_he_common.core.misc 
misc._terminate:167 Hosted Engine deployment failed: please check the logs for 
the issue, fix accordingly or re-deploy from scratch.

2020-10-24 08:10:14,990+0200 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:204 DIALOG:SEND \ 

 

Yours Sincerely,

 

Henni 

 

From: i...@worldhostess.com 
Sent: Saturday, 24 October 2020 14:03
To: 'Yedidyah Bar David' <d...@redhat.com>
Cc: 'Edward Berger' <edwber...@gmail.com>; 
'users' <users@ovirt.org>
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

Can anyone explain to me how to use the “screen -d -r” option

 

Start the deployment script:

# hosted-engine --deploy


To escape the script at any time, use the Ctrl+D keyboard combination to abort 
deployment. In the event of session timeout or connection disruption, run 
screen -d -r to recover the deployment session.
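
A minimal sketch of that workflow (the session name is arbitrary):

# screen -S he-deploy        # start a named screen session
# hosted-engine --deploy     # run the deployment inside it
# ...connection drops...
# screen -d -r he-deploy     # later: detach elsewhere and reattach here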

 

 

Yours Sincerely,

 

Henni 

 

From: Yedidyah Bar David <d...@redhat.com> 
Sent: Wednesday, 21 October 2020 15:04
To: i...@worldhostess.com 
Cc: Edward Berger <edwber...@gmail.com>; users 
<users@ovirt.org>
Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & 
frustrated]

 

On Wed, Oct 21, 2020 at 4:19 AM i...@worldhostess.com wrote:

Did you try to ssh to the engine VM?
ssh is disconnecting within 1 second to 30 seconds, impossible to 
perform anything. 

 

ssh to the host? Or to the engine vm?

 

If to the host, then you have some severe networking issues, I suggest to 
handle this first.

 


Command line install "hosted-engine --deploy": it gets to this point (see 
below) and disconnects 

[ovirt-users] Re: problems installing standard Linux as nodes in 4.4

2020-10-27 Thread Martin Perina
Hi Gianluca,

happy to hear that your issue was fixed!

Just please be aware that iptables support for hosts has been deprecated
and it's completely unsupported for cluster levels 4.4 and up. So unless
you switch your cluster to firewalld, you will not be able to upgrade your
cluster to 4.4 version. You can take a look at documentation how to prepare
custom firewall rules for firewalld:

https://www.ovirt.org/documentation/administration_guide/#Configuring_Host_Firewall_Rules

Regards,
Martin


On Mon, Oct 26, 2020 at 7:22 PM Gianluca Cecchi 
wrote:

> On Thu, Oct 15, 2020 at 12:25 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Oct 15, 2020 at 10:41 AM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> Any feedback on my latest comments?
>>> In the meantime here:
>>>
>>> https://drive.google.com/file/d/1iN37znRtCo2vgyGTH_ymLhBJfs-2pWDr/view?usp=sharing
>>> you can find inside the sosreport in tar.gz format, where I have
>>> modified some file names and context in respect of hostnames.
>>> The only file I have not put inside is the dump of the database, but I
>>> can run any query you like in case.
>>>
>>> Gianluca
>>>
>>>
>>
>> I have also tried to put debug into the engine.
>>
>>
> So after huge debugging work with Dana Elfassy and Martin Necas (thank you
> very much to both!) and coordination of Sandro we found the culprit!
>
> Inside firewall custom rules of my engine I had this (note the double
> quotes for the comment about Nagios):
>
> [root@ovmgr1 ovirt-engine]# engine-config -g IPTablesConfigSiteCustom
> IPTablesConfigSiteCustom: -A INPUT -p tcp --dport 5666 -s 10.4.5.99/32 -m
> comment --comment "Nagios NRPE daemon" -j ACCEPT version: general
> [root@ovmgr1 ovirt-engine]#
>
> So those double quotes caused a wrongly formatted json block that
> ansible-runner-service was not able to manage in the HTTP POST phase
>
> After changing with single quotes, with this command:
>
> engine-config -s IPTablesConfigSiteCustom="-A INPUT -p tcp --dport 5666 -s
> 10.4.5.99/32 -m comment --comment 'Nagios NRPE daemon' -j ACCEPT"
>
> and restarting the engine so that now I have
>
> [root@ovmgr1 ovirt-engine]# engine-config -g IPTablesConfigSiteCustom
> IPTablesConfigSiteCustom: -A INPUT -p tcp --dport 5666 -s 10.4.5.99/32 -m
> comment --comment 'Nagios NRPE daemon' -j ACCEPT version: general
> [root@ovmgr1 ovirt-engine]#
>
> I was able to add the CentOS 8.2 host.
> So mind if you have the double quotes in any engine-config key before
> upgrading from 4.3 to 4.4.
>
> What a nasty thing to detect...
> Thanks again guys for your help
>
> Gianluca
>
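
For reference, a quick way to scan for other engine-config keys carrying 
embedded double quotes before such an upgrade (a sketch; engine-config -a 
lists all configured values):

# engine-config -a | grep '"'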


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QJI6BLUQ43N7RYGEUAPVWKXYOSKY4AVZ/


[ovirt-users] Re: Left over hibernation disks that we can't delete

2020-10-27 Thread james
Morning Sandro,

we are running ovirt 4.4.2.6-1.el8 with iSCSI storage on CentOS 8. We have 20 
VMs on the system. We put all of them into hibernation and only one gave us 
this issue.

However, over the weekend we had to reboot the VM which had the metadata and 
memory dump disks, and when we rebooted, these disks disappeared, so the problem 
has resolved itself. 

Thanks for your time. 

James
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RUKQJCZX4CMRB6DARAE2ZQFVJID4ZRXJ/


[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread Gobinda Das
Hello Marcel,
 For a note, you can't expand your single gluster node cluster to 3
nodes.Only you can add compute nodes.
If you want to add compute nodes then you do not need any glusterfs
packages to be installed. Only ovirt packages are enough to add host as a
compute node.

On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal  wrote:

> Hey Marcel!
>
> You have to install the required glusterfs packages and then deploy the
> gluster setup on the 2 new hosts. After creating the required LVs, VGs,
> thinpools, mount points and bricks, you'll have to expand the
> gluster-cluster from the current host using add-brick functionality from
> gluster. After this you can add the 2 new hosts to your existing
> ovirt-engine.
>
> On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse 
> wrote:
>
>> Hi,
>>
>> I got a problem with my Ovirt installation. Normally we deliver Ovirt as
>> single node installation and we told our service guys if the internal
>> client will have more redundancy we need two more server and add this to
>> the single node installation. i thought that no one would order two new
>> servers.
>>
>> Now I have the problem to get the system running.
>>
>> First item is, that this environment has no internet access. So I can't
>> install software by update with yum.
>> The Ovirt installation is running on Ovirt node 4.3.9 boot USB Stick. All
>> three servers have the same software installed.
>> On the single node I have installed the hosted Engine package 1,1 GB to
>> deploy the self-hosted engine without internet. That works.
>>
>> Gluster, Ovirt, Self-Hosted engine are running on the server 01.
>>
>> What should I do first?
>>
>> Deploy the Glusterfs first and then add the two new hosts to the single
>> node installation?
>> Or should I deploy a new Ovirt System to the two new hosts and add later
>> the cleaned host to the new Ovirt System?
>>
>> I have not found any items in this mailing list which gives me an idea
>> what I should do now.
>>
>>
>> Br
>> Marcel
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNV5WNKTGUSU5DB2CFR67FROXMMDCPCD/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VXJVTPD24CQWTKS3QGINEKGT3NXVCP5/
>


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DVHGMMOK3HYHA6DZLAOGSPDBIMRO5FWT/