[ovirt-users] Re: Gluster Domain Storage full

2020-10-27 Thread Strahil Nikolov via Users
You have exactly 90% used space.
The Gluster's default protection value is exactly 10%:


Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts 
balancing out the cluster, and logs will appear in log files

I would recommend temporarily dropping that value until you finish the cleanup.

gluster volume set data cluster.min-free-disk 5%

To restore the default values of any option:
gluster volume reset data cluster.min-free-disk
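
To double-check what is currently active for the volume (assuming it is named 'data' as above):

gluster volume get data cluster.min-free-disk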


P.S.: Keep in mind that if your VMs use sparse disks, they will be unpaused and
start eating the last space you have left ... so be quick :)

P.S.2: I hope you know that the only supported volume types are 
'distributed-replicated' and 'replicated' :)

Best Regards,
Strahil Nikolov








On Tuesday, October 27, 2020 at 11:02:10 GMT+2, supo...@logicworks.pt wrote:

This is a simple installation with one storage domain and 3 hosts. One volume
with 2 bricks; the second brick was added to get more space so I could try to
remove the disk, but without success.

]# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/md127   50G  2.5G   48G   5% /
devtmpfs    7.7G 0  7.7G   0% /dev
tmpfs   7.7G   16K  7.7G   1% /dev/shm
tmpfs   7.7G   98M  7.6G   2% /run
tmpfs   7.7G 0  7.7G   0% /sys/fs/cgroup
/dev/md126 1016M  194M  822M  20% /boot
/dev/md125  411G  407G  4.1G 100% /home
gfs2.domain.com:/data 461G  414G   48G  90% 
/rhev/data-center/mnt/glusterSD/gfs2.domain.com:_data
tmpfs   1.6G 0  1.6G   0% /run/user/0


# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gfs2.domain.com:/home/brick1    49154 0  Y   8908
Brick gfs2.domain.com:/brickx 49155 0  Y   8931

Task Status of Volume data
--
There are no active volume tasks

# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfs2.domain.com:/home/brick1
Brick2: gfs2.domain.com:/brickx
Options Reconfigured:
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 27 De Outubro de 2020 1:00:08
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

So what is the output of "df" agianst :
- all bricks in the volume (all nodes)
- on the mount point in /rhev/mnt/

Usually adding a new brick (per host) in replica 3 volume should provide you 
more space.
Also what is the status of the volume:

gluster volume status 
gluster volume info  


Best Regards,
Strahil Nikolov






On Thursday, October 15, 2020 at 16:55:27 GMT+3,  wrote:

Hello,

I just added a second brick to the volume. Now I have 10% free, but still cannot
delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',)

Any idea?
Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

Any option to extend the Gluster Volume ?

Other approaches are quite destructive. I guess you can obtain the VM's XML
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml  > /some/path/.xml

Once you have the VM running on a pure-KVM host, you can go to oVirt and try to
wipe the VM from the UI.


Usually that 10% reserve is there just in case something like this happens,
but Gluster doesn't check it every second (or the overhead would be crazy).

Maybe you can extend the Gluster volume temporarily, until you manage to move
the VM away to a bigger storage. Then you can reduce the volume back to its
original size.

Best Regards,
Strahil Nikolov



On Tuesday, September 22, 2020 at 14:53:53 GMT+3, supo...@logicworks.pt wrote:

Hello Strahil,

I just set cluster.min-free-disk to 1%:
# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started

[ovirt-users] Re: Add Node to a single node installation with self hosted engine.

2020-10-27 Thread Strahil Nikolov via Users
Hello Gobinda,

I know that gluster can easily convert a distributed volume to a replica volume,
so why is it not possible to first convert to replica and then add the nodes as
HCI?
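
For reference, the conversion itself is a single add-brick call that raises the
replica count - a sketch with hypothetical host names and brick paths, assuming
the existing single-brick volume is called 'data':

gluster volume add-brick data replica 3 host2:/gluster_bricks/data/brick host3:/gluster_bricks/data/brick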

Best Regards,
Strahil Nikolov






On Tuesday, October 27, 2020 at 08:20:56 GMT+2, Gobinda Das wrote:

Hello Marcel,
For a note, you can't expand your single gluster node cluster to 3 nodes. You can
only add compute nodes.
If you want to add compute nodes then you do not need any glusterfs packages to
be installed. The oVirt packages alone are enough to add a host as a compute node.

On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal  wrote:
> Hey Marcel!
> 
> You have to install the required glusterfs packages and then deploy the 
> gluster setup on the 2 new hosts. After creating the required LVs, VGs, 
> thinpools, mount points and bricks, you'll have to expand the gluster-cluster 
> from the current host using add-brick functionality from gluster. After this 
> you can add the 2 new hosts to your existing ovirt-engine.
> 
> On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse  wrote:
>> Hi,
>> 
>> I have a problem with my oVirt installation. Normally we deliver oVirt as a
>> single node installation, and we told our service guys that if the internal
>> client wants more redundancy we need two more servers added to the single
>> node installation. I thought that no one would order two new servers.
>> 
>> Now I have the problem to get the system running.
>> 
>> The first item is that this environment has no internet access, so I can't
>> install software via yum updates.
>> The oVirt installation is running from an oVirt Node 4.3.9 boot USB stick. All
>> three servers have the same software installed.
>> On the single node I have installed the hosted engine package (1.1 GB) to
>> deploy the self-hosted engine without internet. That works.
>> 
>> Gluster, Ovirt, Self-Hosted engine are running on the server 01.
>> 
>> What should I do first?
>> 
>> Deploy the Glusterfs first and then add the two new hosts to the single
>> node installation?
>> Or should I deploy a new Ovirt System to the two new hosts and add later
>> the cleaned host to the new Ovirt System?
>> 
>> I have not found any items in this mailing list which gives me an idea
>> what I should do now.
>> 
>> 
>> Br
>> Marcel
> 


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B6INNLTZH7FMC5M4HLKN7YKZCZ3Q77E5/


[ovirt-users] Re: adminstration portal wont complete load, looping

2020-10-26 Thread Strahil Nikolov via Users
I found this in the SPAM folder ... maybe it's not relevant any more.

My guess is that you updated chrome recently and they changed something :)

In my case (openSUSE Leap 15) , it was just an ad-blocker , but I guess your 
chrome version could be newer.


Best Regards,
Strahil Nikolov






On Wednesday, September 30, 2020 at 19:51:09 GMT+3, Philip Brown wrote:

Huh!

it works with incognito mode!
O.o

Thanks for that tip.
Guess I need to figure out how to fully purge cache or something.




- Original Message -
From: "Strahil Nikolov" 
To: "users" , "Philip Brown" 
Sent: Wednesday, September 30, 2020 9:07:50 AM
Subject: Re: [ovirt-users] adminstration portal wont complete load, looping

I got the same behaviour with adblock plus add-on.

Try in incognito mode (or with disabled plugins/ new fresh browser).

Best Regards,
Strahil Nikolov






On Tuesday, September 29, 2020 at 18:50:05 GMT+3, Philip Brown wrote:

I have an odd situation:
When I go to
https://ovengine/ovirt-engine/webadmin/?locale=en_US

after authentication passes...
it shows the top banner of

oVirt OPEN VIRTUALIZATION MANAGER

and the


    Loading ...


in the center. but never gets past that. Any suggestions on how I could 
investigate and fix this?

background:
I recently updated certs to be signed wildcard certs, but this broke consoles 
somehow.
So I restored the original certs, and restarted things... but got stuck with 
this.


Interestingly, the VM portal loads fine.  But not the admin portal.



--
Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310| Fax 714.918.1325 
pbr...@medata.com| www.medata.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIK45GCGKGFKJOEE2YNH6QJMJG7SMHL3/


[ovirt-users] Re: Gluster Domain Storage full

2020-10-26 Thread Strahil Nikolov via Users
So what is the output of "df" agianst :
- all bricks in the volume (all nodes)
- on the mount point in /rhev/mnt/

Usually adding a new brick (per host) in replica 3 volume should provide you 
more space.
Also what is the status of the volume:

gluster volume status 
gluster volume info  


Best Regards,
Strahil Nikolov






On Thursday, October 15, 2020 at 16:55:27 GMT+3,  wrote:

Hello,

I just added a second brick to the volume. Now I have 10% free, but still cannot
delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',)

Any idea?
Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

Any option to extend the Gluster Volume ?

Other approaches are quite destructive. I guess , you can obtain the VM's xml 
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM , while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml  > /some/path/.xml

Once you got the VM running on a pure-KVM host , you can go to oVirt and try to 
wipe the VM from the UI. 


Usually those 10% reserve is just in case something like this one has happened, 
but Gluster doesn't check it every second (or the overhead will be crazy).

Maybe you can extend the Gluster volume temporarily , till you manage to move 
away the VM to a bigger storage. Then you can reduce the volume back to 
original size.

Best Regards,
Strahil Nikolov



On Tuesday, September 22, 2020 at 14:53:53 GMT+3, supo...@logicworks.pt wrote:

Hello Strahil,

I just set cluster.min-free-disk to 1%:
# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node2.domain.com:/home/brick1
Options Reconfigured:
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

But I still get the same error: Error while executing action: Cannot move Virtual
Disk. Low disk space on Storage Domain
I restarted the glusterfs volume.
But I can not do anything with the VM disk.


I know that filling the bricks is very bad; we lost access to the VM. I think
there should be a mechanism to prevent stopping the VM:
we should continue to have access to the VM so we can free some space.

If you have a VM with a thin-provisioned disk and the VM fills the entire disk,
we get the same problem.

Any idea?

Thanks

José




De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Segunda-feira, 21 De Setembro de 2020 21:28:10
Assunto: Re: [ovirt-users] Gluster Domain Storage full

Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume 
option.
You can power off the VM , then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.

Keep in mind that filling your bricks is bad and if you eat that reserve , the 
only option would be to try to export the VM as OVA and then wipe from current 
storage and import in a bigger storage domain.

Of course it would be more sensible to just expand the gluster volume (either 
scale-up the bricks -> add more disks, or scale-out -> adding more servers with 
disks on them), but I guess that is not an option - right ?

Best Regards,
Strahil Nikolov








On Monday, September 21, 2020 at 15:58:01 GMT+3, supo...@logicworks.pt wrote:

Hello,

I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM.
The VM filled all the Domain storage.
The Linux filesystem has 4.1G available and 100% used; the mounted brick has
0 GB available and 100% used.

I cannot do anything with this disk; for example, if I try to move it to
another Gluster Domain Storage I get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFN2VOQZPPVCGXAIFEYVIDEVJEUCSWY7/

[ovirt-users] Question about OVN MTU

2020-10-26 Thread Strahil Nikolov via Users
Hello All,

I would like to learn more about OVN and especially the maximum MTU that I can 
use in my environment.

Current Setup 4.3.10
Network was created via UI -> MTU Custom -> 8976 -> Create on External Provider 
-> Connect to Physical Network

So my physical connection is MTU 9000 and I have read that Geneve uses 24 bytes
(maybe that's wrong?), thus I have reduced the MTU to 8976.

I did some testing on the VMs and a ping with a payload of '8914' was the maximum
I could pass without fragmenting, and thus the MTU on the VMs was set to 8942.

Did I correctly configure the test network's MTU, and am I understanding it
correctly that we need an extra 34 bytes inside the network for encapsulation?

I have checked 
https://www.ovirt.org/develop/release-management/features/network/managed_mtu_for_vm_networks.html
but I don't see any reference for how to calculate the max MTU.
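
For what it's worth, a quick way to verify the usable MTU empirically from inside
a VM (a sketch, assuming a Linux guest and a reachable peer on the same OVN
network; <peer_ip> is a hypothetical placeholder for that peer):

# 8914-byte ICMP payload + 8 (ICMP) + 20 (IPv4) = an 8942-byte packet; -M do forbids fragmentation
ping -M do -s 8914 -c 3 <peer_ip>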


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CKNS5T5KE2W5EIXBTGQDU3URKHQDVAM4/


[ovirt-users] Re: Improve glusterfs performance

2020-10-25 Thread Strahil Nikolov via Users
It seems that your e-mail went to the spam.

I would start with isolating the issue:
1. Is this a VM-specific issue or a wider issue?
- Can you move another VM to that storage domain and verify performance ?
- Can you create/migrate a VM of the same OS type and check performance?
- What about running a VM with a different version of Windows, or even better -> 
Linux ?


Also I would check the systems from bottom up.
Check the following:
- Are you using Latest Firmware/OS updates for the Hosts and Engine ?
- What is your cmdline :
cat /proc/cmdline
- Tuned profile
- Are you using a HW controller for the Gluster bricks ? Health of controller 
and disks ?
- Is the FS aligned properly to the HW controller (stripe unit and stripe width 
)? More details on 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#Hardware_RAID

- FS Fragmentation ? Is your FS full ?

- Gluster volume status ? Client count ? (for example: gluster volume status 
all client-list)
- Gluster version, cluster.op-version and cluster.max-op-version ?

- If your network is slower than the bricks' backend (disks faster than 
network) , you can set cluster.choose-local to 'yes'

- Any errors and warnings in the gluster logs ?


Best Regards,
Strahil Nikolov





On Thursday, October 22, 2020 at 13:59:04 GMT+3,  wrote:

Hello,

For example, a Windows machine runs too slowly; usually its disk is always at 100%.
Are these the 'virt' group settings?:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
features.shard=on
user.cifs=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on



De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14
Assunto: Re: [ovirt-users] Improve glusterfs performance

Usually, oVirt uses the 'virt' group of settings.

What are you symptoms ?

Best Regards,
Strahil Nikolov






On Wednesday, October 21, 2020 at 16:44:50 GMT+3, supo...@logicworks.pt wrote:

Hello,

Can anyone help me in how can I improve the performance of glusterfs to work 
with oVirt?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VIFTSAFCHMJXPAPTL34SN47RD2CK46TY/


[ovirt-users] Re: Storage Domain Data (Master)

2020-10-23 Thread Strahil Nikolov via Users
Hm... interesting case.

Have you tried to set it into maintenance ? Setting a domain to maintenance 
forces oVirt to pick another domain for master.

Best Regards,
Strahil Nikolov






On Friday, October 23, 2020 at 19:34:19 GMT+3, supo...@logicworks.pt wrote:

When Data (Master) is down, are the other data domains down too?

What is the best practice when a problem occurs on the Data Master?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/US7NILCO62ARCBFLDPVYUPE4LMDTDG5N/


[ovirt-users] Re: Manual VM Migration fails

2020-10-23 Thread Strahil Nikolov via Users
Can you try to set the destination host into maintenance and then 'reinstall' 
from the web UI drop down ?

Best Regards,
Strahil Nikolov






On Friday, October 23, 2020 at 18:00:07 GMT+3, Anton Louw via Users wrote:

Apologies, I should also add that the destination node is a new node that was 
added. Below are the CPU Types:

 

Sources: Intel Westmere IBRS SSBD Family

Destination: Intel Broadwell Family

 

I have set the CPU configs on the cluster to Intel Westmere IBRS SSBD Family, 
after seeing that the destination node was not compatible. 

 


Anton Louw
Cloud Engineer: Storage and Virtualization at Vox
T: 087 805 | D: 087 805 1572 | M: N/A | E: anton.l...@voxtelecom.co.za
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg | www.vox.co.za








From: Anton Louw
Sent: 23 October 2020 16:08
To: users@ovirt.org
Subject: Manual VM Migration fails



 

Hello Everybody,

 

I am having a strange issue. When I try and manually migrate a VM from one host 
to another, I get an error stating:

 

“Migration failed  (VM: VM1, Source: node6.example.com, Destination: 
node3.example.com)”

 

I have tried with a few different machines, and it pops up with the same error. 
I have attached the VDSM logs of both source and destination nodes. The time 
frame is 16:01

 

I see the below in that time frame, but not quite sure what I need to change:

 

2020-10-23 16:01:14,419+0200 ERROR (migsrc/42186f82) [virt.vm] 
(vmId='42186f82-b84c-7e65-e736-e6331acd04ed') Failed to migrate (migration:450)

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 431, in 
_regular_run

    time.time(), migrationParams, machineParams

  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 505, in 
_startUnderlyingMigration

    self._perform_with_conv_schedule(duri, muri)

  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 591, in 
_perform_with_conv_schedule

    self._perform_migration(duri, muri)

  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 525, in 
_perform_migration

    self._migration_flags)

  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f

    ret = attr(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper

    ret = f(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper

    return func(inst, *args, **kwargs)

  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1781, in 
migrateToURI3

    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)

libvirtError: operation failed: guest CPU doesn't match specification: missing 
features: spec-ctrl,ssbd


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3AQUJCTGIMQJ5TOMNAVU3A6QQVLE364/


[ovirt-users] Re: Gluster volume not responding

2020-10-23 Thread Strahil Nikolov via Users
Most probably, but I have no clue.

You can set the host into maintenance and then activate it, so the volume gets
mounted properly.

Best Regards,
Strahil Nikolov






On Friday, October 23, 2020 at 03:16:42 GMT+3, Simon Scott wrote:

Hi Strahil,




All networking configs have been checked and correct.




I just looked at the gluster volume and noticed the Mount Option 
‘logbsize=256k’ on two nodes and is not on the third node.





Status of volume: pltfm_data01
------------------------------------------------------------------------------
Brick                : Brick bdtpltfmovt01-strg:/gluster_bricks/pltfm_data01/pltfm_data01
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 24372
File System          : xfs
Device               : /dev/mapper/gluster_vg_sdb-gluster_lv_pltfm_data01
Mount Options        : rw,seclabel,noatime,nodiratime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 552.0GB
Total Disk Space     : 1.5TB
Inode Count          : 157286400
Free Inodes          : 157245903
------------------------------------------------------------------------------
Brick                : Brick bdtpltfmovt02-strg:/gluster_bricks/pltfm_data01/pltfm_data01
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 24485
File System          : xfs
Device               : /dev/mapper/gluster_vg_sdb-gluster_lv_pltfm_data01
Mount Options        : rw,seclabel,noatime,nodiratime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 552.0GB
Total Disk Space     : 1.5TB
Inode Count          : 157286400
Free Inodes          : 157245885
------------------------------------------------------------------------------
Brick                : Brick bdtpltfmovt03-strg:/gluster_bricks/pltfm_data01/pltfm_data01
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 24988
File System          : xfs
Device               : /dev/mapper/gluster_vg_sdb-gluster_lv_pltfm_data01
Mount Options        : rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 552.0GB
Total Disk Space     : 1.5TB
Inode Count          : 157286400
Free Inodes          : 157245890





Is this possibly causing the instability issues we are experiencing under load?




Regards




Simon...



> On 11 Oct 2020, at 19:18, Strahil Nikolov  wrote:
> 
> 


>  
> Hi Simon,
> 
> Usually it is the network, but you need real-world data. I would open screen 
> sessions and run ping continuously. Something like this:
> 
> while true; do echo -n "$(date) "; timeout -s 9 1 ping -c 1 ovirt2 | grep 
> icmp_seq; sleep 1; done | tee -a /tmp/icmp_log
> 
> Are all systems in the same network ?
> What about dns resolution - do you have entries in /etc/hosts ?
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> On Sunday, October 11, 2020 at 11:54:47 GMT+3, Simon Scott wrote:
> 
> 
> 
> 
> 
> 
> 
> Thanks Strahil.
> 
> 
> 
> 
> I have found between 1 & 4 Gluster peer rpc-clnt-ping timer expired messages 
> in the rhev-data-center-mnt-glusterSD-hostname-strg:_pltfm_data01.log on the 
> storage network IP. Of the 6 Hosts only 1 does not have these timeouts.
> 
> 
> 
> 
> Fencing has been disabled but can you identify which logs are key to 
> identifying the cause please.
> 
> 
> 
> 
> It's a bonded (bond1) 10GB ovirt-mgmt logical network and Prod VM VLAN 
> interface AND a bonded (bond2) 10GB Gluster storage network. 
> 
> Dropped packets are seen incrementing in the vdsm.log but neither ethtool -S 
> or kernel logs are showing dropped packets. I am wondering if they are being 
> dropped due to the ring buffers being small.
> 
> 
> 
> 
> Kind Regards
> 
> 
> 
> 
> Shimme
> 
> 
> 
> 
> 
>  
> From: Strahil Nikolov 
> Sent: Thursday 8 October 2020 20:40
> To: users@ovirt.org ; Simon Scott 
> Subject: Re: [ovirt-users] Gluster volume not responding 
>  
> 
> 
> 
> 
>> Every Monday and Wednesday morning there are gluster connectivity timeouts 
>> >but all checks of the network and network configs are ok.
>> 
> 
> Based on this one I make the following conclusions:
> 1. Issue is reoccuring
> 2. You most probably have a network issue
> 
> Have you checked the following:
> - are there any ping timeouts between fuse clients and gluster nodes
> - Have you tried to disable fencing and check the logs after the issue 
> reoccurs
> - Are you sharing Backup and Prod networks ? Is it possible some 
> backup/other production load in your environment to "black-out" your oVirt ?
> - Have you check the gluster cluster's logs for anything meaningful ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Improve glusterfs performance

2020-10-22 Thread Strahil Nikolov via Users
Virt settings are those:

[root@ovirt1 slow]# cat /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=1
features.shard=on
user.cifs=off
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on


I am using cluster.choose-local set to on, as my network is far slower than the
bricks.

Another optimization you can consider is to test with different values of
event-threads, shd-max-threads and maybe low-prio-threads - but this is valid
for fast SSDs/NVMes.
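
For example, the thread counts are ordinary volume options, so they can be raised
and rolled back per volume (a sketch assuming a volume named 'data'; change one
option at a time and measure):

gluster volume set data client.event-threads 8
gluster volume set data server.event-threads 8
gluster volume set data cluster.shd-max-threads 16
# revert an option to its default if it does not help
gluster volume reset data client.event-threads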
Of course, it's reasonable to have a production environment that is separate 
from test :)

Best Regards,
Strahil Nikolov






В четвъртък, 22 октомври 2020 г., 14:00:52 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello,

For example, a Windows machine runs too slowly; usually its disk is always at 100%.
Are these the 'virt' group settings?:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
features.shard=on
user.cifs=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on



De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Quarta-feira, 21 De Outubro de 2020 19:22:14
Assunto: Re: [ovirt-users] Improve glusterfs performance

Usually, oVirt uses the 'virt' group of settings.

What are you symptoms ?

Best Regards,
Strahil Nikolov






В сряда, 21 октомври 2020 г., 16:44:50 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello,

Can anyone help me in how can I improve the performance of glusterfs to work 
with oVirt?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5R4HLMFBQBQA4XUKUUDNNJAT3T2B6LMA/


[ovirt-users] Re: ovirt Storage Domain Cluster Filesystem?

2020-10-22 Thread Strahil Nikolov via Users
I might be wrong, but I think that the SAN LUN is used as a PV and then each
disk is an LV from the host's perspective.

Of course, I could be wrong and someone can correct me. All my oVirt
experience is based on HCI (Gluster + oVirt).
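
A quick way to see that layout on a host with an iSCSI/FC storage domain attached
(a sketch; the VG is named after the storage-domain UUID and each virtual disk or
snapshot is an LV, so the UUID below is a hypothetical placeholder):

vgs
lvs <storage-domain-UUID>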

Best Regards,
Strahil Nikolov






В четвъртък, 22 октомври 2020 г., 09:38:43 Гринуич+3, lsc.or...@gmail.com 
 написа: 





Hi all

I come from Oracle VM x86 world and we are planning from moving Oracle VM to 
oVirt.

I am having hard time understanding Storage Domains in oVirt. All our storage 
are SAN and I wonder how can we manage SAN LUN in oVirt to create a storage 
domain such that the VM guests can run in any host in the oVirt Cluster?

For example in Oracle VM the Storage Repository (it is the Storage Domain in 
OVM words) are based on SAN LUNs and on top of that a cluster filesystem is 
created so all hosts in the cluster have concurrent access to the storage 
repository and the VM guest can be started in any of the hosts in the cluster.

How do we accomplish the same in oVirt with SAN Storage? Which Cluster 
Filesystem is supported in Storage Domain?

Or perhaps in oVirt the mechanism is totally different?

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TVEIHJOFIW23AU7DVACYFJQWQCDTW4BM/


[ovirt-users] Re: Unable to connect to postgres database on oVirt engine

2020-10-22 Thread Strahil Nikolov via Users
Hi Didi,

thanks for the info - I learned it the hard way (trial & error) and so far it 
was working.

Do we have an entry about that topic in the documentation ?


Best Regards,
Strahil Nikolov






On Thursday, October 22, 2020 at 08:27:08 GMT+3, Yedidyah Bar David wrote:

On Wed, Oct 21, 2020 at 9:16 PM Strahil Nikolov via Users  
wrote:
> I usually run the following (HostedEngine):
> 
> [root@engine ~]# su - postgres
>  
> -bash-4.2$ source /opt/rh/rh-postgresql10/enable  

This is applicable to 4.3, on el7. For 4.4 this isn't needed.
Also, IIRC this isn't the official method to activate an SCL component, but:

$ scl enable rh-postgresql10 bash

Which is a significant difference - it runs a process ('bash', in this case, 
but you can run psql directly) "inside" the environment.

A somewhat more official (although IIRC still not the recommended) way is:

. scl_source enable rh-postgresql10

(Or 'source' instead of '.', although '.' is the POSIX standard).
 
>  -bash-4.2$ psql engine

This works if the engine db is automatically provisioned by engine-setup (which 
is the default).

engine-psql.sh works regardless of version, remote or not, etc. It's a pretty 
simple script - you can
easily read it to see what it does.

Best regards,

 
>  
> 
> How did you try to access the Engine's DB ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> 
> 
> On Tuesday, October 20, 2020 at 17:00:37 GMT+3, kushagra2agar...@gmail.com wrote:
> 
> 
> 
> 
> 
> I am unable to connect to postgres database running on oVirt engine and while 
> troubleshooting found below possible issue.
> 
> semanage fcontext -a -t postgresql_exec_t 
> /opt/rh/rh-postgresql10/root/usr/bin/psql
> ValueError: File spec /opt/rh/rh-postgresql10/root/usr/bin/psql conflicts 
> with equivalency rule '/opt/rh/rh-postgresql10/root /'; Try adding 
> '//usr/bin/psql' instead
> 
> Can someone help to resolve this issue.
> 
> 


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KFMWCGRAH6OFKWNJG3R7LORXPAKQHHZ4/


[ovirt-users] Re: Improve glusterfs performance

2020-10-22 Thread Strahil Nikolov via Users
I agree with Alex.

Also, most of the kernel tunables proposed in that thread are also available in 
the tuned profiles provided by the redhat-storage-server source rpm available 
at ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/

Usually the alignment of XFS on top of the HW RAID is quite important and often missed.
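
For illustration, the alignment is passed to mkfs at creation time from the RAID
geometry (a sketch with hypothetical values: a RAID 10 set with a 256 KiB stripe
unit, 6 data-bearing disks, and device /dev/sdX):

# su = controller stripe unit, sw = number of data-bearing disks in the stripe
mkfs.xfs -f -i size=512 -d su=256k,sw=6 /dev/sdX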

Best Regards,
Strahil Nikolov






On Wednesday, October 21, 2020 at 22:35:21 GMT+3, Alex McWhirter wrote:

In my experience, the ovirt optimized defaults are fairly sane. I may change a 
few things like enabling read ahead or increasing the shard size, but these are 
minor performance bumps if anything.



The most important thing is the underlying storage, RAID 10 is ideal 
performance wise, large stripe sizes are preferable for VM workloads, etc... 
You want the underlying storage to be as fast as possible, dedicated cache 
devices are a plus. IOPS and latency are often more important that throughput.



Network throughput and latency are also very important. I don't think I would 
attempt a gluster setup on anything slower than 10GbE; jumbo frames are a huge 
help, and switches with large buffers are nice as well. Do not L3 route gluster (at 
least not inter-server links) unless you have a switch that can do line rate 
routing. High or inconsistent network latency will bring gluster to its knees.



Kernel tuning / gluster volume options do help, but they are not groundbreaking 
performance improvements. Usually just little speed boosts here and there.



On 2020-10-21 12:30, eev...@digitaldatatechs.com wrote:

> Here is the post link:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/I62VBDYPQIWPRE3LKUUVSLHPZJVQBT4X/
> 
>  
> 
> Here is the actual link:
> 
> https://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/
> 
>  
> 
>  
> 
>  
> Eric Evans
> 
> Digital Data Services LLC.
> 
> 304.660.9080
> 
> 
> 
> 
>  
> 
>  
>  
> From: supo...@logicworks.pt  
> Sent: Wednesday, October 21, 2020 11:47 AM
> To: eev...@digitaldatatechs.com
> Cc: users 
> Subject: Re: [ovirt-users] Improve glusterfs performance
> 
> 
> 
>  
> 
>  
>  
> Thanks.
> 
> 
>  
> This is what I found from you related to glusterfs: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DN47OTYUUCOAFDD6QQC333AW5RBBN6SM/
> 
> 
>  
>  
> 
> 
>  
> But I don't find anything on how to improve gluster.
> 
> 
>  
>  
> 
> 
>  
> José
> 
> 
>  
>  
> 
> 
> 
> 
>  
> From: eevans@digitaldatatechs.com
> To: supo...@logicworks.pt, "users"
> Sent: Wednesday, October 21, 2020 15:53:00
> Subject: RE: [ovirt-users] Improve glusterfs performance
> 
> 
>  
>  
> 
> 
>  
>  
> I posted a link in the users list that details how to improve gluster and 
> improve performance. Search gluster.
> 
>  
> 
>  
> Eric Evans
> 
> Digital Data Services LLC.
> 
> 304.660.9080
> 
> 
> 
> 
>  
> 
>  
>  
> From: supo...@logicworks.pt
> Sent: Wednesday, October 21, 2020 9:42 AM
> To: users
> Subject: [ovirt-users] Improve glusterfs performance
> 
> 
> 
>  
> 
>  
>  
> Hello,
> 
> 
>  
>  
> 
> 
>  
> Can anyone help me in how can I improve the performance of glusterfs to work 
> with oVirt?
> 
> 
>  
>  
> 
> 
>  
> Thanks
> 
> 
>  
>  
> 
> 
>  
> -- 
> 
> 
>  
> 
> 
>  
> Jose Ferradeira
> http://www.logicworks.pt
> 
> 
> 
> 
> 
>  
> 
> 
> 
> 
> 
> 
> 
> 
> 



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5H2XLC2GZV2QWGWTAHMYQMV3X4OCHOWC/


[ovirt-users] Re: Improve glusterfs performance

2020-10-21 Thread Strahil Nikolov via Users
Usually, oVirt uses the 'virt' group of settings.
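
For reference, the whole group can be applied to a volume in one command
(assuming the volume is named 'data'):

gluster volume set data group virt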

What are your symptoms ?

Best Regards,
Strahil Nikolov






On Wednesday, October 21, 2020 at 16:44:50 GMT+3, supo...@logicworks.pt wrote:

Hello,

Can anyone help me in how can I improve the performance of glusterfs to work 
with oVirt?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ARRR6FNLCAI2BIJNKIE6ZU4ZN3MT5MUQ/


[ovirt-users] Re: multiple new networks

2020-10-21 Thread Strahil Nikolov via Users
Have you checked the ovirt_host_network ansible module ?
It has a VLAN example and I guess you can loop over all the VLANs, as sketched below.
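
An untested sketch of that loop (it assumes the ovirt.ovirt collection, an
existing ovirt_auth login stored in the ovirt_auth variable, hypothetical host
and interface names, and VLAN tags 100-499 with networks named after their tag):

- hosts: localhost
  tasks:
    - name: Create one tagged VM network per VLAN in the Default data center
      ovirt.ovirt.ovirt_network:
        auth: "{{ ovirt_auth }}"
        data_center: Default
        name: "vlan{{ item }}"
        vlan_tag: "{{ item }}"
        vm_network: true
      loop: "{{ range(100, 500) | list }}"

    - name: Attach each network to a NIC on one host
      ovirt.ovirt.ovirt_host_network:
        auth: "{{ ovirt_auth }}"
        name: host01.example.com
        interface: eth1
        networks:
          - name: "vlan{{ item }}"
      loop: "{{ range(100, 500) | list }}"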

Best Regards,
Strahil Nikolov






On Wednesday, October 21, 2020 at 11:12:53 GMT+3, kim.karga...@noroff.no wrote:

Hi all,

We have oVirt 4.3 with 11 hosts, and we need a bunch of VLANs for our students
to be isolated and do specific things. We have created the VLANs on the switch,
but we need to create them in the admin portal, with VLAN tagging, and then add
them to the interface on the hosts. We are talking about 400 VLANs. I have done
this manually for 4 VLANs and all works fine, but I was wondering if there is a
way of doing this in one go for all, so I don't have to do it 400 times (at
least for creating the VLANs in the admin portal).

Thanks.

Kim
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFZQXYKNBSQZ4IFN44DBVTYLUDDCUOE3/


[ovirt-users] Re: Unable to connect to postgres database on oVirt engine

2020-10-21 Thread Strahil Nikolov via Users
I usually run the following (HostedEngine):

[root@engine ~]# su - postgres
 
-bash-4.2$ source /opt/rh/rh-postgresql10/enable  
-bash-4.2$ psql engine


How did you try to access the Engine's DB ?

Best Regards,
Strahil Nikolov








On Tuesday, October 20, 2020 at 17:00:37 GMT+3, kushagra2agar...@gmail.com wrote:

I am unable to connect to postgres database running on oVirt engine and while 
troubleshooting found below possible issue.

semanage fcontext -a -t postgresql_exec_t 
/opt/rh/rh-postgresql10/root/usr/bin/psql
ValueError: File spec /opt/rh/rh-postgresql10/root/usr/bin/psql conflicts with 
equivalency rule '/opt/rh/rh-postgresql10/root /'; Try adding '//usr/bin/psql' 
instead

Can someone help to resolve this issue.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKCIV56ELQRI4HTGK2VPYCP35YXMJYXF/


[ovirt-users] Re: Hyperconverged Gluster Deployment in cockpit with zfs

2020-10-21 Thread Strahil Nikolov via Users
The ansible role for Gluster expects raw devices and then it deploys it the 
conventional way (forget about ZoL with that role).

I think that you can create and mount your filesystems and deploy gluster all
by yourself - it's not so hard ... Just follow Gluster's official documentation
and skip the mkfs.xfs part.

https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
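
A minimal sketch of that manual path (hypothetical pool, dataset and node names;
it assumes a replica 3 volume across three nodes with the ZFS datasets already
mounted):

# on each node: a dataset mounted where the brick will live
zfs create -o mountpoint=/gluster_bricks/data tank/data
# from any one node: create and start the volume on a subdirectory of each dataset
gluster volume create data replica 3 node1:/gluster_bricks/data/brick node2:/gluster_bricks/data/brick node3:/gluster_bricks/data/brick
gluster volume start data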


Best Regards,
Strahil Nikolov



On Tuesday, October 20, 2020 at 13:36:58 GMT+3, harryo...@gmail.com wrote:

Hi,
When I want to use zfs for software raid on my oVirt nodes instead of a
hardware raid controller, I don't know what to type in "Device Name". I don't
know if this step should be skipped for zfs raid, and I don't know the location
of my zfs vdev or if there is anything else I should input. If I set this up
via the CLI, no "Device Name" is needed in the process, so why is it then needed
in the Hyperconverged Gluster Deployment in cockpit? There are plenty of guides
for Gluster on top of zfs online, but the process differs because of the
"Device Name".
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ODLZXADENTPYXOHBSOZ477JIFYXIRKHH/


[ovirt-users] Re: Enable Gluster Service

2020-10-18 Thread Strahil Nikolov via Users
I would go to the UI and identify the host with the 'SPM' flag.
Then you should check the vdsm logs on that host (/var/log/vdsm/).
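
For example (a quick, non-authoritative look for the failure around the time the
service was enabled; vdsm.log is the default log name):

grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -n 50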

Best Regards,
Strahil Nikolov






On Thursday, October 15, 2020 at 20:19:57 GMT+3, supo...@logicworks.pt wrote:

Hello, 

When I enable the Gluster Service on the cluster, the data center goes to an
invalid state.

Any idea why?


-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EI6YUIXDDDLVB6DLCK7VQC34YTCWNL3T/


[ovirt-users] Re: Gluster Domain Storage full

2020-10-18 Thread Strahil Nikolov via Users
What is the output of:
df -h /rhev/data-center/mnt/glusterSD/server_volume/

gluster volume status volume
gluster volume info volume

In the "df" you should see the new space or otherwise you won't be able to do 
anything.

Best Regards,
Strahil Nikolov






On Thursday, October 15, 2020 at 17:04:58 GMT+3, supo...@logicworks.pt wrote:

Hello,

I just added a second brick to the volume. Now I have 10% free, but still cannot
delete the disk. Still the same message:

VDSM command DeleteImageGroupVDS failed: Could not remove all image's volumes: 
(u'b6165676-a6cd-48e2-8925-43ed49bc7f8e [Errno 28] No space left on device',)

Any idea?
Thanks

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: "users" 
Enviadas: Terça-feira, 22 De Setembro de 2020 13:36:27
Assunto: Re: [ovirt-users] Re: Gluster Domain Storage full

Any option to extend the Gluster Volume ?

Other approaches are quite destructive. I guess , you can obtain the VM's xml 
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM , while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf 
dumpxml  > /some/path/.xml

Once you got the VM running on a pure-KVM host , you can go to oVirt and try to 
wipe the VM from the UI. 


Usually those 10% reserve is just in case something like this one has happened, 
but Gluster doesn't check it every second (or the overhead will be crazy).

Maybe you can extend the Gluster volume temporarily , till you manage to move 
away the VM to a bigger storage. Then you can reduce the volume back to 
original size.

Best Regards,
Strahil Nikolov



On Tuesday, September 22, 2020 at 14:53:53 GMT+3, supo...@logicworks.pt wrote:

Hello Strahil,

I just set cluster.min-free-disk to 1%:
# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node2.domain.com:/home/brick1
Options Reconfigured:
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

But still get the same error: Error while executing action: Cannot move Virtual 
Disk. Low disk space on Storage Domain
I restarted the glusterfs volume.
But I can not do anything with the VM disk.


I know that filling the bricks is very bad, we lost access to the VM. I think 
there should be a mechanism to prevent stopping the VM.
we should continue to have access to the VM to free some space.

If you have a VM with a Thin Provision disk, if the VM fills the entire disk, 
we got the same problem.

Any idea?

Thanks

José




De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Segunda-feira, 21 De Setembro de 2020 21:28:10
Assunto: Re: [ovirt-users] Gluster Domain Storage full

Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume 
option.
You can power off the VM , then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.

Keep in mind that filling your bricks is bad and if you eat that reserve , the 
only option would be to try to export the VM as OVA and then wipe from current 
storage and import in a bigger storage domain.

Of course it would be more sensible to just expand the gluster volume (either 
scale-up the bricks -> add more disks, or scale-out -> adding more servers with 
disks on them), but I guess that is not an option - right ?

Best Regards,
Strahil Nikolov








On Monday, September 21, 2020 at 15:58:01 GMT+3, supo...@logicworks.pt wrote:

Hello,

I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM.
The VM filled all the Domain storage.
The Linux filesystem has 4.1G available and 100% used, the mounted brick has 
0GB available and 100% used

I can not do anything with this disk, for example, if I try to move it to 
another Gluster Domain Storage get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFN2VOQZPPVCGXAIFEYVIDEVJEUCSWY7/

[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-15 Thread Strahil Nikolov via Users
>Please clarify what are the disk groups that you are referring to? 
Either Raid5/6 or Raid10 with a HW controller(s).


>Regarding your statement  "In JBOD mode, Red Hat support only 'replica 3' 
>>volumes." does this also mean "replica 3" variants ex. 
>"distributed-replicate" 
Nope, As far as I know - only when you have 3 copies of the data ('replica 3' 
only).

Best Regards,
Strahil Nikolov


On Wed, Oct 14, 2020 at 7:34 AM C Williams  wrote:
> Thanks Strahil !
> 
> More questions may follow. 
> 
> Thanks Again For Your Help !
> 
> On Wed, Oct 14, 2020 at 12:29 AM Strahil Nikolov  
> wrote:
>> Imagine you have a host with 60 spinning disks -> I would recommend you to
>> split it into groups of 10/12 disks, and these groups will represent several
>> bricks (6/5).
>> 
>> Keep in mind that when you start using many (some articles state hundreds , 
>> but no exact number was given) bricks , you should consider brick 
>> multiplexing (cluster.brick-multiplex).
>> 
>> So, you can use as many bricks you want , but each brick requires cpu time 
>> (separate thread) , tcp port number and memory.
>> 
>> In my setup I use multiple bricks in order to spread the load via LACP over 
>> several small (1GBE) NICs.
>> 
>> 
>> The only "limitation" is to have your data on separate hosts , so when you 
>> create the volume it is extremely advisable that you follow this model:
>> 
>> hostA:/path/to/brick
>> hostB:/path/to/brick
>> hostC:/path/to/brick
>> hostA:/path/to/brick2
>> hostB:/path/to/brick2
>> hostC:/path/to/brick2
>> 
>> In JBOD mode, Red Hat support only 'replica 3' volumes - just to keep that 
>> in mind.
>> 
>> From my perspective , JBOD is suitable for NVMEs/SSDs while spinning disks 
>> should be in a raid of some type (maybe RAID10 for perf).
>> 
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams wrote:
>> 
>> 
>> 
>> 
>> 
>> Hello,
>> 
>> I am getting some questions from others on my team.
>> 
>> I have some hosts that could provide up to 6 JBOD disks for oVirt data (not 
>> arbiter) bricks 
>> 
>> Would this be workable / advisable ?  I'm under the impression there should 
>> not be more than 1 data brick per HCI host .
>> 
>> Please correct me if I'm wrong.
>> 
>> Thank You For Your Help !
>> 
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
>> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M47YCSFNYYNPYTR7Z3TC63ZSVIR7QUGG/


[ovirt-users] Re: ovirt question

2020-10-14 Thread Strahil Nikolov via Users
Hi,

I would start with 
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/
 .
It might have some issues as 4.4 is quite fresh and dynamic, but you just need 
to ping the community for help over the e-mail.

Best Regards,
Strahil Nikolov






On Wednesday, 14 October 2020 at 10:02:13 GMT+3, 陳星宇 wrote:





  


Dear Sir:

 

Can I ask you questions about ovirt?

 

How should this system be installed?

Louis Chen 陳星宇

Tel: +886-37-586-896 Ext. 1337    Fax: +886-37-587-699

louissy_c...@phison.com    www.phison.com

Phison Electronics Corp. (群聯電子股份有限公司)
No. 1, Qunyi Road, Zhunan Township, Miaoli County 350, Taiwan

 



This message and any attachments are confidential and may be legally 
privileged. Any unauthorized review, use or distribution by anyone other than 
the intended recipient is strictly prohibited. If you are not the intended 
recipient, please immediately notify the sender, completely delete the message 
and any attachments, and destroy all copies. Your cooperation will be highly 
appreciated.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QWXPE76KMIMXPOMFP5P37FVO3MOZ4IJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2UC5RHJFXCI6AJSVH3F4V5LYFU2C2LYL/


[ovirt-users] Re: Q: Hybrid GlusterFS / local storage setup?

2020-10-14 Thread Strahil Nikolov via Users
Hi Gilboa,

I think that storage domains need to be accessible from all nodes in the 
cluster - and as yours will be using local storage and yet be in a 2-node 
cluster that will be hard.

My guess is that you can try the following cheat:

 Create a single-brick gluster volume and do some modifications:
- volume type 'replica 1'
- cluster.choose-local should be set to yes once you apply the virt group of
settings (because that group sets it to no)
- set your VMs in such a way that they don't fail over
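As a rough sketch of the above (volume and brick names are placeholders, not taken from this thread; 'force' may be needed if the brick sits on a system partition):

# gluster volume create local_vol ovirt1:/local_brick/brick force
# gluster volume set local_vol group virt
# gluster volume set local_vol storage.owner-uid 36
# gluster volume set local_vol storage.owner-gid 36
# gluster volume set local_vol cluster.choose-local on
# gluster volume start local_vol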

Of course, creation of new VMs will happen from the host with the SPM flag, but
the good thing is that you can change the host with that flag. So if your
gluster volume has a brick ovirt1:/local_brick/brick, you can set the host ovirt1 to
'SPM' and then create your VM.

Of course the above is just pure speculation, as I picked my setup to be
'replica 3 arbiter 1' and traded storage for live migration.

Best Regards,
Strahil Nikolov






On Wednesday, 14 October 2020 at 12:42:44 GMT+3, Gilboa Davara wrote:





Hello all,

I'm thinking about converting a couple of old dual Xeon V2
workstations into (yet another) oVirt setup.
However, the use case for this cluster is somewhat different:
While I do want most of the VMs to be highly available (via a 2+1 GFS
storage domain), I'd also want to pin at least one "desktop" VM to each
host (possibly with vGPU) and let this VM access the local storage
directly in order to get near-bare-metal performance.

Now, I am aware that I can simply share an LVM LV over NFS / localhost
and pin a specific VM to each specific host, and the performance will
be acceptable, but I seem to remember that there's a POSIX-FS storage
domain that at least in theory should be able to give me per-host
private storage.

A. Am I barking at the wrong tree here? Is this setup even possible?
B. If it is even possible, any documentation / pointers on setting up
per-host private storage?

I should mention that these workstations are quite beefy (64-128GB
RAM, large MDRAID, SSD, etc) so I can spare memory / storage space (I
can even split the local storage and GFS to different arrays).

- Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT24EBHC4OF7VXO7BGY3C2IT64M2T5FD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJ44CLBZ3BIVVDRONWS5NGIZ2RXGXKP7/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-14 Thread Strahil Nikolov via Users
strict-o-direct just allows the app to define whether direct I/O is needed, and yes,
that could be a reason for your data loss.

The good thing is that the feature is part of the virt group and there is an
"Optimize for Virt" button somewhere in the UI. Yet, I prefer the manual
approach of building gluster volumes, as the UI's primary focus is oVirt (quite
natural, right).
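A minimal sketch of that manual approach (the volume name "data" is a placeholder; with a recent virt group the option below should come back as 'on'):

# gluster volume set data group virt
# gluster volume get data performance.strict-o-direct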


Best Regards,
Strahil Nikolov 






On Wednesday, 14 October 2020 at 12:30:42 GMT+3, Jarosław Prokopowski wrote:





Thanks. I will get rid of multipath.

I did not set performance.strict-o-direct specifically, only changed
the ownership of the volume to vdsm.kvm and applied the virt group.

Now I see performance.strict-o-direct was off. Could it be the reason for the
data loss?
Direct I/O is enabled in oVirt by the gluster mount option "-o
direct-io-mode=enable", right?

Below is full list of the volume options.


Option                                  Value                                  
--                                  -                                  
cluster.lookup-unhashed                on                                      
cluster.lookup-optimize                on                                      
cluster.min-free-disk                  10%                                    
cluster.min-free-inodes                5%                                      
cluster.rebalance-stats                off                                    
cluster.subvols-per-directory          (null)                                  
cluster.readdir-optimize                off                                    
cluster.rsync-hash-regex                (null)                                  
cluster.extra-hash-regex                (null)                                  
cluster.dht-xattr-name                  trusted.glusterfs.dht                  
cluster.randomize-hash-range-by-gfid    off                                    
cluster.rebal-throttle                  normal                                  
cluster.lock-migration                  off                                    
cluster.force-migration                off                                    
cluster.local-volume-name              (null)                                  
cluster.weighted-rebalance              on                                      
cluster.switch-pattern                  (null)                                  
cluster.entry-change-log                on                                      
cluster.read-subvolume                  (null)                                  
cluster.read-subvolume-index            -1                                      
cluster.read-hash-mode                  1                                      
cluster.background-self-heal-count      8                                      
cluster.metadata-self-heal              off                                    
cluster.data-self-heal                  off                                    
cluster.entry-self-heal                off                                    
cluster.self-heal-daemon                on                                      
cluster.heal-timeout                    600                                    
cluster.self-heal-window-size          1                                      
cluster.data-change-log                on                                      
cluster.metadata-change-log            on                                      
cluster.data-self-heal-algorithm        full                                    
cluster.eager-lock                      enable                                  
disperse.eager-lock                    on                                      
disperse.other-eager-lock              on                                      
disperse.eager-lock-timeout            1                                      
disperse.other-eager-lock-timeout      1                                      
cluster.quorum-type                    auto                                    
cluster.quorum-count                    (null)                                  
cluster.choose-local                    off                                    
cluster.self-heal-readdir-size          1KB                                    
cluster.post-op-delay-secs              1                                      
cluster.ensure-durability              on                                      
cluster.consistent-metadata            no                                      
cluster.heal-wait-queue-length          128                                    
cluster.favorite-child-policy          none                                    
cluster.full-lock                      yes                                    
diagnostics.latency-measurement        off                                    
diagnostics.dump-fd-stats              off                                    
diagnostics.count-fop-hits              off                                    

[ovirt-users] Re: Number of data bricks per oVirt HCI host

2020-10-13 Thread Strahil Nikolov via Users
Imagine you have a host with 60 spinning disks -> I would recommend splitting
it into groups of 10/12 disks, and these groups will represent several bricks (6/5).

Keep in mind that when you start using many bricks (some articles state hundreds, but
no exact number was given), you should consider brick multiplexing
(cluster.brick-multiplex).

So, you can use as many bricks as you want, but each brick requires CPU time
(a separate thread), a TCP port number and memory.
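If you do go that route, brick multiplexing is a cluster-wide toggle; a hedged example of enabling it:

# gluster volume set all cluster.brick-multiplex on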

In my setup I use multiple bricks in order to spread the load via LACP over 
several small (1GBE) NICs.


The only "limitation" is to have your data on separate hosts , so when you 
create the volume it is extremely advisable that you follow this model:

hostA:/path/to/brick
hostB:/path/to/brick
hostC:/path/to/brick
hostA:/path/to/brick2
hostB:/path/to/brick2
hostC:/path/to/brick2
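As an illustration only (the host names and brick paths are the placeholders from the model above), a 2x3 distributed-replicated volume following that layout could be created like this:

# gluster volume create data_vol replica 3 \
    hostA:/path/to/brick hostB:/path/to/brick hostC:/path/to/brick \
    hostA:/path/to/brick2 hostB:/path/to/brick2 hostC:/path/to/brick2
# gluster volume start data_vol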

In JBOD mode, Red Hat supports only 'replica 3' volumes - just to keep that in
mind.

From my perspective, JBOD is suitable for NVMes/SSDs, while spinning disks
should be in a RAID of some type (maybe RAID10 for performance).


Best Regards,
Strahil Nikolov






On Wednesday, 14 October 2020 at 06:34:17 GMT+3, C Williams wrote:





Hello,

I am getting some questions from others on my team.

I have some hosts that could provide up to 6 JBOD disks for oVirt data (not 
arbiter) bricks 

Would this be workable / advisable? I'm under the impression there should not
be more than 1 data brick per HCI host.

Please correct me if I'm wrong.

Thank You For Your Help !


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGZISNFLEG3GJPDQGWTT7TWRPPAMLPFQ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XP3KR6S47ZIVK3K3AWIXCJQG7ZKTTO7Q/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-13 Thread Strahil Nikolov via Users
One recommendation is to get rid of the multipath for your SSD.
Replica 3 volumes are quite resilient and I'm really surprised it happened to 
you.

For the multipath stuff, you can create something like this:
[root@ovirt1 ~]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
   wwid Crucial_CT256MX100SSD1_14390D52DCF5
}

As you are running multipath already, just run the following to get the wwid
of your SSD:
multipath -v4 | grep 'got wwid of'
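A hedged follow-up once the blacklist entry is in place (assuming the stock multipathd service unit):

# systemctl reload multipathd.service    # re-reads multipath.conf and conf.d/
# multipath -ll                          # the blacklisted SSD should no longer be listed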

What were the gluster vol options you were running with? oVirt is running the
volume with 'performance.strict-o-direct' and Direct I/O, so you should not
lose any data.


Best Regards,
Strahil Nikolov



 





On Tuesday, 13 October 2020 at 16:35:26 GMT+3, Jarosław Prokopowski wrote:





Hi Nikolov,

Thanks for the very interesting answer :-)

I do not use any raid controller. I was hoping glusterfs would take care of
fault tolerance, but apparently it failed.
I have one Samsung 1TB SSD drive in each server for VM storage. I see it is of
type "multipath". There is an XFS filesystem over standard LVM (not thin).
Mount options are: inode64,noatime,nodiratime
SELinux was in permissive mode.

I must read more about the things you described, as I have never dived into them.
Please let me know if you have any suggestions :-)

Thanks a lot!
Jarek



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBIDHY6P3KKTXFMPXP32YQ2FDZNXDB4L/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/476TCRSMAV2T4FKER4LLN2EPSEZRE7SH/


[ovirt-users] Re: Ovirt 4.4/ Centos 8 issue with nfs?

2020-10-12 Thread Strahil Nikolov via Users
I have seen a lot of users use anongid=36,anonuid=36,all_squash to force
the vdsm:kvm ownership on the system.
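For illustration (the export path is a placeholder; uid/gid 36 = vdsm:kvm), an /etc/exports line using that approach might look like:

/exports/ovirt_data   *(rw,sync,all_squash,anonuid=36,anongid=36)

Apply it on the NFS server with "exportfs -ra".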

Best Regards,
Strahil Nikolov






On Monday, 12 October 2020 at 21:40:42 GMT+3, Amit Bawer wrote:







On Mon, Oct 12, 2020 at 9:33 PM Amit Bawer  wrote:
> 
> 
> On Mon, Oct 12, 2020 at 9:12 PM Lee Hanel  wrote:
>> my /etc/exports looks like:
>> (rw,async,no_wdelay,crossmnt,insecure,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
> The anongid,anonuid options could be failing the qemu user access check, 
> 
> Is there a special need to have them for the nfs shares for ovirt?
> I'd suggest to specify the exports for ovirt on their own 
> /export/path1   *(rw,sync,no_root_suqash)
> /export/path2   *(rw,sync,no_root_suqash)
mind the typo "squash": 
/export/path2   *(rw,sync,no_root_squash) 

> ...
> 
>>  
>> also to note, as vdsm I can create files/directories on the share.
>> 
>> 
>> On Mon, Oct 12, 2020 at 12:34 PM Amit Bawer  wrote:
>>>
>>>
>>>
>>> On Mon, Oct 12, 2020 at 7:47 PM  wrote:

 ok, I think that the selinux context might be wrong?   but I saw nothing 
 in the audit logs about it.

 drwxr-xr-x. 1 vdsm kvm system_u:object_r:nfs_t:s0 40 Oct  8 17:19 /data

 I don't see in the ovirt docs what the selinux context needs to be.  Is 
 what you shared as an example the correct setting?
>>>
>>> It's taken from a working nfs setup,
>>> what is the /etc/exports (or equiv.) options settings on the server?
>>>
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OSGBFRHYC3AMI2E3ZYRRBIAJINYOIBG/
>> 
>> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VGLS62COSG774PI6JJKP44JTL6QQWRHW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O4R6EWTDXC77F22F464SYDZKYSCTJGER/


[ovirt-users] Re: oVirt-node

2020-10-12 Thread Strahil Nikolov via Users
Hi Badur,

Theoretically it's possible, as oVirt is just a management layer.

You can use 'virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' as an alias
for virsh, and then you will be able to "virsh define yourVM.xml" & "virsh start
yourVM".

This is also suitable for starting a VM during the Engine's downtime.

Best Regards,
Strahil Nikolov










On Monday, 12 October 2020 at 13:36:31 GMT+3, Budur Nagaraju wrote:





Hi 

Is there a way to deploy  vms on the ovirt node without using the oVirt engine?

Thanks,
Nagaraju
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FEGDT6G6P3D4GEPXFKWECUVO33H73YH5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GX67ICZ2QCUB246GNFOGAWZLR5WPVBLN/


[ovirt-users] Re: oVirt 4.4 - HA Proxy Fencing

2020-10-11 Thread Strahil Nikolov via Users
Well, there are a lot of Red Hat solutions about that one.

You will need a user in oVirt that will be used to restart the VMs.
In my case, I called it 'fencerdrbd' and it has been granted 'UserRole'
permissions on the systems in the pacemaker cluster.


Here is my stonith device , but keep in mind that I added the engine's CA to 
the VMs so I use 'ssl_secure=1':


[root@drbd1 ~]# pcs stonith show ovirt_FENCE
 Resource: ovirt_FENCE (class=stonith type=fence_rhevm)
  Attributes: ipaddr=engine.localdomain login=fencerdrbd@internal 
passwd= 
pcmk_host_map=drbd1.localdomain:drbd1;drbd2.localdomain:drbd2;drbd3.localdomain:drbd3
 power_wait=5 ssl=1 ssl_secure=1
  Operations: monitor interval=86400s (ovirt_FENCE-monitor-interval-86400s)

The VMs hostname to VM name mapping is done via pcmk_host_map option.

Keep in mind that due to VM live migration, it is nice to have the token and
consensus timeouts increased. Mine are 1 for token and 12000 for consensus (3-VM
cluster); in your case (2 nodes) 8000 would be OK.
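These values live in the totem section of corosync.conf; a sketch (the numbers are placeholders in milliseconds - the "1" above looks truncated, so use your own values):

totem {
    token: 10000
    consensus: 12000
}

One way to propagate and apply the change is "pcs cluster sync" followed by "pcs cluster reload corosync".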

If you use the 'admin@internal' user - you need to use '--disable-http-filter' :
# pcs stonith update fence_device disable_http_filter=1

Extra info can be found at: https://access.redhat.com/solutions/3093891

Best Regards,
Strahil Nikolov





On Sunday, 11 October 2020 at 18:41:25 GMT+3, Jeremey Wise wrote:






I have a pair of nodes which service DNS / NTP / FTP / AD /Kerberos / IPLB etc..

ns01, ns02

These two "infrastructure VMs have HA Proxy and pacemaker and I have set to 
have "HA" within ovirt and node affinity.

But.. within HAProxy, the nodes use to be able to call the STONITH function of 
KVM to reset a node if / as needed.

Fence Device: "stonith:fence_virsh"
But with oVirt this function no longer works.

For VMs and other applications which need STONITH functions, how would this be 
done with oVirt.

I would assume the call would be to oVirt-Engine now, but not seeing 
documentation on HA-Proxy / Pacemaker to use oVirt.

Can someone point me in the right direction?


-- 
penguinpages
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B4KLZLPAIZMUMUEIUOSPLEI23UAGZVJM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/46FACPOSPGBIRDHQC67WZQA3O2CFYESN/


[ovirt-users] Re: Gluster volume not responding

2020-10-11 Thread Strahil Nikolov via Users
Hi Simon,

Usually it is the network, but you need real-world data. I would open screen
sessions and run ping continuously. Something like this:

while true; do echo -n "$(date) "; timeout -s 9 1 ping -c 1 ovirt2 | grep 
icmp_seq; sleep 1; done | tee -a /tmp/icmp_log

Are all systems in the same network ?
What about dns resolution - do you have entries in /etc/hosts ?


Best Regards,
Strahil Nikolov


On Sunday, 11 October 2020 at 11:54:47 GMT+3, Simon Scott wrote:







Thanks Strahil.




I have found between 1 & 4 Gluster peer rpc-clnt-ping timer expired messages in 
the rhev-data-center-mnt-glusterSD-hostname-strg:_pltfm_data01.log on the 
storage network IP. Of the 6 Hosts only 1 does not have these timeouts.




Fencing has been disabled but can you identify which logs are key to 
identifying the cause please.




It's a bonded (bond1) 10GB ovirt-mgmt logical network and Prod VM VLAN 
interface AND a bonded (bond2) 10GB Gluster storage network. 

Dropped packets are seen incrementing in the vdsm.log but neither ethtool -S or 
kernel logs are showing dropped packets. I am wondering if they are being 
dropped due to the ring buffers being small.




Kind Regards




Shimme





 
From: Strahil Nikolov 
Sent: Thursday 8 October 2020 20:40
To: users@ovirt.org ; Simon Scott 
Subject: Re: [ovirt-users] Gluster volume not responding 
 



> Every Monday and Wednesday morning there are gluster connectivity timeouts,
> but all checks of the network and network configs are ok.

Based on this one I make the following conclusions:
1. The issue is recurring
2. You most probably have a network issue

Have you checked the following:
- are there any ping timeouts between fuse clients and gluster nodes
- Have you tried to disable fencing and check the logs after the issue reoccurs
- Are you sharing Backup and Prod networks? Is it possible for some backup/other
production load in your environment to "black-out" your oVirt?
- Have you checked the gluster cluster's logs for anything meaningful?

Best Regards,
Strahil Nikolov



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U527TGUQR6RV7Z426NWMO3K4OXQJABCM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXYVXMBCPU5UCTQVVHWQ3PLL2RHCJJ7G/


[ovirt-users] Re: too many glitches while moving storage domain

2020-10-11 Thread Strahil Nikolov via Users
Hi Jiri,

I already opened a feature request,
https://bugzilla.redhat.com/show_bug.cgi?id=1881457, that is about something
similar.

Can you check if your case was similar and update the request?

Best Regards,
Strahil Nikolov






On Saturday, 10 October 2020 at 23:48:01 GMT+3, Jiří Sléžka wrote:





Hi,

today I started one not so common operation and I would like to share my
(not so good) experience.

I have 4 old Opteron G3 hosts which creates 1 DC with 1 cluster with 4.2
compatibility. Then I got 2 newer Intel based hosts which creates
separate DC and cluster. I use one shared FC storage with few LUNs for
all the stuff. Intel cluster is selfhosted with oVirt 4.4.2 where I
migrated original standalone oVirt 4.3.10 manager.

So I have 2 DCs, one with 4.2 and one with 4.4 compatibility.

Now the funny part... I had 2 LUNs connected to old 4.2 DC. My intention
was detach one of them and attach it to the new DC.

The first pain was that some of the VMs had their disks on both LUNs. This is
indicated late in the process and without a hint as to which VMs they are. So I
had to activate the LUN in the old DC and try to find those VMs (the search string
in the VMs tab "Storage = one and Storage = two" does not seem to work). Ok, it
took two or three rounds... Then, also late in the process, there was a
problem that one VM had a previewed snapshot, so another round with
activation of the LUN in the old DC... Then I was able to detach and import the LUN into the
new DC. Nice, but with a warning that the LUN has 4.2 compatibility, which will
be converted to 4.4, and there is no way back to connect it to the old DC...
It is logical but very scary if something goes wrong... but it did not
in my case :-)

The LUN is connected in the new DC. Now I had to import the VMs. Most VMs were
imported ok, but two of them were based on a template which resides on the other
LUN. This was not indicated during detaching! It looks like I cannot move a
template from storage to storage any other way than through an Export storage domain
(which I don't have at this moment) or through OVA export, for which I
don't have enough free storage space on the hosts. It's a trap! :-) Btw. there
is no check for free space when starting an export to OVA (the template uses a
preallocated disk). The export task still runs but there is no free space
on the host... and probably no way to cancel it from the manager :-(

Ok, I had most of the VMs imported. The last really strange thing is that I
lost one VM during import. It is not listed in Virtual Machines, nor in the VM
import tab on the storage, nor in the Disk tab... That VM was an Aruba migration
tool and was imported from an OVN image.

In fact there are two disks in the Disk import tab; one of them has no
Alias/description and was created today around the time I started working on
this migration. The second one has the alias "vmdisk1" and is a few months
older, but I have no idea if it is the lost VM...

Sorry for the long story; the TL;DR version could be: there are glitches in some
(not so common) workflows...

Cheers,

Jiri

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3RSL5Y2BDX2OWYXHVWJXZV4C2XA5VMBX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IWZ7RKOVHONTWB7M32RWX6TCFXN34X3A/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-11 Thread Strahil Nikolov via Users
Hi Jaroslaw,

That point was from someone else. I don't think that gluster has such a weak
point. The only weak point I have seen is the infrastructure it relies on top of,
and of course the built-in limitations it has.

You need to verify the following:
- mount options are important. Using 'nobarrier' without raid-controller
protection is devastating. Also, I use the following option when using gluster +
SELinux in enforcing mode:

context=system_u:object_r:glusterd_brick_t:s0 - it tells the kernel what the
SELinux context is on all files/dirs in the gluster brick, and this reduces I/O
requests to the bricks

My mount options are:
noatime,nodiratime,inode64,nouuid,context="system_u:object_r:glusterd_brick_t:s0"

- Next is your FS - if you use a HW raid controller, you need to specify
sunit= and swidth= for 'mkfs.xfs' (and don't forget the '-i size=512').
This tells XFS about the hardware beneath.
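A hypothetical example (the stripe geometry and LV path are assumptions - use your controller's real values): a RAID6 set with a 256 KiB stripe unit and 10 data disks could be formatted as

# mkfs.xfs -f -i size=512 -d su=256k,sw=10 /dev/gluster_vg/brick_lv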

- If you use thin LVM, you need to be sure that the '_tmeta' LV of the
thin pool LV is not on top of a VDO device, as it doesn't dedupe very well.
I'm using VDO in 'emulate512' as my 'PHY-SEC' is 4096 and oVirt doesn't like it
:) . You can check yours via 'lsblk -t'.

- Configure and tune your VDO. I think that 1 VDO = 1 fast disk (NVMe/SSD), as
I'm not very good at tuning VDO. If you need dedupe - check Red Hat's
documentation about the indexing, as the defaults are not optimal.

- Next is the disk scheduler. In case you use NVMe - the Linux kernel is taking
care of it, but for SSDs and large HW arrays - you can enable multiqueue
and switch to 'none' via udev rules. Of course, testing is needed for every
prod environment :)
Also consider using the noop/none I/O scheduler in the VMs, as you don't want to
reorder I/O requests at the VM level, just at the host level.
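As a sketch (the KERNEL match is an assumption - adjust it to your devices; 'none' requires blk-mq to be enabled on EL7):

# cat /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"

Then reload with "udevadm control --reload" and "udevadm trigger".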

- You can set your CPU to avoid switching to lower C states -> that adds extra 
latency for the host/VM processes
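One hedged way to do that on RHEL/CentOS hosts is a latency-oriented tuned profile:

# tuned-adm profile latency-performance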

- Transparent Huge Pages can be a real problem, especially with large VMs.
oVirt 4.4.x should now support native huge and gigantic pages, which will reduce
the stress on the OS.

- vm.swappiness, vm.dirty_background_*, vm.dirty_* settings. You can check
what RH Gluster Storage uses in the redhat-storage-server rpms at
ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/

They control when the system starts flushing dirty memory to disk
and when it blocks processes until all memory is flushed.
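Purely illustrative values (the real ones ship in the redhat-storage-server/tuned profiles referenced above):

# cat /etc/sysctl.d/99-gluster-tuning.conf
vm.swappiness = 10
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

# sysctl --system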


Best Regards,
Strahil Nikolov








On Saturday, 10 October 2020 at 18:18:55 GMT+3, Jarosław Prokopowski wrote:





Thanks Strahil 
The data center is remote, so I will definitely ask the lab guys to ensure the
switch is connected to a battery-backed power socket.
So gluster's weak point is actually the switch in the network? Can it have
difficulty finding out which version of the data is correct after the switch was
off for some time?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFP2FX2YRAPOH3FPS6MBUYD6KXD55VIA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5G53IXHEF2IPVSGYXATOD6NN5IDAB2YE/


[ovirt-users] Re: problems installing standard Linux as nodes in 4.4

2020-10-10 Thread Strahil Nikolov via Users
I guess you tried to ssh to the HostedEngine and then ssh to the host, right?

Best Regards,
Strahil Nikolov






On Saturday, 10 October 2020 at 02:28:35 GMT+3, Gianluca Cecchi wrote:





On Fri, Oct 9, 2020 at 7:12 PM Martin Perina  wrote:
> 
> 
> Could you please share with us all logs from engine gathered by logcollector? 
> We will try to find out any clue what's wrong in your env ...
> 
> Thanks,
> Martin
> 

I will try to collect them.
In the meantime I've found that SSH could be involved in some way.

When I add the host and get the immediate failure, and apparently nothing
happens at all, I see these two lines in /var/log/ovirt-engine/server.log

2020-10-09 18:15:09,369+02 WARN  
[org.apache.sshd.client.session.ClientConnectionService] 
(sshd-SshClient[7cb54873]-nio2-thread-1) 
globalRequest(ClientConnectionService[ClientSessionImpl[root@ov200/10.4.192.32:22]])[hostkeys...@openssh.com,
 want-reply=false] failed (SshException) to process: EdDSA provider not 
supported
2020-10-09 18:15:09,699+02 WARN  
[org.apache.sshd.client.session.ClientConnectionService] 
(sshd-SshClient[2cbceeab]-nio2-thread-1) 
globalRequest(ClientConnectionService[ClientSessionImpl[root@ov200/10.4.192.32:22]])[hostkeys...@openssh.com,
 want-reply=false] failed (SshException) to process: EdDSA provider not 
supported

Could it be that the embedded ssh client is not able to connect to the CentOS
8.2 host for some reason?

On the host, at the moment when I try to add it, I again see two sessions opened and
immediately closed (tried several times), e.g. in the timeframe above I have:

Oct  9 18:15:09 ov200 systemd-logind[1237]: New session 41 of user root.
Oct  9 18:15:09 ov200 systemd[1]: Started Session 41 of user root.
Oct  9 18:15:09 ov200 systemd-logind[1237]: Session 41 logged out. Waiting for 
processes to exit.
Oct  9 18:15:09 ov200 systemd-logind[1237]: Removed session 41.
Oct  9 18:15:09 ov200 systemd-logind[1237]: New session 42 of user root.
Oct  9 18:15:09 ov200 systemd[1]: Started Session 42 of user root.
Oct  9 18:15:09 ov200 systemd-logind[1237]: Session 42 logged out. Waiting for 
processes to exit.
Oct  9 18:15:09 ov200 systemd-logind[1237]: Removed session 42.

Anyway, at the sshd service level it seems it is ok on the host:

journalctl -u sshd.service has

Oct 09 18:15:09 ov200 sshd[13379]: Accepted password for root from 10.4.192.43 
port 46008 ssh2
Oct 09 18:15:09 ov200 sshd[13379]: pam_unix(sshd:session): session opened for 
user root by (uid=0)
Oct 09 18:15:09 ov200 sshd[13379]: pam_unix(sshd:session): session closed for 
user root
Oct 09 18:15:09 ov200 sshd[13398]: Accepted password for root from 10.4.192.43 
port 46014 ssh2
Oct 09 18:15:09 ov200 sshd[13398]: pam_unix(sshd:session): session opened for 
user root by (uid=0)
Oct 09 18:15:09 ov200 sshd[13398]: pam_unix(sshd:session): session closed for 
user root

On the host I have not customized anything ssh related:

[root@ov200 ssh]# ps -ef|grep sshd
root        1274       1  0 Oct08 ?        00:00:00 /usr/sbin/sshd -D 
-oCiphers=aes256-...@openssh.com,chacha20-poly1...@openssh.com,aes256-ctr,aes256-cbc,aes128-...@openssh.com,aes128-ctr,aes128-cbc
 
-oMACs=hmac-sha2-256-...@openssh.com,hmac-sha1-...@openssh.com,umac-128-...@openssh.com,hmac-sha2-512-...@openssh.com,hmac-sha2-256,hmac-sha1,umac-...@openssh.com,hmac-sha2-512
 -oGSSAPIKexAlgorithms=gss-gex-sha1-,gss-group14-sha1- 
-oKexAlgorithms=curve25519-sha256,curve25519-sha...@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
 
-oHostKeyAlgorithms=rsa-sha2-256,rsa-sha2-256-cert-...@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-...@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-...@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-...@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-...@openssh.com,ssh-ed25519,ssh-ed25519-cert-...@openssh.com,ssh-rsa,ssh-rsa-cert-...@openssh.com
 
-oPubkeyAcceptedKeyTypes=rsa-sha2-256,rsa-sha2-256-cert-...@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-...@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-...@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-...@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-...@openssh.com,ssh-ed25519,ssh-ed25519-cert-...@openssh.com,ssh-rsa,ssh-rsa-cert-...@openssh.com
 
-oCASignatureAlgorithms=rsa-sha2-256,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,rsa-sha2-512,ecdsa-sha2-nistp521,ssh-ed25519,ssh-rsa

and in sshd_config

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

Can I replicate the command that the engine would run on host through ssh?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-09 Thread Strahil Nikolov via Users
Based on the logs you shared, it looks like a network issue - but it could
always be something else.
If you ever experience a situation like that again, please share the logs
immediately and add the gluster mailing list, in order to get assistance with
the root cause.

Best Regards,
Strahil Nikolov






On Friday, 9 October 2020 at 16:26:14 GMT+3, Jarosław Prokopowski wrote:





Hmm, I'm not sure. I just created glusterfs volumes on LVM volumes, changed
ownership to vdsm.kvm and applied the virt group. Then I added them to oVirt as
storage for VMs.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DJEOW53SSPB4REFTJMZBVYQIDDXORLIT/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FIZN6U7XW4XXKQFZNFSQXKS3OAXLONZZ/


[ovirt-users] Re: Gluster volume not responding

2020-10-08 Thread Strahil Nikolov via Users
Hi Simon,

I doubt the system needs tuning from a network perspective.

I guess you can run some 'screen' sessions which are pinging another system and logging
everything to a file.

Best Regards,
Strahil Nikolov






On Friday, 9 October 2020 at 01:05:22 GMT+3, Simon Scott wrote:







Thanks Strahil.




I have found between 1 & 4 Gluster peer rpc-clnt-ping timer expired messages in 
the rhev-data-center-mnt-glusterSD-hostname-strg:_pltfm_data01.log on the 
storage network IP. Of the 6 Hosts only 1 does not have these timeouts.




Fencing has been disabled but can you identify which logs are key to 
identifying the cause please.




It's a bonded (bond1) 10GB ovirt-mgmt logical network and Prod VM VLAN 
interface AND a bonded (bond2) 10GB Gluster storage network. 

Dropped packets are seen incrementing in the vdsm.log but neither ethtool -S or 
kernel logs are showing dropped packets. I am wondering if they are being 
dropped due to the ring buffers being small.




Kind Regards




Shimme





 
From: Strahil Nikolov 
Sent: Thursday 8 October 2020 20:40
To: users@ovirt.org ; Simon Scott 
Subject: Re: [ovirt-users] Gluster volume not responding 
 



> Every Monday and Wednesday morning there are gluster connectivity timeouts,
> but all checks of the network and network configs are ok.

Based on this one I make the following conclusions:
1. The issue is recurring
2. You most probably have a network issue

Have you checked the following:
- are there any ping timeouts between fuse clients and gluster nodes
- Have you tried to disable fencing and check the logs after the issue reoccurs
- Are you sharing Backup and Prod networks? Is it possible for some backup/other
production load in your environment to "black-out" your oVirt?
- Have you checked the gluster cluster's logs for anything meaningful?

Best Regards,
Strahil Nikolov


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/52YVULUR3YV4AQLKPLRN3OZ3JC4V4RZO/


[ovirt-users] Re: Gluster volume not responding

2020-10-08 Thread Strahil Nikolov via Users
I have seen many "checks" that are "OK"...
Have you checked that backups are not used over the same network ?

I would disable the power management (fencing) ,so I can find out what has 
happened to the systems.


Best Regards,
Strahil Nikolov






On Thursday, 8 October 2020 at 22:43:34 GMT+3, Strahil Nikolov via Users wrote:





> Every Monday and Wednesday morning there are gluster connectivity timeouts,
> but all checks of the network and network configs are ok.

Based on this one I make the following conclusions:
1. The issue is recurring
2. You most probably have a network issue

Have you checked the following:
- are there any ping timeouts between fuse clients and gluster nodes
- Have you tried to disable fencing and check the logs after the issue reoccurs
- Are you sharing Backup and Prod networks? Is it possible for some backup/other
production load in your environment to "black-out" your oVirt?
- Have you checked the gluster cluster's logs for anything meaningful?

Best Regards,
Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TIDHSP34LVYUIDUU76OX3PFDEHL7LSWE/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HKJVBXDIN4DJ7LKFDQY6RBWFY5X2U6XW/


[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-08 Thread Strahil Nikolov via Users
Hi Jaroslaw,

it's more important to find the root cause of the data loss, as this is
definitely not supposed to happen (I myself had several power outages without
issues).

Do you keep the logs?

For now, check whether your gluster settings (gluster volume info VOL) match the
settings in the virt group (/var/lib/glusterd/groups/virt - or something like
that).


Best Regards,
Strahil Nikolov






On Thursday, 8 October 2020 at 15:16:10 GMT+3, Jarosław Prokopowski wrote:





Hi Guys,

I had a situation twice where, due to an unexpected power outage, something went
wrong and the VMs on glusterfs were not recoverable.
Gluster heal did not help and I could not start the VMs any more.
Is there a way to make such a setup bulletproof?
Does it matter which volume type I choose - raw or qcow2? Or thin provisioned
versus preallocated?
Any other advice?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRM6H2YENBP3AHQ5JWSFXH6UT6J6SDQS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7NC4BJSEUA4VGO57HJZWDMELHPMSYQG3/


[ovirt-users] Re: Gluster volume not responding

2020-10-08 Thread Strahil Nikolov via Users
> Every Monday and Wednesday morning there are gluster connectivity timeouts,
> but all checks of the network and network configs are ok.

Based on this one I make the following conclusions:
1. The issue is recurring
2. You most probably have a network issue

Have you checked the following:
- are there any ping timeouts between fuse clients and gluster nodes
- Have you tried to disable fencing and check the logs after the issue reoccurs
- Are you sharing Backup and Prod networks? Is it possible for some backup/other
production load in your environment to "black-out" your oVirt?
- Have you checked the gluster cluster's logs for anything meaningful?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TIDHSP34LVYUIDUU76OX3PFDEHL7LSWE/


[ovirt-users] Re: CPU Type / Cluster

2020-10-06 Thread Strahil Nikolov via Users
Hi Michael,

I'm running 4.3.10 and I can confirm that Opteron_G5 was not removed.
What is reported by 'virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf capabilities' 
on both hosts ?

Best Regards,
Strahil Nikolov






On Wednesday, 7 October 2020 at 00:06:08 GMT+3, Michael Jones wrote:





Thanks for the email;

unfortunately it seems "Opteron_G5" is not on the 2nd host (guessing it
was removed in 4.2 or some strange cpu compat thing);

I upgraded the CPU #consumerism,

now both are "model_EPYC-IBPB / EPYC Secure" and clustered.

Kind Regards,

Mike

On 04/10/2020 17:51, Strahil Nikolov wrote:
> Hi Mike,
>
> In order to add them to a single cluster , you should set them to Opteron_G5 
> (my FX-8350 is also there) , untill you replace the host with something more 
> modern.
>
> Of course , you can have your hosts in separate clusters - but then you won't 
> be able to live migrate your VMs.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, 3 October 2020 at 16:50:24 GMT+3, Michael Jones wrote:
>
>
>
>
>
> to get these two hosts into a cluster would i need to castrate them down
> to nehalem, or would i be able to botch the db for the 2nd host from
> "EPYC-IBPB" to "Opteron_G5"?
>
> I don't really want to drop them down to nehalem, so either I can botch
> the 2nd cpu so they are both on opteron_G5 or i'll have to buy a new CPU
> for host1 to bring it up to "EPYC-IBPB";
>
> I have;
>
> host1;
>
> # vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep
> model_ | sed 's/"//' | sort -n
> model_486
> model_Conroe
> model_cpu64-rhel6
> model_kvm32
> model_kvm64
> model_Nehalem
> model_Opteron_G1
> model_Opteron_G2
> model_Opteron_G3
> model_Opteron_G4
> model_Opteron_G5  "AMD FX(tm)-8350 Eight-Core Processor"
> model_Penryn
> model_pentium
> model_pentium2
> model_pentium3
> model_qemu32
> model_qemu64
> model_Westmere
>
> host2;
>
> # vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep
> model_ | sed 's/"//' | sort -n
> model_486
> model_Conroe
> model_Dhyana
> model_EPYC
> model_EPYC-IBPB  "AMD Ryzen 7 1700X Eight-Core Processor"
> model_kvm32
> model_kvm64
> model_Nehalem
> model_Opteron_G1
> model_Opteron_G2
> model_Opteron_G3
> model_Penryn
> model_pentium
> model_pentium2
> model_pentium3
> model_qemu32
> model_qemu64
> model_SandyBridge
> model_Westmere
>
> Thanks,
>
> Mike
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NULJX3JB736A4MHC2GX7ADDW3ZT3C37O/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D462NVX5MFSMPSD52HN342C3OK3MSV4F/


[ovirt-users] Re: oVirt Survey Autumn 2020

2020-10-06 Thread Strahil Nikolov via Users
Hello All,

can someone send me the full link (not the short one) as my proxy is blocking 
it :)

Best Regards,
Strahil Nikolov






On Tuesday, 6 October 2020 at 15:26:57 GMT+3, Sandro Bonazzola wrote:





Just a kind reminder about the survey (https://forms.gle/bPvEAdRyUcyCbgEc7) 
closing on October 18th

On Wednesday, 23 September 2020 at 11:11, Sandro Bonazzola wrote:
> As we continue to develop oVirt 4.4, the Development and Integration teams at 
> Red Hat would value insights on how you are deploying the oVirt environment.
> Please help us to hit the mark by completing this short survey.
> The survey will close on October 18th 2020. If you're managing multiple oVirt 
> deployments with very different use cases or very different deployments you 
> can consider answering this survey multiple times. 
> 
> Please note the answers to this survey will be publicly accessible.
> This survey is under oVirt Privacy Policy available at 
> https://www.ovirt.org/site/privacy-policy.html .

and the privacy link was wrong, the right one: 
https://www.ovirt.org/privacy-policy.html (no content change, only url change)

 
> 
> 
> The survey is available https://forms.gle/bPvEAdRyUcyCbgEc7
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> 
> sbona...@redhat.com   
>   
> 
> 
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
> 
>  


-- 
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA

sbona...@redhat.com   
  


Red Hat respects your work life balance. Therefore there is no need to answer 
this email out of your office hours.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IJEW35XLR6WBM45DKYMZQ2UOZRWYXHKY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYKXU7P2DNXPGZ2MOBBXVMJYA6DIND2S/


[ovirt-users] Re: engine-setup in 4.4.2 not using yum/dnf proxy?

2020-10-06 Thread Strahil Nikolov via Users
I would put it in yum.conf and also export it as the "http_proxy" & "https_proxy"
environment variables.
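For illustration (the proxy URL is a placeholder) - in /etc/dnf/dnf.conf (on EL8, /etc/yum.conf is a symlink to it):

proxy=http://proxy.example.com:3128

and in the shell environment (e.g. /etc/profile.d/proxy.sh):

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128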

Best Regards,
Strahil Nikolov






On Tuesday, 6 October 2020 at 12:39:22 GMT+3, Gianluca Cecchi wrote:





Hello,
I'm testing an upgrade from 4.3.10 to 4.4.2 for a standalone manager with a local
database environment.
I configured the new engine system as CentOS 8.2 with a proxy in
/etc/yum.conf (which is a link to /etc/dnf/dnf.conf), and that worked for all the
steps until engine-setup.
Now I get this

[root@ovmgr1 ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          Configuration files: 
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, 
/etc/ovirt-engine-setup.conf.d/10-packaging.conf, 
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
          Log file: 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20201006112458-p84x9i.log
          Version: otopi-1.9.2 (otopi-1.9.2-1.el8)
[ INFO  ] DNF Downloading 1 files, 0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloading CentOS-8 - AppStream 0.00/0.00KB
[ INFO  ] DNF Downloaded CentOS-8 - AppStream
[ INFO  ] DNF Errors during downloading metadata for repository 'AppStream':

[ ERROR ] Execution of setup failed

and I see in netstat during the phase

tcp        0      1 10.4.192.43:33418       18.225.36.18:80         SYN_SENT   

so it seems it is not using the proxy.
Do I have to put proxy info into any other file?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D5WQSM7OZNKJQK3L5CN367W2TRVZZVHZ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2UXQ6J6CAWQWV3DLAJOH3NOVADSIM54N/


[ovirt-users] Re: CEPH - Opinions and ROI

2020-10-04 Thread Strahil Nikolov via Users
> And of course I want Gluster to switch between single node, replication and
> dispersion seamlessly and on the fly, as well as much better diagnostic tools.

Actually, Gluster can switch from distributed to
replicated/distributed-replicated on the fly.
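For example (volume, host and brick names are placeholders), a single-brick distributed volume can be converted to 'replica 3' by adding two bricks on other hosts; the new bricks are then populated by self-heal (a "gluster volume heal myvol full" may be needed):

# gluster volume add-brick myvol replica 3 hostB:/path/to/brick hostC:/path/to/brick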

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZW22AMUWQB32KJ55VJA3UUYAMDCBL7MN/


[ovirt-users] Re: CPU Type / Cluster

2020-10-04 Thread Strahil Nikolov via Users
Hi Mike,

In order to add them to a single cluster, you should set them to Opteron_G5
(my FX-8350 is also there), until you replace the host with something more
modern.

Of course, you can have your hosts in separate clusters - but then you won't
be able to live migrate your VMs.

Best Regards,
Strahil Nikolov






On Saturday, 3 October 2020 at 16:50:24 GMT+3, Michael Jones wrote:





To get these two hosts into a cluster, would I need to castrate them down
to Nehalem, or would I be able to botch the DB for the 2nd host from
"EPYC-IBPB" to "Opteron_G5"?

I don't really want to drop them down to Nehalem, so either I can botch
the 2nd CPU so they are both on Opteron_G5, or I'll have to buy a new CPU
for host1 to bring it up to "EPYC-IBPB";

I have;

host1;

# vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep
model_ | sed 's/"//' | sort -n
model_486
model_Conroe
model_cpu64-rhel6
model_kvm32
model_kvm64
model_Nehalem
model_Opteron_G1
model_Opteron_G2
model_Opteron_G3
model_Opteron_G4
model_Opteron_G5  "AMD FX(tm)-8350 Eight-Core Processor"
model_Penryn
model_pentium
model_pentium2
model_pentium3
model_qemu32
model_qemu64
model_Westmere

host2;

# vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep
model_ | sed 's/"//' | sort -n
model_486
model_Conroe
model_Dhyana
model_EPYC
model_EPYC-IBPB  "AMD Ryzen 7 1700X Eight-Core Processor"
model_kvm32
model_kvm64
model_Nehalem
model_Opteron_G1
model_Opteron_G2
model_Opteron_G3
model_Penryn
model_pentium
model_pentium2
model_pentium3
model_qemu32
model_qemu64
model_SandyBridge
model_Westmere

Thanks,

Mike

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NULJX3JB736A4MHC2GX7ADDW3ZT3C37O/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7UZZ4Q5G2ZNOPTYGRN7HBV6JOPF2INTX/


[ovirt-users] Re: ovirt-engine and host certification is expired in ovirt4.0

2020-10-02 Thread Strahil Nikolov via Users
Have you tried to set the host into maintenance and then "Enroll Certificates" 
from the UI ?

Best Regards,
Strahil Nikolov






В петък, 2 октомври 2020 г., 12:27:19 Гринуич+3, momokch--- via Users 
 написа: 





hello everyone, 
my ovirt-engine and host certificates are expired. Is there any method to 
enroll/update the certificates between the engine and the hosts without shutting 
down all the VMs?
I am using oVirt 4.0.
thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFH5FZHHNSCO3R3GAAKBAFBCUKFQSBXN/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W4Q5SRXMI3SAE3PZ3GL4EENHK7GFEQYN/


[ovirt-users] Re: Question mark VMs

2020-10-02 Thread Strahil Nikolov via Users
Verify that your host is really down (or at least rebooted), and then in the UI 
you can select 'Confirm: Host has been rebooted' from the dropdown.
This should mark all of your VMs as down.


Best Regards,
Strahil Nikolov






В петък, 2 октомври 2020 г., 12:03:31 Гринуич+3, Vrgotic, Marko 
 написа: 





  


Hi oVIrt wizards,

 

One of my LocalStorage hypervisors died. The VMs do not need to be rescued.

As expected, I see them in the WebUI in the question-mark state (picture below):

 

What are the steps to clean the oVirt engine DB of these VMs and their 
Storage/Hypervisor/Cluster/DC?

 



 

Kindly awaiting your reply.

 

 

-

kind regards/met vriendelijke groeten

 

Marko Vrgotic
Sr. System Engineer @ System Administration


ActiveVideo

o: +31 (35) 6774131

e: m.vrgo...@activevideo.com
w: www.activevideo.com

 


 

 



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QCXUDBX6IGIHJKVZFBCF726K2KGTQGMD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPIBYFMARBJNDKT3ZUDWCPYCMBKIXMIV/


[ovirt-users] Re: Is it possible to change scheduler optimization settings of cluster using ansible or some other automation way

2020-10-02 Thread Strahil Nikolov via Users
What kind of setting do you want to change ?

Maybe I misunderstood you. The 'scheduling_policy' option requires a predefined 
scheduling policy, while 'scheduling_policy_properties' allows you to override the 
value of one of that policy's properties (like 'Memory').
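
A rough sketch of what I mean, assuming an existing cluster named 'Default_Cluster',
the stock 'evenly_distributed' policy and the usual 'ovirt_auth' variable (the property
names and values are only illustrative):

- name: Apply a scheduling policy and override some of its properties
  ovirt_cluster:
    auth: "{{ ovirt_auth }}"
    name: Default_Cluster
    scheduling_policy: evenly_distributed
    scheduling_policy_properties:
      - name: HighUtilization
        value: 80
      - name: CpuOverCommitDurationMinutes
        value: 2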

Best Regards,
Strahil Nikolov






В четвъртък, 1 октомври 2020 г., 18:24:14 Гринуич+3, kushagra2agar...@gmail.com 
 написа: 





@Strahil Nikolov  The 'scheduling_policy' & 'scheduling_policy_properties' options 
in the ovirt_cluster module do not allow changing the scheduler optimisation 
settings.

If possible, can you please double-check?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XX6GCULNXWA5QICGEGEE37QPEAEZEPF5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFQLSTKWEQU6RAKKXBQFWHIFLIZMTH6C/


[ovirt-users] Re: Is it possible to change scheduler optimization settings of cluster using ansible or some other automation way

2020-10-01 Thread Strahil Nikolov via Users
Based on 
'https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html'
 there is option 'scheduling_policy' & 'scheduling_policy_properties' .

Maybe that was recently introduced.

Best Regards,
Strahil Nikolov






В четвъртък, 1 октомври 2020 г., 17:24:25 Гринуич+3, kushagra2agar...@gmail.com 
 написа: 





@Strahil Nikolov.. the ovirt_cluster module doesn't seem to have a flag to change the 
scheduler optimisation settings. Can you please double-check?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPCBA5SJSNLUSTWQTEZLYDT5IBNXH3KC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QTHYDMHBOR45L3IW2XEAOYS2XLA7RDDH/


[ovirt-users] Re: ovirt-node-4.4.2 grub is not reading new grub.cfg at boot

2020-10-01 Thread Strahil Nikolov via Users
Either use 'grub2-editenv' or 'grub2-editenv - unset kernelopts' + 
'grub2-mkconfig -o /boot/grub2/grub.cfg'

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel
 

https://access.redhat.com/solutions/3710121
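
Roughly, the sequence would be (run as root, back up /boot first; on oVirt Node
double-check against the node docs before touching the bootloader):

# see what the bootloader environment currently carries
grub2-editenv - list
# drop the stale kernelopts so they get regenerated from /etc/default/grub
grub2-editenv - unset kernelopts
grub2-mkconfig -o /boot/grub2/grub.cfg
# on EFI systems the config lives at /boot/efi/EFI/centos/grub.cfg instead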

Best Regards,
Strahil Nikolov





В четвъртък, 1 октомври 2020 г., 16:12:52 Гринуич+3, Mike Lindsay 
 написа: 





Hey Folks,

I've got a bit of a strange one here. I downloaded and installed
ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
laptop and to get it to install I needed to add acpi=off to the kernel
boot param to get the installing to work (known issue with my old
laptop). After installation it was still booting with acpi=off, no
biggie (seen that happen with Centos 5,6,7 before on occasion) right,
just change the line in /etc/default/grub and run grub2-mkconfig (ran
for both efi and legacy for good measure even knowing EFI isn't used)
and reboot...done this hundreds of times without any problems.

But this time after rebooting if I hit 'e' to look at the kernel
params on boot, acpi=off is still there. Basically any changes to
/etc/default/grub are being ignored or overridden, but I'll be damned
if I can find where.

I know I'm missing something simple here, I do this all the time but
to be honest this is the first Centos 8 based install I've had time to
play with. Any suggestions would be greatly appreciated.

The drive layout is a bit weird but had no issues running fedora or
centos in the past. boot drive is a mSATA (/dev/sdb) and there is a
SSD data drive at /dev/sda... having sda installed or removed makes no
difference, and /boot is mounted where it should be, on /dev/sdb1... very
strange

Cheers,
Mike

[root@ovirt-node01 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap
rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap
noapic rhgb quiet'
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_DISABLE_OS_PROBER='true'



[root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
set pager=1

if [ -f ${config_directory}/grubenv ]; then
  load_env -f ${config_directory}/grubenv
elif [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
  set default="${next_entry}"
  set next_entry=
  save_env next_entry
  set boot_once=true
else
  set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/00_tuned ###
set tuned_params=""
set tuned_initrd=""
### END /etc/grub.d/00_tuned ###

### BEGIN /etc/grub.d/01_users ###
if [ -f ${prefix}/user.cfg ]; then
  source ${prefix}/user.cfg
  if [ -n "${GRUB2_PASSWORD}" ]; then
    set superusers="root"
    export superusers
    password_pbkdf2 root ${GRUB2_PASSWORD}
  fi
fi
### END /etc/grub.d/01_users ###

### BEGIN /etc/grub.d/08_fallback_counting ###
insmod increment
# Check if boot_counter exists and boot_success=0 to activate this behaviour.
if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
  # if countdown has ended, choose to boot rollback deployment,
  # i.e. default=1 on OSTree-based systems.
  if  [ "${boot_counter}" = "0" -o "${boot_counter}" = "-1" ]; then
    set default=1
    set boot_counter=-1
  # otherwise decrement boot_counter
  else
    decrement boot_counter
  fi
  save_env boot_counter
fi
### END /etc/grub.d/08_fallback_counting ###

### BEGIN /etc/grub.d/10_linux ###
insmod part_msdos
insmod ext2
set root='hd1,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1
--hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1
b6557c59-e11f-471b-8cb1-70c47b0b4b29
else
  search --no-floppy 

[ovirt-users] Re: CEPH - Opinions and ROI

2020-10-01 Thread Strahil Nikolov via Users
CEPH requires at least 4 nodes to be "good".
I know that Gluster is not the "favourite child" for most vendors, yet it is 
still optimal for HCI.

You can check 
https://www.ovirt.org/develop/release-management/features/storage/cinder-integration.html
 for cinder integration.

Best Regards,
Strahil Nikolov






В четвъртък, 1 октомври 2020 г., 07:36:24 Гринуич+3, Jeremey Wise 
 написа: 






I have for many years used gluster because..well.  3 nodes.. and so long as I 
can pull a drive out.. I can get my data.. and with three copies.. I have much 
higher chance of getting it.

Downsides to gluster: slower (it's my home.. meh... and I have SSD to avoid MTBF 
issues), and with VDO and thin provisioning I have not had issues.

BUT  gluster seems to be falling out of favor.  Especially as I move 
towards OCP.

So..  CEPH.  I have one SSD in each of the three servers.  so I have some space 
to play.

I googled around.. and find no clean deployment notes and guides on CEPH + 
oVirt.

Comments or ideas..

-- 
penguinpages.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTKROHYPKJOXJKAJPRL37IETMELMXCPD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLVKY6ATXLZNXCFRE6HGRBCXYLPVPU7K/


[ovirt-users] Re: VM AutoStart

2020-10-01 Thread Strahil Nikolov via Users
In EL8 there is no 'default' python - you can use either.

My choice would be ansible, because APIs change but the ansible modules get updated 
with them. If you create your own script, you will have to take care of the updates 
yourself, while with ansible you just update the relevant packages :)

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 22:55:40 Гринуич+3, Jeremey Wise 
 написа: 






As the three servers are CentOS 8 minimal installs + the oVirt HCI wizard, to keep 
them lean and mean... a couple of questions:

1) which version of python would I need for this (note in script about python 2 
but isn't that deprecated?)
[root@thor /]# yum install python
Last metadata expiration check: 2:29:38 ago on Wed 30 Sep 2020 01:18:32 PM EDT.
No match for argument: python
There are following alternatives for "python": python2, python36, python38
Error: Unable to find a match: python

2)  When you have three nodes, one is set to host the ovirt-engine as active and 
another as backup. If this is added to rc.local, then of the two nodes hosting HA 
for oVirt-engine, whichever boots first will host it (or so it seems). I think if 
I add this to both of those hosts it will not create issues. Any thoughts?



On Wed, Sep 30, 2020 at 3:23 PM Derek Atkins  wrote:
> I run it out of rc.local:
> 
> /usr/local/sbin/start_vms.py > /var/log/start_vms 2>&1 &
> 
> The script is smart enough to wait for the engine to be fully active.
> 
> -derek
> 
> On Wed, September 30, 2020 3:11 pm, Jeremey Wise wrote:
>> i would like to eventually go ansible route..  and was starting down that
>> path but this is fabulous.
>>
>> I will modify and post how it went.
>>
>> One question:  How /where do you set this saved new and delicious script
>> so
>> once oVirt-engine comes up... it runs?
>>
>> Thanks
>>
>> On Wed, Sep 30, 2020 at 2:42 PM Derek Atkins  wrote:
>>
>>> Hi,
>>>
>>> I had a script based around ovirt-shell which I re-wrote as a script
>>> around the Python SDK4 which I run on my engine during the startup
>>> sequence.  The script will wait for the engine to come up and ensure the
>>> storage domains are up before it tries to start the VMs.  Then it will
>>> go
>>> ahead and start the VMs in the specified order with specified delay
>>> and/or
>>> wait-for-up signal between them.
>>>
>>> You can find my scripts at https://www.ihtfp.org/ovirt/
>>>
>>> Or you can go the ansible route :)
>>>
>>> Enjoy!
>>>
>>> -derek
>>>
>>> On Wed, September 30, 2020 11:21 am, Jeremey Wise wrote:
>>> > When I have to shut down cluster... ups runs out etc..  I need a
>>> sequence
>>> > set of just a small number of VMs to "autostart"
>>> >
>>> > Normally I just use DNS FQND to connect to oVirt engine but as two of
>>> my
>>> > VMs  are a DNS HA cluster..  as well as NTP / SMTP /DHCP etc...  I
>>> need
>>> > those two infrastructure VMs to be auto boot.
>>> >
>>> > I looked at HA settings for those VMs but it seems to be watching for
>>> > pause
>>> > /resume.. but it does not imply or state auto start on clean first
>>> boot.
>>> >
>>> > Options?
>>> >
>>> >
>>> > --
>>> > p enguinpages
>>> > ___
>>> > Users mailing list -- users@ovirt.org
>>> > To unsubscribe send an email to users-le...@ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> > oVirt Code of Conduct:
>>> > https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives:
>>> >
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VAYHFFSANCBRN44ABBTXIYEAR3ZFCP6N/
>>> >
>>>
>>>
>>> --
>>>        Derek Atkins                 617-623-3745
>>>        de...@ihtfp.com             www.ihtfp.com
>>>        Computer and Internet Security Consultant
> 
>>>
>>>
>>
>> --
>> jeremey.w...@gmail.com
>>
> 
> 
> -- 
>        Derek Atkins                 617-623-3745
>        de...@ihtfp.com             www.ihtfp.com
>        Computer and Internet Security Consultant
> 
> 


-- 
jeremey.w...@gmail.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RDGSTXC5NEQD2NVRZHG4JP24EQDBRPSM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YFFCVSDS5JOTDV567FOJGLEODUAM5R4B/


[ovirt-users] Re: VM AutoStart

2020-10-01 Thread Strahil Nikolov via Users
As I mentioned, I would use systemd service to start the ansible play (or a 
script running it).

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 22:15:17 Гринуич+3, Jeremey Wise 
 написа: 





i would like to eventually go ansible route..  and was starting down that 
path but this is fabulous.

I will modify and post how it went.

One question:  How /where do you set this saved new and delicious script so 
once oVirt-engine comes up... it runs?

Thanks

On Wed, Sep 30, 2020 at 2:42 PM Derek Atkins  wrote:
> Hi,
> 
> I had a script based around ovirt-shell which I re-wrote as a script
> around the Python SDK4 which I run on my engine during the startup
> sequence.  The script will wait for the engine to come up and ensure the
> storage domains are up before it tries to start the VMs.  Then it will go
> ahead and start the VMs in the specified order with specified delay and/or
> wait-for-up signal between them.
> 
> You can find my scripts at https://www.ihtfp.org/ovirt/
> 
> Or you can go the ansible route :)
> 
> Enjoy!
> 
> -derek
> 
> On Wed, September 30, 2020 11:21 am, Jeremey Wise wrote:
>> When I have to shut down cluster... ups runs out etc..  I need a sequence
>> set of just a small number of VMs to "autostart"
>>
>> Normally I just use DNS FQND to connect to oVirt engine but as two of my
>> VMs  are a DNS HA cluster..  as well as NTP / SMTP /DHCP etc...  I need
>> those two infrastructure VMs to be auto boot.
>>
>> I looked at HA settings for those VMs but it seems to be watching for
>> pause
>> /resume.. but it does not imply or state auto start on clean first boot.
>>
>> Options?
>>
>>
>> --
>> p enguinpages
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VAYHFFSANCBRN44ABBTXIYEAR3ZFCP6N/
>>
> 
> 
> -- 
>        Derek Atkins                 617-623-3745
>        de...@ihtfp.com             www.ihtfp.com
>        Computer and Internet Security Consultant
> 
> 
> 


-- 
jeremey.w...@gmail.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGHXJVGACPIPIZB77KSXRFBF7S6VFEI3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSWMEE3BLR4JR5AYPYV3PAWN2LJIF6VR/


[ovirt-users] Re: Power on VM - CLI / API

2020-10-01 Thread Strahil Nikolov via Users
---
- name: Example
  hosts: localhost
  connection: local
  vars:
    ovirt_auth:
      username: 'admin@internal'
      password: 'pass'
      url: 'https://engine.localdomain/ovirt-engine/api'
      insecure: True
      ca_file: '/root/ansible/engine.ca'
 
  tasks:
    - name: Power on {{ item }} after snapshot restore
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        state: running
        name: "{{ item }}"
      loop:
        - VM1
        - VM2

Yeah, you have to watch the indentation (both Ansible and Python are a pain in the 
*** )

Best Regards,
Strahil Nikolov







В сряда, 30 септември 2020 г., 21:01:26 Гринуич+3, Jeremey Wise 
 написа: 






Can anyone post a link (with examples - most oVirt documentation lacks them) showing 
how to power on a VM via the CLI or API?

As of now I cannot login to oVirt-Engine.  No errors when I restart it..  
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/45KKF5TN5PRQ3R7MDOWIQTSYZXZRVDIZ/

BUt... I need to get VMs booted.

I tried to follow:
http://ovirt.github.io/ovirt-engine-api-model/master/
and my server's API web portal
https://ovirte01.penguinpages.local/ovirt-engine/apidoc/#/documents/003_common_concepts

And.. even get POSTMAN (real newbie at that tool but ran into how to add 
exported .pem key from portal to session issues)


# failed CLI example:   Power on VM "ns01"
###  DRAFT :: 2020-09-30

# Get key from oVirt engine and import.  Ex: from ovirte01  into server 'thor

curl -k 'https://ovirte01.penguinpages.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o ovirt01_ca.pem

sudo cp ovirt01_ca.pem /etc/pki/ca-trust/source/anchors

sudo update-ca-trust extract

 

openssl s_client -connect ovirte01.penguinpages.local:443 -showcerts < /dev/null

 

# Use key during GET list of VMs  

??  

 curl -X POST https://ovirte01.penguinpages.local/post -H 
/ovirt-engine/api/vms/ns01/start HTTP/1.1 


#

I just need to power on VM
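
For the record, what I believe the raw REST calls should look like (untested here;
PASSWORD and VM_ID are placeholders, the id coming from the search call):

# find the VM id by name
curl -k -u 'admin@internal:PASSWORD' \
  'https://ovirte01.penguinpages.local/ovirt-engine/api/vms?search=name%3Dns01'

# start it: POST an empty <action/> to the VM's start link
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<action/>' \
  'https://ovirte01.penguinpages.local/ovirt-engine/api/vms/VM_ID/start'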


-- 
penguinpages
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN4AHVRITGBFUJBYATZA2DTUEIJEX6GL/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V5XQ5LTSLAKLXRUATQOY4PHSFVC3LVQB/


[ovirt-users] Re: VM AutoStart

2020-09-30 Thread Strahil Nikolov via Users
Also consider setting a reasonable 'TimeoutStartSec=' in your systemd service 
file when you create the service...
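
Something along these lines (a sketch only - the unit name, timeout and playbook path
are placeholders):

# /etc/systemd/system/ovirt-vm-autostart.service
[Unit]
Description=Start selected VMs once the engine is up
After=ovirt-engine.service
Wants=ovirt-engine.service

[Service]
Type=oneshot
TimeoutStartSec=900
ExecStart=/usr/bin/ansible-playbook /root/ansible/start_vms.yml
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target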

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 20:18:01 Гринуич+3, Strahil Nikolov via Users 
 написа: 





I would create an ansible playbook that will be running from the engine:
1. Check the engine's health page via uri module and wait_for (maybe with a 
regex)
Healthpage is : https://engine_FQDN/ovirt-engine/services/health
2. Use ansible ovirt_vm module to start your vms in the order you want
3. Test the playbook
4. Create a oneshot systemd service that starts after 'ovirt-engine.service' and 
runs your playbook

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 18:27:13 Гринуич+3, Jeremey Wise 
 написа: 






When I have to shut down cluster... ups runs out etc..  I need a sequence set 
of just a small number of VMs to "autostart"

Normally I just use DNS FQND to connect to oVirt engine but as two of my VMs  
are a DNS HA cluster..  as well as NTP / SMTP /DHCP etc...  I need those two 
infrastructure VMs to be auto boot.

I looked at HA settings for those VMs but it seems to be watching for pause 
/resume.. but it does not imply or state auto start on clean first boot.

Options?


-- 
penguinpages
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VAYHFFSANCBRN44ABBTXIYEAR3ZFCP6N/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2TC2S56HVRDE3JDN5CGYWVVPAPOCAT2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VD3OAKTKZPUWC4RUW7RB3SPQ7JL2YH5K/


[ovirt-users] Re: Gluster Volumes - Correct Peer Connection

2020-09-30 Thread Strahil Nikolov via Users
If you can do it from the cli - use the cli, as it gives far more control than 
the UI can provide.
Usually I use the UI for monitoring and basic stuff like starting/stopping a 
brick or applying the 'virt' option group via 'Optimize for Virt store' (or whatever 
it was called).
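
For reference, the CLI equivalent of that UI option is applying the predefined 'virt'
option group, plus the vdsm/kvm ownership that oVirt expects (volume name is a
placeholder):

gluster volume set VOLNAME group virt
gluster volume set VOLNAME storage.owner-uid 36
gluster volume set VOLNAME storage.owner-gid 36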

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 19:48:21 Гринуич+3, penguin pages 
 написа: 





I have a network called "Storage"  but not called "gluster logical network"  

Front end  172.16.100.0/24 for mgmt and vms (1Gb)  "ovirtmgmt"

Back end 172.16.101.0/24 for storage (10Gb) "Storage"

and yes.. I was never able to figure out how to use the UI to create bricks.. so I 
just went to the CLI and made them.

But it would be valuable to learn the oVirt "Best Practice" way... though the HCI 
wizard setup SHOULD have done this, in that the wizard allows it and I supplied 
front-end vs back-end networks.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5COXTYU3QRPIU4YP3OTSMTA7Y4E5UGEV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WIZ6XRRTS6GDGPHHD7YWH3HTTULVUGD/


[ovirt-users] Re: VM AutoStart

2020-09-30 Thread Strahil Nikolov via Users
I would create an ansible playbook that will be running from the engine:
1. Check the engine's health page via the uri module and wait/until (maybe with a 
regex; see the sketch after this list)
The health page is: https://engine_FQDN/ovirt-engine/services/health
2. Use ansible ovirt_vm module to start your vms in the order you want
3. Test the playbook
4. Create a oneshot systemd service that starts after 'ovirt-engine.service' and 
runs your playbook
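
A rough sketch of step 1 (the engine FQDN is a placeholder; adjust retries/delay to
taste):

- name: Wait for the engine health page
  uri:
    url: "https://engine_FQDN/ovirt-engine/services/health"
    validate_certs: no
    return_content: yes
  register: health
  until: health.status == 200
  retries: 60
  delay: 10
  # optionally also match the returned text, e.g. the DB status string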

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 18:27:13 Гринуич+3, Jeremey Wise 
 написа: 






When I have to shut down cluster... ups runs out etc..  I need a sequence set 
of just a small number of VMs to "autostart"

Normally I just use DNS FQND to connect to oVirt engine but as two of my VMs  
are a DNS HA cluster..  as well as NTP / SMTP /DHCP etc...  I need those two 
infrastructure VMs to be auto boot.

I looked at HA settings for those VMs but it seems to be watching for pause 
/resume.. but it does not imply or state auto start on clean first boot.

Options?


-- 
penguinpages
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VAYHFFSANCBRN44ABBTXIYEAR3ZFCP6N/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2TC2S56HVRDE3JDN5CGYWVVPAPOCAT2/


[ovirt-users] Re: Replica Question

2020-09-30 Thread Strahil Nikolov via Users
In your case it seems reasonable, but you should test the 2 stripe sizes (128K 
vs 256K) before running in production. The good thing about replica volumes is 
that you can remove a brick, recreate it from the cli and then add it back.
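
A sketch of that remove/re-add cycle for a replica 3 volume (volume name, host and
brick path are placeholders - double-check before running against live data):

# drop one data brick, reducing the replica count
gluster volume remove-brick VOLNAME replica 2 serverB:/gluster_bricks/VOLNAME/brick force
# wipe and recreate the brick filesystem/directory on serverB, then add it back
gluster volume add-brick VOLNAME replica 3 serverB:/gluster_bricks/VOLNAME/brick
gluster volume heal VOLNAME full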

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 17:03:11 Гринуич+3, C Williams 
 написа: 





Hello,

I am planning to use (6) 1 TB disks in a hardware RAID 6 array which will yield 
4 data disks per oVirt Gluster brick.

However. 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
 ---  states 

"For RAID 6, the stripe unit size must be chosen such that the full stripe size 
(stripe unit * number of data disks) is between 1 MiB and 2 MiB, preferably in 
the lower end of the range. Hardware RAID controllers usually allow stripe unit 
sizes that are a power of 2. For RAID 6 with 12 disks (10 data disks), the 
recommended stripe unit size is 128KiB"

In my case,  each  oVirt Gluster Brick would have 4 data disks which would mean 
a 256K hardware RAID stripe size ( by what I see above)  to get the full stripe 
size unit to above 1 MiB  ( 4 data disks X 256K Hardware RAID stripe = 1024 K )

Could I use a smaller stripe size ex. 128K -- since most of the oVirt virtual 
disk traffic will be small sharded files and a 256K stripe would seem to me to 
be pretty big.for this type of data ?

Thanks For Your Help !!





On Tue, Sep 29, 2020 at 8:22 PM C Williams  wrote:
> Strahil,
> 
> Once Again Thank You For Your Help !  
> 
> On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov  
> wrote:
>> One important step is to align the XFS to the stripe size * stripe width. 
>> Don't miss it or you might have issues.
>> 
>> Details can be found at: 
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> В вторник, 29 септември 2020 г., 16:36:10 Гринуич+3, C Williams 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> Hello,
>> 
>> We have decided to get a 6th server for the install. I hope to set up a 2x3 
>> Distributed replica 3 .
>> 
>> So we are not going to worry about the "5 server" situation.
>> 
>> Thank You All For Your Help !!
>> 
>> On Mon, Sep 28, 2020 at 5:53 PM C Williams  wrote:
>>> Hello,
>>> 
>>> More questions on this -- since I have 5 servers . Could the following work 
>>> ?   Each server has (1) 3TB RAID 6 partition that I want to use for 
>>> contiguous storage.
>>> 
>>> Mountpoint for RAID 6 partition (3TB)  /brick  
>>> 
>>> Server A: VOL1 - Brick 1                                  directory  
>>> /brick/brick1 (VOL1 Data brick)                                             
>>>                                                         
>>> Server B: VOL1 - Brick 2 + VOL2 - Brick 3      directory  /brick/brick2 
>>> (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
>>> Server C: VOL1 - Brick 3                                 directory 
>>> /brick/brick3  (VOL1 Data brick)  
>>> Server D: VOL2 - Brick 1                                 directory 
>>> /brick/brick1  (VOL2 Data brick)
>>> Server E  VOL2 - Brick 2                                 directory 
>>> /brick/brick2  (VOL2 Data brick)
>>> 
>>> Questions about this configuration 
>>> 1.  Is it safe to use a  mount point 2 times ?  
>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
>>>  says "Ensure that no more than one brick is created from a single mount." 
>>> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
>>> 
>>> 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add 2 
>>> additional data bricks plus 1 arbitrator brick (VOL2)  to create a  
>>> distributed-replicate cluster providing ~6TB of contiguous storage ?  . 
>>>      By contiguous storage I mean that df -h would show ~6 TB disk space.
>>> 
>>> Thank You For Your Help !!
>>> 
>>> On Mon, Sep 28, 2020 at 4:04 PM C Williams  wrote:
 Strahil,
 
 Thank You For Your Help !
 
 On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov  
 wrote:
> You can setup your bricks in such way , that each host has at least 1 
> brick.
> For example:
> Server A: VOL1 - Brick 1
> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
> Server D: VOL2 - brick 1
> 
> The most optimal is to find a small system/VM for being an arbiter and 
> having a 'replica 3 arbiter 1' volume.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> В понеделник, 28 септември 2020 г., 20:46:16 Гринуич+3, Jayme 
>  написа: 
> 
> 
> 
> 
> 
> It might be possible to do something similar as described in the 
> documentation here: 
> 

[ovirt-users] Re: Is it possible to change scheduler optimization settings of cluster using ansible or some other automation way

2020-09-30 Thread Strahil Nikolov via Users
You can use this ansible module and assign your scheduling policy:

https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_cluster_module.html

Best Regards,
Strahil Nikolov






В сряда, 30 септември 2020 г., 11:36:01 Гринуич+3, Kushagra Agarwal 
 написа: 





I was hoping I could get some help with the oVirt scenario below:

Problem Statement:- 

Is it possible to change scheduler optimization settings of cluster using 
ansible or some other automation way

Description:- Is there an ansible module or any other CLI-based approach that can 
help us change the 'scheduler optimization' settings of a cluster in oVirt? The 
scheduler optimization settings of a cluster can be found under the Scheduling 
Policy tab (Compute -> Clusters -> select the cluster -> click Edit -> navigate to 
Scheduling Policy).

Any help in this will be highly appreciated.
  
Thanks,
Kushagra
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TUK5BO27MX35B76ZNFIQM6Q2BFYBV5DV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F34ROBHEZPQDWUMA4JBI2AMB6K7BZV7N/


[ovirt-users] Re: update to 4.4 fails with "Domain format is different from master storage domain format" (v4.3 cluster with V4 NFS storage domains)

2020-09-30 Thread Strahil Nikolov via Users
Are you trying to use the same storage domain ?
I hope not, as this is not supposed to be done like that. As far as I remember, 
you need fresh storage.

Best Regards,
Strahil NIkolov






В вторник, 29 септември 2020 г., 20:07:51 Гринуич+3, Sergey Kulikov 
 написа: 






Hello, I'm trying to update our hosted-engine ovirt to version 4.4 from 4.3.10 
and everything goes fine until 
hosted-engine --deploy tries to add new hosted_storage domain, we have NFS 
storage domains, and it 
fails with error:

[ INFO  ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is 
"[Domain format is different from master storage domain format]". HTTP response 
code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Domain format is different 
from master storage domain format]\". HTTP response code is 400."}

It looks like storage domains in data center should have been upgraded to V5 
when DC and cluster 
compatibility version was updated to 4.3, but looks like it was implemented in 
ovirt 4.3.3 and this 
setup was updated from 4.2 to 4.3 before 4.3.3 was released, so I ended up with 
4.3 DCs and clusters 
with V4 storage domain format.
Is there any way to convert V4 to V5 (there are running VMs on them) to be able 
upgrade to 4.4?


-- 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VCKOKLB6ZVE6674HDRWVEXI5RTXIF6WZ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IAXJFNX73K4C5P77ZRZDZRLAW4SDHEYY/


[ovirt-users] Re: adminstration portal wont complete load, looping

2020-09-30 Thread Strahil Nikolov via Users
I got the same behaviour with the AdBlock Plus add-on.

Try in incognito mode (or with plugins disabled / a fresh browser).

Best Regards,
Strahil Nikolov






В вторник, 29 септември 2020 г., 18:50:05 Гринуич+3, Philip Brown 
 написа: 





I have an odd situation:
When I go to
https://ovengine/ovirt-engine/webadmin/?locale=en_US

after authentication passes...
it shows the top banner of

oVirt OPEN VIRTUALIZATION MANAGER

and the


    Loading ...


in the center. but never gets past that. Any suggestions on how I could 
investigate and fix this?

background:
I recently updated certs to be signed wildcard certs, but this broke consoles 
somehow.
So I restored the original certs, and restarted things... but got stuck with 
this.


Interestingly, the VM portal loads fine.  But not the admin portal.



--
Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310| Fax 714.918.1325 
pbr...@medata.com| www.medata.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PSAKTDCKJD7ECNMKKI4MKPQTMAPP4AGP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O6L5EJ6CYKVOTQSCNLNCDYEVT2LZ2FLQ/


[ovirt-users] Re: Replica Question

2020-09-29 Thread Strahil Nikolov via Users
One important step is to align the XFS to the stripe size * stripe width. Don't 
miss it or you might have issues.

Details can be found at: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
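
For the 4-data-disk RAID 6 discussed here, the mkfs step would look roughly like this
(device name is a placeholder; su must match the hardware RAID stripe unit and sw the
number of data disks):

# 256K stripe unit * 4 data disks = 1 MiB full stripe
mkfs.xfs -f -i size=512 -d su=256k,sw=4 /dev/sdX
# after mounting, 'xfs_info <mountpoint>' shows the resulting sunit/swidth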

Best Regards,
Strahil Nikolov






В вторник, 29 септември 2020 г., 16:36:10 Гринуич+3, C Williams 
 написа: 





Hello,

We have decided to get a 6th server for the install. I hope to set up a 2x3 
Distributed replica 3 .

So we are not going to worry about the "5 server" situation.

Thank You All For Your Help !!

On Mon, Sep 28, 2020 at 5:53 PM C Williams  wrote:
> Hello,
> 
> More questions on this -- since I have 5 servers . Could the following work ? 
>   Each server has (1) 3TB RAID 6 partition that I want to use for contiguous 
> storage.
> 
> Mountpoint for RAID 6 partition (3TB)  /brick  
> 
> Server A: VOL1 - Brick 1                                  directory  
> /brick/brick1 (VOL1 Data brick)                                               
>                                                       
> Server B: VOL1 - Brick 2 + VOL2 - Brick 3      directory  /brick/brick2 (VOL1 
> Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
> Server C: VOL1 - Brick 3                                 directory 
> /brick/brick3  (VOL1 Data brick)  
> Server D: VOL2 - Brick 1                                 directory 
> /brick/brick1  (VOL2 Data brick)
> Server E  VOL2 - Brick 2                                 directory 
> /brick/brick2  (VOL2 Data brick)
> 
> Questions about this configuration 
> 1.  Is it safe to use a  mount point 2 times ?  
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
>  says "Ensure that no more than one brick is created from a single mount." In 
> my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
> 
> 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add 2 
> additional data bricks plus 1 arbitrator brick (VOL2)  to create a  
> distributed-replicate cluster providing ~6TB of contiguous storage ?  . 
>      By contiguous storage I mean that df -h would show ~6 TB disk space.
> 
> Thank You For Your Help !!
> 
> On Mon, Sep 28, 2020 at 4:04 PM C Williams  wrote:
>> Strahil,
>> 
>> Thank You For Your Help !
>> 
>> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov  
>> wrote:
>>> You can setup your bricks in such way , that each host has at least 1 brick.
>>> For example:
>>> Server A: VOL1 - Brick 1
>>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>>> Server D: VOL2 - brick 1
>>> 
>>> The most optimal is to find a small system/VM for being an arbiter and 
>>> having a 'replica 3 arbiter 1' volume.
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> 
>>> 
>>> 
>>> 
>>> 
>>> В понеделник, 28 септември 2020 г., 20:46:16 Гринуич+3, Jayme 
>>>  написа: 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> It might be possible to do something similar as described in the 
>>> documentation here: 
>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
>>>  -- but I'm not sure if oVirt HCI would support it. You might have to roll 
>>> out your own GlusterFS storage solution. Someone with more Gluster/HCI 
>>> knowledge might know better.
>>> 
>>> On Mon, Sep 28, 2020 at 1:26 PM C Williams  wrote:
 Jayme,
 
 Thank for getting back with me ! 
 
 If I wanted to be wasteful with storage, could I start with an initial 
 replica 2 + arbiter and then add 2 bricks to the volume ? Could the 
 arbiter solve split-brains for 4 bricks ?
 
 Thank You For Your Help !
 
 On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
> You can only do HCI in multiple's of 3. You could do a 3 server HCI setup 
> and add the other two servers as compute nodes or you could add a 6th 
> server and expand HCI across all 6
> 
> On Mon, Sep 28, 2020 at 12:28 PM C Williams  
> wrote:
>> Hello,
>> 
>> We recently received 5 servers. All have about 3 TB of storage. 
>> 
>> I want to deploy an oVirt HCI using as much of my storage and compute 
>> resources as possible. 
>> 
>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>> 
>> I have deployed replica 3s and know about replica 2 + arbiter -- but an 
>> arbiter would not be applicable here -- since I have equal storage on 
>> all of the planned bricks.
>> 
>> Thank You For Your Help !!
>> 
>> C Williams
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> 

[ovirt-users] Re: Replica Question

2020-09-29 Thread Strahil Nikolov via Users

More questions on this -- since I have 5 servers . Could the following work ?   
Each server has (1) 3TB RAID 6 partition that I want to use for contiguous 
storage.

Mountpoint for RAID 6 partition (3TB)  /brick  

Server A: VOL1 - Brick 1                                  directory  
/brick/brick1 (VOL1 Data brick)                                                 
                                                    
Server B: VOL1 - Brick 2 + VOL2 - Brick 3      directory  /brick/brick2 (VOL1 
Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
Server C: VOL1 - Brick 3                                 directory 
/brick/brick3  (VOL1 Data brick)  
Server D: VOL2 - Brick 1                                 directory 
/brick/brick1  (VOL2 Data brick)
Server E  VOL2 - Brick 2                                 directory 
/brick/brick2  (VOL2 Data brick)

Questions about this configuration 
1.  Is it safe to use a  mount point 2 times ?  
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
 says "Ensure that no more than one brick is created from a single mount." In 
my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B

As long as you keep the arbiter brick (VOL 2) separate from the data brick (VOL 
1) , it will be fine. A brick is a unique combination of Gluster TSP node + 
mount point. You can use /brick/brick1 on all your nodes and the volume will be 
fine. Using "/brick/brick1" for data brick (VOL1) and "/brick/brick1" for 
arbiter brick (VOL2) on the same host IS NOT ACCEPTABLE. So just keep the brick 
names more unique and everything will be fine. Maybe something like this will 
be easier to work with, but you can set it to anything you want as long as you 
don't use same brick in 2 volumes:
serverA:/gluster_bricks/VOL1/brick1
serverB:/gluster_bricks/VOL1/brick2
serverB:/gluster_bricks/VOL2/arbiter
serverC:/gluster_bricks/VOL1/brick3
serverD:/gluster_bricks/VOL2/brick1
serverE:/gluster_bricks/VOL2/brick2


2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add 2 
additional data bricks plus 1 arbitrator brick (VOL2)  to create a  
distributed-replicate cluster providing ~6TB of contiguous storage ?  . 
     By contiguous storage I mean that df -h would show ~6 TB disk space.

No, you either use 6 data bricks (subvol1 -> 3 data disks, subvol2 -> 3 data 
disks), or you use 4 data + 2 arbiter bricks (subvol 1 -> 2 data + 1 arbiter, 
subvol 2 -> 2 data + 1 arbiter). The good thing is that you can reshape the 
volume once you have more disks.


If you have only Linux VMs , you can follow point 1 and create 2 volumes which 
will be 2 storage domains in Ovirt. Then you can stripe (software raid 0 via 
mdadm or lvm native) your VMs with 1 disk from the first volume and 1 disk from 
the second volume.

Actually , I'm using 4 gluster volumes for my NVMe as my network is too slow. 
My VMs have 4 disks in a raid0 (boot) and striped LV (for "/").
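
For reference, inside the guest that striping looks roughly like this (device names
are illustrative - one virtual disk per gluster-backed storage domain):

pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
vgcreate vg_data /dev/vdb /dev/vdc /dev/vdd /dev/vde
lvcreate --type striped -i 4 -I 64k -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data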

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G5YGKU6IK2OBUETNH2AL2MFOGKWUKFBO/


[ovirt-users] Re: Replica Question

2020-09-28 Thread Strahil Nikolov via Users
You can setup your bricks in such way , that each host has at least 1 brick.
For example:
Server A: VOL1 - Brick 1
Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
Server D: VOL2 - brick 1

The most optimal is to find a small system/VM for being an arbiter and having a 
'replica 3 arbiter 1' volume.

Best Regards,
Strahil Nikolov





В понеделник, 28 септември 2020 г., 20:46:16 Гринуич+3, Jayme 
 написа: 





It might be possible to do something similar as described in the documentation 
here: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
 -- but I'm not sure if oVirt HCI would support it. You might have to roll out 
your own GlusterFS storage solution. Someone with more Gluster/HCI knowledge 
might know better.

On Mon, Sep 28, 2020 at 1:26 PM C Williams  wrote:
> Jayme,
> 
> Thank for getting back with me ! 
> 
> If I wanted to be wasteful with storage, could I start with an initial 
> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter 
> solve split-brains for 4 bricks ?
> 
> Thank You For Your Help !
> 
> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>> You can only do HCI in multiple's of 3. You could do a 3 server HCI setup 
>> and add the other two servers as compute nodes or you could add a 6th server 
>> and expand HCI across all 6
>> 
>> On Mon, Sep 28, 2020 at 12:28 PM C Williams  wrote:
>>> Hello,
>>> 
>>> We recently received 5 servers. All have about 3 TB of storage. 
>>> 
>>> I want to deploy an oVirt HCI using as much of my storage and compute 
>>> resources as possible. 
>>> 
>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>> 
>>> I have deployed replica 3s and know about replica 2 + arbiter -- but an 
>>> arbiter would not be applicable here -- since I have equal storage on all 
>>> of the planned bricks.
>>> 
>>> Thank You For Your Help !!
>>> 
>>> C Williams
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
>>> 
>> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CECMD2SWBSBDAFP3TFMMYWTSV3UKU72E/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DF4P6PZGSAKQZRRWSHBVPZXNJQDR6YGC/


[ovirt-users] Re: oVirt Change hosts to FQDN

2020-09-28 Thread Strahil Nikolov via Users
You cannot have 2 IPs for 2 different FQDNs.
You have to use something like: 

172.16.100.101 thor.penguinpages.local thor thorst

Fix your /etc/hosts or you should use DNS.

Best Regards,
Strahil Nikolov




В понеделник, 28 септември 2020 г., 03:41:17 Гринуич+3, Jeremey Wise 
 написа: 





when I redeployed the ovirt engine after running ovirt-hosted-engine-cleanup on 
all nodes, it deployed oVirt on the first node fine, but when I tried to add the 
other two nodes it kept failing. I got it to succeed ONLY if I used the IP instead 
of DNS (post about the error here: 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/A6Z3MRFGFSEA7IOCE6WLPEPXE536Z6DR/#A6Z3MRFGFSEA7IOCE6WLPEPXE536Z6DR
 )

But this seems to have been a bad idea. I now need to correct this.

I can't send emails with images so I will post scrape:

##

Name
Comment
Hostname/IP
Cluster
Data Center
Status
Virtual Machines
Memory
CPU
Network
SPM

medusa 
medusa host in three node HA cluster
172.16.100.103
Default_Cluster 
Default_Datacenter 
Up
0
6%
9%
0%
SPM

odin 
odin host in three node HA cluster
172.16.100.102
Default_Cluster 
Default_Datacenter 
Up
1
8%
0%
0%
Normal


thor 
thor host in three node HA cluster
thor.penguinpages.local
Default_Cluster 
Default_Datacenter 
Up
4
9%
2%
0%
Normal
##

[root@thor ~]# gluster pool list
UUID                                    Hostname                        State
83c772aa-33cd-430f-9614-30a99534d10e    odinst.penguinpages.local       
Connected
977b2c1d-36a8-4852-b953-f75850ac5031    medusast.penguinpages.local     
Connected
7726b514-e7c3-4705-bbc9-5a90c8a966c9    localhost                       
Connected

[root@thor ~]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)

[root@odin ~]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@medusa ~]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)
[root@medusa ~]#

[root@thor ~]# cat /etc/hosts
# Version: 20190730a
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Cluster node thor
172.16.100.91   thorm.penguinpages.local thorm
172.16.100.101  thor.penguinpages.local thor
172.16.101.101  thorst.penguinpages.local thorst

# Cluster node odin
172.16.100.92   odinm.penguinpages.local odinm
172.16.100.102  odin.penguinpages.local odin
172.16.101.102  odinst.penguinpages.local odinst

# Cluster node medusa
# 172.16.100.93   medusam.penguinpages.local medusam
172.16.100.103  medusa.penguinpages.local medusa
172.16.101.103  medusast.penguinpages.local medusast
172.16.100.31 ovirte01.penguinpages.local ovirte01
172.16.100.32 ovirte02.penguinpages.local ovirte02
172.16.100.33 ovirte03.penguinpages.local ovirte03
[root@thor ~]#


On Sun, Sep 27, 2020 at 1:54 PM Strahil Nikolov  wrote:
> Hi Jeremey,
> 
> I am not sure that I completely understand the problem.
> 
> Can you provide the Host details page from UI and the output of:
> 'gluster pool list' & 'gluster peer status' from all nodes ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Saturday, September 26, 2020 at 20:31:23 GMT+3, Jeremey Wise wrote:
> 
> 
> 
> 
> 
> 
> I posted that I had wiped out the oVirt-engine..  running cleanup on all 
> three nodes.  Done a re-deployment.   Then to add nodes back.. though all 
> have entries for eachother in /etc/hosts and ssh works fine via short and 
> long name.  
> 
> I added nodes back into cluster..  but had to do it via IP to get past error.
> 
> Now, if I go to create a volume via the GUI in gluster I get:
> Error while executing action Create Gluster Volume: Volume create failed: 
> rc=30800 out=() err=["Host 172_16_100_102 is not in 'Peer in Cluster' state"] 
> 
> Which seems to be related to using IP vs DNS to add gluster volumes
> https://bugzilla.redhat.com/show_bug.cgi?id=1055928
> 
> Question:  how do i fix the hosts in cluster being defined by IP vs desired 
> hostname?
> 
> 
> 
> -- 
> jeremey.w...@gmail.com
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> 

[ovirt-users] Re: MOM can't run

2020-09-27 Thread Strahil Nikolov via Users
In my case momd is static and not running:

[root@ovirt1 ~]# systemctl status mom-vdsm.service momd.service
● mom-vdsm.service - MOM instance configured for VDSM purposes
  Loaded: loaded (/usr/lib/systemd/system/mom-vdsm.service; enabled; vendor 
preset: enabled)
  Active: active (running) since нд 2020-09-27 20:58:09 EEST; 28min ago
Main PID: 6153 (python)
   Tasks: 6
  CGroup: /system.slice/mom-vdsm.service
  └─6153 python /usr/sbin/momd -c /etc/vdsm/mom.conf

сеп 27 20:58:09 ovirt1.localdomain systemd[1]: Started MOM instance configured 
for VDSM purposes.

● momd.service - Memory Overcommitment Manager Daemon
  Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor preset: 
disabled)
  Active: inactive (dead)



What is the status of mom-vdsm.service ?

Best Regards,
Strahil Nikolov






On Sunday, September 27, 2020 at 10:06:39 GMT+3, duhongyu wrote:





hi
I tried to start momd with "systemctl restart momd" but it would not run, so I ran 
momd with "/usr/sbin/momd -c /etc/momd.conf":
[root@node0 mom]# /usr/sbin/momd -c /etc/momd.conf 
2020-09-27 14:31:12,291 - mom - INFO - MOM starting
2020-09-27 14:31:12,381 - mom.HostMonitor - INFO - Host Monitor starting
2020-09-27 14:31:12,381 - mom - INFO - hypervisor interface vdsm
2020-09-27 14:31:12,690 - mom - ERROR - Unable to import hypervisor interface: 
vdsm
2020-09-27 14:31:12,707 - mom.GuestManager - INFO - Guest Manager starting: 
multi-thread
2020-09-27 14:31:12,708 - mom.GuestManager - INFO - Guest Manager ending
2020-09-27 14:31:12,708 - mom.PolicyEngine - INFO - Policy Engine starting
2020-09-27 14:31:12,709 - mom.RPCServer - INFO - RPC Server is disabled
2020-09-27 14:31:12,710 - mom - INFO - Shutting down RPC server.
2020-09-27 14:31:12,710 - mom.PolicyEngine - INFO - Policy Engine ending
2020-09-27 14:31:12,710 - mom - INFO - Waiting for RPC server thread.
2020-09-27 14:31:12,710 - mom - INFO - Waiting for policy engine thread.
2020-09-27 14:31:12,710 - mom - INFO - Waiting for guest manager thread.
2020-09-27 14:31:12,710 - mom - INFO - Waiting for host monitor thread.
2020-09-27 14:31:12,828 - mom.HostMonitor - INFO - HostMonitor is ready
2020-09-27 14:31:17,710 - mom - INFO - MOM ending


/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/vdsmInterface.py
import sys
sys.path.append('/usr/share/vdsm')
import API
import supervdsm
import logging
import traceback
from mom.HypervisorInterfaces.HypervisorInterface import HypervisorInterface, \
    HypervisorInterfaceError

After debugging momd, I found that it cannot import API and supervdsm. Can you give me some 
suggestions?
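
A rough way to check whether those imports resolve at all on the host (a sketch; momd here is python2, and vdsmInterface.py expects the modules under /usr/share/vdsm):

python2 -c 'import sys; sys.path.append("/usr/share/vdsm"); import API, supervdsm; print("vdsm interface imports OK")'
rpm -q vdsm vdsm-api     # packages that should provide those modules (names may differ by version)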
 





--
Regards
Hongyu Du



 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/64US76ZW5GVJV4XK4RC5F7MKX6OHF3ED/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QILT77KMAFLA2HP6HZNJEEPRX3NCMCSS/


[ovirt-users] Re: Recreating ISO storage domain

2020-09-27 Thread Strahil Nikolov via Users
Actually ISO domain is not necessary.

You can mount it via FUSE on a system and either use the python script (it was 
mentioned several times on the mailing list) or the API/UI to upload your ISOs 
to a data storage domain.

I think it is about time to get rid of the deprecated ISO domain.
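
A rough sketch of that workflow, assuming the old ISO domain lives on an NFS export (server and path are examples):

mount -t nfs nfs.example.com:/export/iso /mnt/old-iso
ls /mnt/old-iso/*/images/11111111-1111-1111-1111-111111111111/   # ISO domains keep the images under this fixed UUID
# then upload each ISO to a data domain via the UI (Storage -> Disks -> Upload),
# or with the SDK example script (exact path/flags depend on the SDK version):
# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --help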

Best Regards,
Strahil NIkolov






On Saturday, September 26, 2020 at 21:44:28 GMT+3, matthew.st...@fujitsu.com wrote:





  


I have created a three host oVirt cluster using 4.4.2.

 

I created an ISO storage domain to hold my collection of ISO images, and then 
decided to migrate it to a better location.

 

I placed the storage domain in maintenance mode, end then removed it.

 

When I went to recreate it at the new location, I found that the ‘ISO’ storage 
domain was no longer an option.

 

What do I need to do to re-enable it, so I can re-create the storage domain?



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YVAUMRFWO7FNVLMVU7GKHZLWFJEUJ5I/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G77LRBVCDQYDM5T3J4TFYJUQKPN2EV45/


[ovirt-users] Re: oVirt Change hosts to FQDN

2020-09-27 Thread Strahil Nikolov via Users
Hi Jeremey,

I am not sure that I completely understand the problem.

Can you provide the Host details page from UI and the output of:
'gluster pool list' & 'gluster peer status' from all nodes ?

Best Regards,
Strahil Nikolov






On Saturday, September 26, 2020 at 20:31:23 GMT+3, Jeremey Wise wrote:






I posted that I had wiped out the oVirt-engine..  running cleanup on all three 
nodes.  Done a re-deployment.   Then to add nodes back.. though all have 
entries for eachother in /etc/hosts and ssh works fine via short and long name. 
 

I added nodes back into cluster..  but had to do it via IP to get past error.

Now, if I go to create a volume via the GUI in gluster I get:
Error while executing action Create Gluster Volume: Volume create failed: 
rc=30800 out=() err=["Host 172_16_100_102 is not in 'Peer in Cluster' state"] 

Which seems to be related to using IP vs DNS to add gluster volumes
https://bugzilla.redhat.com/show_bug.cgi?id=1055928

Question:  how do i fix the hosts in cluster being defined by IP vs desired 
hostname?



-- 
jeremey.w...@gmail.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7Z7UQTQYDSJDZN5AHZCIYQECPUAIE66/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SEKMSTNTD47II22L4AIZUPB63KWIYWCR/


[ovirt-users] Re: oVirt - Engine - VM Reconstitute

2020-09-26 Thread Strahil Nikolov via Users
Importing is done from UI (Admin portal) -> Storage -> Domains -> Newly Added 
domain -> "Import VM" -> select Vm and you can import.

Keep in mind that it is easier to import if all VM disks are on the same 
storage domain (I've opened a RFE for multi-domain import).


Best Regards,
Strahil Nikolov






On Friday, September 25, 2020 at 21:41:45 GMT+3, penguin pages wrote:





Thanks for reply.. It really is appreciated.

1) Note about VM import
-> Can you provide details / an example? For me, I click on
oVirt -> Compute -> Virtual Machines -> Import
    -> Source Option (the only one making any sense would be "Export Domain"; the rest 
are unrelated, and the KVM one would need the xml, which I think is gone now).  So, if 
I choose "Export Domain", the "path" points to what file to import the VM from?


2) Note about the import from a live VM: when the engine is gone this would be 
interesting to try, but as I rebooted and re-installed the engine, I think this 
cleared any hope of getting the libvirt xml files out.
[root@medusa libvirt]# tree /var/run/libvirt/

.
├── hostdevmgr
├── interface
│   └── driver.pid
├── libvirt-admin-sock
├── libvirt-sock
├── libvirt-sock-ro
├── network
│   ├── autostarted
│   ├── driver.pid
│   ├── nwfilter.leases
│   ├── ;vdsmdummy;.xml
│   └── vdsm-ovirtmgmt.xml
├── nodedev
│   └── driver.pid
├── nwfilter
│   └── driver.pid
├── nwfilter-binding
│   └── vnet0.xml
├── qemu
│   ├── autostarted
│   ├── driver.pid
│   ├── HostedEngine.pid
│   ├── HostedEngine.xml
│   └── slirp
├── secrets
│   └── driver.pid
├── storage
│   ├── autostarted
│   └── driver.pid
├── virtlockd-sock
├── virtlogd-admin-sock
└── virtlogd-sock

10 directories, 22 files
[root@medusa libvirt]#
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VUXIOEH2V4SY3MVAOLTS2V5YQO6MZCMQ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U7YLZCNYGN5LMBUNSM4734OQYIH2GGIN/


[ovirt-users] Re: Node 4.4.1 gluster bricks

2020-09-26 Thread Strahil Nikolov via Users
Since oVirt 4.4 , the stage that deploys the oVirt node/host is adding an lvm 
filter in /etc/lvm/lvm.conf which is the reason behind that.
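
For reference, a couple of hedged ways to inspect or rebuild that filter on a 4.4 node:

grep -nE '^[[:space:]]*filter' /etc/lvm/lvm.conf   # the filter the deploy added
vdsm-tool config-lvm-filter                        # vdsm's helper that builds a filter for the devices oVirt actually uses
# and remember 'dracut -f' afterwards if you change lvm.conf, since the initramfs keeps a copy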

Best Regards,
Strahil Nikolov






On Friday, September 25, 2020 at 20:52:13 GMT+3, Staniforth, Paul wrote:







Thanks,

             the gluster volume is just a test and the main reason was to test 
the upgrade of a node with gluster bricks.




I don't know why lvm doesn't work, which is what oVirt is using.




Regards,

               Paul S.











 
From: Strahil Nikolov 
Sent: 25 September 2020 18:28
To: Users ; Staniforth, Paul 
Subject: Re: [ovirt-users] Node 4.4.1 gluster bricks 
 



Caution External Mail: Do not click any links or open any attachments unless 
you trust the sender and know that the content is safe.

>1 node I wiped it clean and the other I left the 3 gluster brick drives 
>untouch.

If the last node from the original is untouched you can:
1. Go to the old host and use 'gluster volume remove-brick <VOLNAME> replica 1 
wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
2. Remove the 2 nodes that you have kicked away:
gluster peer detach node2
gluster peer detach node3

3. Reinstall the wiped node and install gluster there
4. Create the filesystem on the brick:
mkfs.xfs -i size=512 /dev/mapper/brick_block_device
5. Mount the Gluster (you can copy the fstab entry from the working node and 
adapt it)
Here is an example:
/dev/data/data1 /gluster_bricks/data1 xfs 
inode64,noatime,nodiratime,inode64,nouuid,context="system_u:object_r:glusterd_brick_t:s0"
 0 0

6. Create the selinux label via 'semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks/data1(/.*)?"' (remove only the single quotes) and run 
'restorecon -RFvv /gluster_bricks/data1'
7. Mount the FS and create a dir inside the mount point
8. Extend the gluster volume:
'gluster volume add-brick <VOLNAME> replica 2 
new_host:/gluster_bricks/<volume>/<brick>'

9. Run a full heal
gluster volume heal <VOLNAME> full

10. Repeat again and remember to never wipe 2 nodes at a time :)


Good luck and take a look at Quick Start Guide - Gluster Docs



Best Regards,
Strahil Nikolov


To view the terms under which this email is distributed, please go to:- 
http://leedsbeckett.ac.uk/disclaimer/email/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QJ6IVFVC2PAMWW57QO5S36COYONV7XM/


[ovirt-users] Re: Node 4.4.1 gluster bricks

2020-09-25 Thread Strahil Nikolov via Users
>1 node I wiped it clean and the other I left the 3 gluster brick drives 
>untouch.

If the last node from the original is untouched you can:
1. Go to the old host and use 'gluster volume remove-brick <VOLNAME> replica 1 
wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
2. Remove the 2 nodes that you have kicked away:
gluster peer detach node2
gluster peer detach node3

3. Reinstall the wiped node and install gluster there 
4. Create the filesystem on the brick:
mkfs.xfs -i size=512 /dev/mapper/brick_block_device
5. Mount the Gluster (you can copy the fstab entry from the working node and 
adapt it)
Here is an example:
/dev/data/data1 /gluster_bricks/data1 xfs 
inode64,noatime,nodiratime,inode64,nouuid,context="system_u:object_r:glusterd_brick_t:s0"
 0 0

6. Create the selinux label via 'semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks/data1(/.*)?"' (remove only the single quotes) and run 
'restorecon -RFvv /gluster_bricks/data1'
7. Mount the FS and create a dir inside the mount point
8. Extend the gluster volume:
'gluster volume add-brick <VOLNAME> replica 2 
new_host:/gluster_bricks/<volume>/<brick>'

9. Run a full heal
gluster volume heal <VOLNAME> full

10. Repeat again and remember to never wipe 2 nodes at a time :)


Good luck and take a look at Quick Start Guide - Gluster Docs



Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5HABTIYJP2WITY2Y43XBAOSUH46EN7NS/


[ovirt-users] Re: oVirt - Engine - VM Reconstitute

2020-09-25 Thread Strahil Nikolov via Users
>Question:
>1) Can someone point me to the manual on how to re-constitute a VM and >bring 
>it back into oVirt where all "oVirt-engines" were redeployed.  It is >only 
>three or four VMs I typically care about (HA cluster and OCP >ignition/ 
>Ansible tower VM).  
Ensure that the old Engine is powered off. Next, add the storage domains in the 
new HostedEngine; inside the storage domain there is an "Import VM" tab 
which will allow you to import your VMs.

>2) How do I make sure these core VMs are able to be reconstituted.  Can I 
>>create a dedicated volume where the VMs are full provisioned, and the path 
>>structure is "human understandable". 

Usually when you power up a VM , the VM's xml is logged in the vdsm log.
Also , while the VM is running , you can find the xml of the VM in the 
/var/run/libvirt (or whatever libvirt puts it).
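
A couple of hedged ways to grab that XML while the VM is still running (run on the host that runs the VM; the VM name is an example):

virsh -r dumpxml my_vm > /tmp/my_vm.xml        # read-only libvirt connection, no credentials needed
ls /var/run/libvirt/qemu/*.xml                 # libvirt's own copy of each running domain's XML
zgrep -l '<domain' /var/log/vdsm/vdsm.log*     # vdsm also logs the domain XML when the VM is started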

>3) I know that you can backup the engine.  If I had been a smart person, >how 
>does one backup and recover from this kind of situation.    Does >anyone have 
>any guides or good articles on this?

https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf -> page 52

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5L25K7QFDYSEYQMYHVAMQL7R5MQDGL2K/


[ovirt-users] Re: oVirt host "unregistered"

2020-09-25 Thread Strahil Nikolov via Users
>"Error while executing action: Cannot add Host. Connecting to host via SSH 
>>has failed, verify that the host is reachable (IP address, routable address 
>>etc.) You may refer to the engine.log file for further details."

>Tested SSH between all nodes and works without password.
Engine is not running in the host, it is running in a VM called HostedEngine 
and that VM has to be able to reach the host over ssh.

Did you do any ssh hardening ?
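
A rough way to reproduce what the engine does (a sketch, run from inside the HostedEngine VM; the host name is an example):

ssh -v root@host1.example.com 'hostname'                      # the engine adds hosts over ssh as root, port 22 by default
tail -n 200 /var/log/ovirt-engine/engine.log | grep -i ssh    # the exact failure reason ends up in engine.log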

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGJLI7ZUWPS4DNXRJQTLKDHZWETOBJWD/


[ovirt-users] Re: Node upgrade to 4.4

2020-09-24 Thread Strahil Nikolov via Users
Have you checked the oVirt 2020 conference videos ?
There was a slot exactly on this topic- I think ansible was used for automatic 
upgrade.


I prefer the manual approach , as I have full control over the environment.

Best Regards,
Strahil Nikolov





On Friday, September 25, 2020 at 02:23:49 GMT+3, Vincent Royer via Users wrote:





It's a bit disheartening that HCI users don't have an easier upgrade path. 

So far there's a lot of 'this might work' or 'that might work', and good luck. 
Certainly doesn't inspire much confidence. 

Every time there is a survey asking what the pain points are, I say 
"updates!!", but they seem to be getting more and more complicated. 




On Thu., Sep. 24, 2020, 11:06 a.m. Jayme,  wrote:
> Interested to hear how upgrading 4.3 HCI to 4.4 goes. I've been considering 
> it in my environment but was thinking about moving all VMs off to NFS storage 
> then rebuilding oVirt on 4.4 and importing.
> 
> On Thu, Sep 24, 2020 at 1:45 PM  wrote:
>> I am hoping for a miracle like that, too.
>> 
>> In the mean-time I am trying to make sure that all variants of exports and 
>> imports from *.ova to re-attachable NFS domains work properly, in case I 
>> have to start from scratch.
>> 
>> HCI upgrades don't get the special love you'd expect after RHV's proud 
>> announcement that they are now ready to take on Nutanix and vSAN.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2HVZDUABWKNFN4IJD2ILLQF5E2DUUBU/
>> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZKWFAVDI5L2SGTAY7J4ISNRI25LRCMZ5/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V6GFJJJI4RIO3BQMZ2OO65OKUP2ASUZR/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BF6TQXWEELRESVBRQWAQKVLLXFAIDLJU/


[ovirt-users] Re: Restart oVirt-Engine

2020-09-24 Thread Strahil Nikolov via Users

>How ,without reboot of hosting system, do I restart the oVirt engine?
>
># I tried below but do not seem to effect the virtual machine
>[root@thor iso]# systemctl restart ov

Wrong system - this is most probably your KVM host , not the VM hosting the 
Engine. Usually the engine is defined during the initial setup.

># You cannot restart the VM " HostedEngine " as it responses:  
>
>Error while executing action:
>
>HostedEngine:
>    * Cannot restart VM. This VM is not managed by the engine.
That's not being done from UI. Either ssh to the hosted engine and issue a 
'reboot' and the ovirt-ha-agent on one of the hosts will bring it up, or use 
the 'hosted-engine' utility to shutdown and power up the VM.
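
For reference, a hedged sketch of the 'hosted-engine' route (run on one of the HA hosts; set global maintenance first so the HA agents don't fight you):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown          # or ssh into the engine VM and simply run 'reboot'
hosted-engine --vm-status            # wait until the VM is reported down
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status            # should eventually show the engine health as "good"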

About the engine not detecting a node up - check if the vdsm.service is running 
on the node.
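
For example (note the daemon on the hosts is actually named vdsmd):

systemctl status vdsmd.service supervdsmd.service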

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U632SIHIFLROQVCXX5GKE4KOGP2TJIEJ/


[ovirt-users] Re: oVirt 4.3 HCI ovirtmgmt vlan problem

2020-09-24 Thread Strahil Nikolov via Users
Once a host is in oVirt , you should not change the network ... or that's what 
I have been told.

You should remove the host from oVirt , do your configurations and then add the 
host back.

Best Regards,
Strahil Nikolov






On Thursday, September 24, 2020 at 01:43:40 GMT+3, wodel youchi wrote:





Hi,

I deployed a three-node HCI using oVirt 4.3 on a flat network at the 
beginning; now we need to use VLANs on the management network.

I have ovirtmgmt over bond2, and this bond will carry three VLANs: VLAN 10 for 
management, VLAN 20 for DMZ and VLAN 30 for DMZ2.

On the switch, I configured the relevant ports to support the native VLAN 
(untagged), VLAN 10 (tagged), etc.
Then I activated the tag on the ovirtmgmt network in the web UI, lost the 
connection to the hypervisors, and things got weird. I then put my machine on 
VLAN 10 and saw that two of my hypervisors had their network configuration 
modified to use VLAN 10, but not the hypervisor where the VM-engine was 
running.
I created the VLAN manually on that hypervisor, then started the VM-Manager, 
and all hosts were recognized.
Then I stopped and started the platform again. Still the same problem: two 
hosts are correct with their VLAN (bond2.10 created), but the third has no VLAN.
Doing the configuration manually works, but it does not survive a reboot. Is 
there a way to force vdsm to accept the new configuration on that faulty host?

Regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZAWDO2TJOK6WCLS5RXCEJSZRBOBNBUF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/63JVBICRSAFNQMHRIWTJH7GZIYM2YOEE/


[ovirt-users] Re: Host has to be reinstalled

2020-09-23 Thread Strahil Nikolov via Users
I guess 'yum reinstall vdsm-gluster'.


Best Regards,
Strahil Nikolov







On Wednesday, September 23, 2020 at 22:07:58 GMT+3, Jeremey Wise wrote:






Trying to repair / clean up HCI deployment so it is HA and ready for 
"production".

I have gluster now showing three bricks, all green.

Now I just have an error on one node.. and of course it is the node which is hosting the 
ovirt-engine

# (as I can not send images to this forum... I will move to a breadcrumb 
posting)
Compute -> Hosts -> "thor" (red exclamation)
"Host has to be reinstalled"

To fix gluster... i had to reinstall "vdsm-gluster"

But what package does this error need to be reviewed / fixed with?


-- 
penguinpages
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E5BILIQ5L4OXQA7AEDTGBVQH6XSHP4M4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z2BIKAKTIJFMJ5D244DH44HSKFXBTTCT/


[ovirt-users] Re: Node upgrade to 4.4

2020-09-23 Thread Strahil Nikolov via Users
As far as I know there is an automation to do it for you.

Best Regards,
Strahil Nikolov






On Wednesday, September 23, 2020 at 21:41:13 GMT+3, Vincent Royer wrote:





well that sounds like a risky nightmare. I appreciate your help. 

Vincent Royer
778-825-1057


SUSTAINABLE MOBILE ENERGY SOLUTIONS





On Wed, Sep 23, 2020 at 11:31 AM Strahil Nikolov  wrote:
> Before you reinstall the node, you should use 'gluster volume remove-brick 
> <VOLNAME> replica <count> ovirt_node:/path-to-brick' to reduce the 
> volume to replica 2 (for example). Then you need to 'gluster peer detach 
> ovirt_node' in order to fully cleanup the gluster TSP.
> 
> You will have to remove the bricks that are on that < ovirt_node > before 
> detaching it.
> 
> Once you reinstall with EL 8, you can 'gluster peer probe 
> <reinstalled_node>' and then 'gluster volume add-brick <VOLNAME> replica 
> <count> reinstalled_ovirt_node:/path-to-brick'.
> 
> Note that reusing bricks is not very easy, so just wipe the data via 
> 'mkfs.xfs -i size=512 /dev/block/device'.
> 
> Once all volumes are again a replica 3 , just wait for the healing to go over 
> and you can proceed with the oVirt part.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, September 23, 2020 at 20:45:30 GMT+3, Vincent Royer wrote:
> 
> 
> 
> 
> 
> My confusion is that those documents do not describe any gluster related 
> tasks for Ovirt Nodes.  When I take a node down and install Ovirt Node 4.4 on 
> it, won't all the gluster bricks on that node be lost?  The part describing 
> "preserving local storage", that isn't anything about Gluster, correct?
> 
> 
> Vincent Royer
> 778-825-1057
> 
> 
> SUSTAINABLE MOBILE ENERGY SOLUTIONS
> 
> 
> 
> 
> 
> On Tue, Sep 22, 2020 at 8:31 PM Ritesh Chikatwar  wrote:
>> Vincent,
>> 
>> 
>> This document will be useful
>> https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_to_4-4_4-3_SHE
>> 
>> On Wed, Sep 23, 2020, 3:55 AM Vincent Royer  wrote:
>>> I have 3 nodes running node ng 4.3.9 with a gluster/hci cluster.  How do I 
>>> upgrade to 4.4?  Is there a guide?
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TCX2RUE5RN7RNB45UWBXZ4SKH6KT7ZFC/
>>> 
>> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/J6IERH7OAO6JJ423A3K2KU2R25YXU2NF/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OI7DNZCRGL72NRDTIDJHKVKAP4BT2VKB/


[ovirt-users] Re: Node upgrade to 4.4

2020-09-23 Thread Strahil Nikolov via Users
Before you reinstall the node, you should use 'gluster volume remove-brick 
<VOLNAME> replica <count> ovirt_node:/path-to-brick' to reduce the volume 
to replica 2 (for example). Then you need to 'gluster peer detach ovirt_node' 
in order to fully cleanup the gluster TSP.

You will have to remove the bricks that are on that < ovirt_node > before 
detaching it.

Once you reinstall with EL 8, you can 'gluster peer probe 
<reinstalled_node>' and then 'gluster volume add-brick <VOLNAME> replica 
<count> reinstalled_ovirt_node:/path-to-brick'.

Note that reusing bricks is not very easy, so just wipe the data via 'mkfs.xfs 
-i size=512 /dev/block/device'.

Once all volumes are again a replica 3 , just wait for the healing to go over 
and you can proceed with the oVirt part.
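
A hedged end-to-end example with made-up names (volume 'data', node being rebuilt 'node3', bricks under /gluster_bricks/data/data):

gluster volume remove-brick data replica 2 node3:/gluster_bricks/data/data force
gluster peer detach node3
# ... reinstall node3 on EL8, recreate and mount the brick filesystem ...
gluster peer probe node3
gluster volume add-brick data replica 3 node3:/gluster_bricks/data/data
gluster volume heal data full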

Best Regards,
Strahil Nikolov






On Wednesday, September 23, 2020 at 20:45:30 GMT+3, Vincent Royer wrote:





My confusion is that those documents do not describe any gluster related tasks 
for Ovirt Nodes.  When I take a node down and install Ovirt Node 4.4 on it, 
won't all the gluster bricks on that node be lost?  The part describing 
"preserving local storage", that isn't anything about Gluster, correct?


Vincent Royer
778-825-1057


SUSTAINABLE MOBILE ENERGY SOLUTIONS





On Tue, Sep 22, 2020 at 8:31 PM Ritesh Chikatwar  wrote:
> Vincent,
> 
> 
> This document will be useful
> https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_to_4-4_4-3_SHE
> 
> On Wed, Sep 23, 2020, 3:55 AM Vincent Royer  wrote:
>> I have 3 nodes running node ng 4.3.9 with a gluster/hci cluster.  How do I 
>> upgrade to 4.4?  Is there a guide?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TCX2RUE5RN7RNB45UWBXZ4SKH6KT7ZFC/
>> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J6IERH7OAO6JJ423A3K2KU2R25YXU2NF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2Z7XRREYSSTAFIQHP3AFJYSP5B4GOTRS/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-23 Thread Strahil Nikolov via Users

>1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)"  
>>how does one restart just the oVirt-engine?
ssh to HostedEngine VM and run one of the following:
- reboot
- systemctl restart ovirt-engine.service

>2) I now show in shell  3 nodes, each with the one brick for data, vmstore, 
>>engine (and an ISO one I am trying to make).. with one brick each and all 
>>online and replicating.  But the GUI shows thor (first server running 
>>engine) offline needing to be reloaded.  Now volumes show two bricks.. one 
>>online one offline.  And no option to start / force restart.
If it shows one offline brick -> you can try the "force start". You can go to 
UI -> Storage -> Volume -> select Volume -> Start and then mark "Force" and "OK"


>4) To the question of "did I add third node later."  I would attach 
>>deployment guide I am building ... but can't do that in this forum.  but 
>>this is as simple as I can make it.  3 intel generic servers,  1 x boot 
>>drive , 1 x 512GB SSD,  2 x 1TB SSD in each.  wipe all data all 
>>configuration fresh Centos8 minimal install.. setup SSH setup basic 
>>networking... install cockpit.. run HCI wizard for all three nodes. That is 
>>all.

>How many hosts do you see in oVirt ?
>Help is appreciated.  The main concern I have is gap in what engine sees >and 
>what CLI shows.  Can someone show me where to get logs?  the GUI log  >when I 
>try to "activate" thor server "Status of host thor was set to 
>>NonOperational."  "Gluster command [] failed on server >."  
>is very unhelpful.
Check the following services on the node (a quick check loop is sketched after the list):
- glusterd.service
- sanlock.service
- supervdsmd.service
- vdsmd.service
- ovirt-ha-broker.service
- ovirt-ha-agent.service
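
A quick way to check all of those in one go (a sketch):

for s in glusterd sanlock supervdsmd vdsmd ovirt-ha-broker ovirt-ha-agent; do
    printf '%-20s %s\n' "$s" "$(systemctl is-active ${s}.service)"
done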

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YJ7L5G7NU4PQAPQDCDIMC37JCEEGAILF/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-22 Thread Strahil Nikolov via Users
In my setup , I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#

P.S.: Don't forget to 'dracut -f' due to the fact that the initramfs has a 
local copy of the lvm.conf 
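
For example (a sketch; this rebuilds the initramfs for the running kernel only):

dracut -f /boot/initramfs-$(uname -r).img $(uname -r)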


Best Regards,
Strahil Nikolov




On Tuesday, September 22, 2020 at 23:05:29 GMT+3, Jeremey Wise wrote:







Correct..  on wwid  

 
I do want to make clear here that to get around the error you must ADD (not 
remove) drives in /etc/lvm/lvm.conf so the oVirt Gluster wizard can complete setup of 
the drives.

[root@thor log]# cat /etc/lvm/lvm.conf |grep filter
# Broken for gluster in oVirt
#filter = 
["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", 
"a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", 
"r|.*|"]
# working for gluster wizard in oVirt
filter = 
["a|^/dev/disk/by-id/lvm-pv-uuid-AAHPao-R62q-8aac-410x-ZdA7-UL4i-Bh2bwJ$|", 
"a|^/dev/disk/by-id/lvm-pv-uuid-bSnFU3-jtUj-AGds-07sw-zdYC-52fM-mujuvC$|", 
"a|^/dev/disk/by-id/wwn-0x5001b448b847be41$|", "r|.*|"]



On Tue, Sep 22, 2020 at 3:57 PM Strahil Nikolov  wrote:
> Obtaining the wwid is not exactly correct.
> You can identify them via:
> 
> multipath -v4 | grep 'got wwid of'
> 
> Short example: 
> [root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
> Sep 22 22:55:58 | nvme0n1: got wwid of 
> 'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
> Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
> Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
> Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
> Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'
> 
> Of course if you are planing to use only gluster it could be far easier to 
> set:
> 
> [root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf 
> blacklist {
>         devnode "*"
> }
> 
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> On Tuesday, September 22, 2020 at 22:12:21 GMT+3, Nir Soffer wrote:
> 
> 
> 
> 
> 
> On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise  wrote:
>>
>>
>> Agree about an NVMe Card being put under mpath control.
> 
> NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
> https://bugzilla.redhat.com/1498546
> 
> Of course when the NVMe device is local there is no point to use it
> via multipath.
> To avoid this, you need to blacklist the devices like this:
> 
> 1. Find the device wwid
> 
> For NVMe, you need the device ID_WWN:
> 
>     $ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
>     ID_WWN=eui.5cd2e42a81a11f69
> 
> 2. Add local blacklist file:
> 
>     $ mkdir /etc/multipath/conf.d
>     $ cat /etc/multipath/conf.d/local.conf
>     blacklist {
>         wwid "eui.5cd2e42a81a11f69"
>     }
> 
> 3. Reconfigure multipath
> 
>     $ multipathd reconfigure
> 
> Gluster should do this for you automatically during installation, but
> it does not
> you can do this manually.
> 
>> I have not even gotten to that volume / issue.  My guess is something weird 
>> in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block devices.
>>
>> I will post once I cross bridge of getting standard SSD volumes working
>>
>> On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov  
>> wrote:
>>>
>>> Why is your NVME under multipath ? That doesn't make sense at all .
>>> I have modified my multipath.conf to block all local disks . Also ,don't 
>>> forget the '# VDSM PRIVATE' line somewhere in the top of the file.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Monday, September 21, 2020 at 09:04:28 GMT+3, Jeremey Wise wrote:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> vdo: ERROR - Device /dev/sdc excluded by a filter
>>>
>>>
>>>
>>>
>>> Other server
>>> vdo: ERROR - Device 
>>> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>>>  excluded by a filter.
>>>
>>>
>>> All systems when I go to create VDO volume on blank drives.. I get this 
>>> filter error.  All disk outside of the HCI wizard setup are now blocked 
>>> from creating new Gluster volume group.
>>>
>>> Here is what I see in /dev/lvm/lvm.conf |grep filter
>>> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
>>> filter = 
>>> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", 
>>> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
>>> "r|.*|"]
>>>
>>> [root@odin ~]# ls -al /dev/disk/by-id/
>>> total 0
>>> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
>>> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
>>> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 
>>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
>>> lrwxrwxrwx. 1 root root  10 Sep 18 22:40 
>>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
>>> lrwxrwxrwx. 1 root root  10 Sep 18 22:40 
>>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
>>> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 
>>> 

[ovirt-users] Re: console breaks with signed SSL certs

2020-09-22 Thread Strahil Nikolov via Users
Most probably there is an option to tell it (I mean oVirt) the exact keys to be 
used.

Yet, give the engine a gentle push and reboot it - just to be sure you are not 
chasing a ghost.

I'm using self-signed certs and I can't help much in this case.


Best Regards,
Strahil Nikolov






On Tuesday, September 22, 2020 at 22:54:28 GMT+3, Philip Brown wrote:





Thanks for the initial start, Strahil,

My desktop is Windows, but I took apart the console.vv file, and these are my 
findings:

in the console.vv file, there is a valid CA cert, which is for the signing CA 
for our valid wildcard SSL cert.

However, when I connected to the target host, on the tls-port, i noted that it 
is still using the original self-signed CA, generated by ovirt-engine for the 
host.
Digging with lsof says that the process is qemu-kvm
Looking at command line, that has
  x509-dir=/etc/pki/vdsm/libvirt-spice

So...


I guess I need to update server.key server.cert and ca-cert in there?

except there's a whoole lot of '*key.pem' files under  the /etc/pki directory 
tree.
Suggestions on which is best to update?
For example, there is also

/etc/pki/vdsm/keys/vdsmkey.pem




- Original Message -
From: "Strahil Nikolov" 
To: "users" , "Philip Brown" 
Sent: Tuesday, September 22, 2020 12:09:55 PM
Subject: Re: [ovirt-users] Re: console breaks with signed SSL certs

I assume you are working on linux (for windows you will need to ssh to a linux 
box or even one ofthe Hosts).

When you download the 'console.vv' file for Spice connection - you will have to 
note several stuff:

- host
- tls-port (not the plain 'port=' !!! )
- ca

Process the CA and replace the '\n' with new lines .

Then you can run:
openssl s_client -connect <host>:<tls-port> -CAfile <ca_file> -showcerts
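
A hedged sketch of pulling those fields straight out of console.vv on a Linux box (GNU sed assumed):

host=$(sed -n 's/^host=//p' console.vv)
tlsport=$(sed -n 's/^tls-port=//p' console.vv)
sed -n 's/^ca=//p' console.vv | sed 's/\\n/\n/g' > /tmp/console-ca.pem    # turn the literal \n escapes into real newlines
openssl s_client -connect "${host}:${tlsport}" -CAfile /tmp/console-ca.pem -showcerts </dev/null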

Then you can inspect the certificate chain.
I would then grep for the strings from openssl in the engine.

In my case I find these containing the line with the 'issuer':

/etc/pki/ovirt-engine/certs/websocket-proxy.cer
/etc/pki/ovirt-engine/certs/apache.cer
/etc/pki/ovirt-engine/certs/reports.cer
/etc/pki/ovirt-engine/certs/imageio-proxy.cer
/etc/pki/ovirt-engine/certs/ovn-ndb.cer
/etc/pki/ovirt-engine/certs/ovn-sdb.cer
/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer


Happy Hunting!

Best Regards,
Strahil Nikolov






On Tuesday, September 22, 2020 at 21:52:10 GMT+3, Philip Brown wrote:





More detail on the problem.
after starting remote-viewer  --debug, I get



(remote-viewer.exe:18308): virt-viewer-DEBUG: 11:45:30.594: New spice channel 
0608B240 SpiceMainChannel 0
(remote-viewer.exe:18308): virt-viewer-DEBUG: 11:45:30.594: notebook show 
status 03479130

(remote-viewer.exe:18308): Spice-WARNING **: 11:45:30.691: 
../subprojects/spice-common/common/ssl_verify.c:444:openssl_verify: Error in 
certificate chain verification: self signed certificate in certificate chain 
(num=19:depth1:/C=US/O=xx.65101)

(remote-viewer.exe:18308): GSpice-WARNING **: 11:45:30.692: main-1:0: 
SSL_connect: error:0001:lib(0):func(0):reason(1)
(remote-viewer.exe:18308): virt-viewer-DEBUG: 11:45:30.693: Destroy SPICE 
channel SpiceMainChannel 0


So it seems like there's some additional thing that needs telling to use the 
official signed cert.
Any clues for me please?


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VKSX7CLJ4N7PNCDE5IQ73BIVPAXS7RSF/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/545XR3UZJ3U4H5BKZ4A5PRQEUGWICYQY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6GMZNLDLTAZKL5B2AJUOE5KQRGWNNML5/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-22 Thread Strahil Nikolov via Users
Obtaining the wwid is not exactly correct.
You can identify them via:

multipath -v4 | grep 'got wwid of'

Short example: 
[root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
Sep 22 22:55:58 | nvme0n1: got wwid of 
'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
Sep 22 22:55:58 | sda: got wwid of 'TOSHIBA-TR200_Z7KB600SK46S'
Sep 22 22:55:58 | sdb: got wwid of 'ST500NM0011_Z1M00LM7'
Sep 22 22:55:58 | sdc: got wwid of 'WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189'
Sep 22 22:55:58 | sdd: got wwid of 'WDC_WD15EADS-00P8B0_WD-WMAVU0115133'

Of course if you are planing to use only gluster it could be far easier to set:

[root@ovirt conf.d]# cat /etc/multipath/conf.d/blacklist.conf 
blacklist {
        devnode "*"
}



Best Regards,
Strahil Nikolov

On Tuesday, September 22, 2020 at 22:12:21 GMT+3, Nir Soffer wrote:





On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise  wrote:
>
>
> Agree about an NVMe Card being put under mpath control.

NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
https://bugzilla.redhat.com/1498546

Of course when the NVMe device is local there is no point to use it
via multipath.
To avoid this, you need to blacklist the devices like this:

1. Find the device wwid

For NVMe, you need the device ID_WWN:

    $ udevadm info -q property /dev/nvme0n1 | grep ID_WWN
    ID_WWN=eui.5cd2e42a81a11f69

2. Add local blacklist file:

    $ mkdir /etc/multipath/conf.d
    $ cat /etc/multipath/conf.d/local.conf
    blacklist {
        wwid "eui.5cd2e42a81a11f69"
    }

3. Reconfigure multipath

    $ multipathd reconfigure

Gluster should do this for you automatically during installation, but
it does not
you can do this manually.

> I have not even gotten to that volume / issue.  My guess is something weird 
> in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block devices.
>
> I will post once I cross bridge of getting standard SSD volumes working
>
> On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov  wrote:
>>
>> Why is your NVME under multipath ? That doesn't make sense at all .
>> I have modified my multipath.conf to block all local disks . Also ,don't 
>> forget the '# VDSM PRIVATE' line somewhere in the top of the file.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Monday, September 21, 2020 at 09:04:28 GMT+3, Jeremey Wise wrote:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> vdo: ERROR - Device /dev/sdc excluded by a filter
>>
>>
>>
>>
>> Other server
>> vdo: ERROR - Device 
>> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>>  excluded by a filter.
>>
>>
>> All systems when I go to create VDO volume on blank drives.. I get this 
>> filter error.  All disk outside of the HCI wizard setup are now blocked from 
>> creating new Gluster volume group.
>>
>> Here is what I see in /dev/lvm/lvm.conf |grep filter
>> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
>> filter = 
>> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", 
>> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
>> "r|.*|"]
>>
>> [root@odin ~]# ls -al /dev/disk/by-id/
>> total 0
>> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
>> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
>> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 
>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
>> lrwxrwxrwx. 1 root root  10 Sep 18 22:40 
>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
>> lrwxrwxrwx. 1 root root  10 Sep 18 22:40 
>> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
>> lrwxrwxrwx. 1 root root    9 Sep 18 14:32 
>> ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
>> lrwxrwxrwx. 1 root root    9 Sep 18 22:40 
>> ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
>> lrwxrwxrwx. 1 root root  10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
>> lrwxrwxrwx. 1 root root  10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
>> lrwxrwxrwx. 1 root root  10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
>> lrwxrwxrwx. 1 root root  11 Sep 18 16:40 
>> dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
>> lrwxrwxrwx. 1 root root  10 Sep 18 16:40 
>> dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
>> lrwxrwxrwx. 1 root root  11 Sep 18 16:40 
>> dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
>> lrwxrwxrwx. 1 root root  10 Sep 18 23:35 
>> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>>  -> ../../dm-3
>> lrwxrwxrwx. 1 root root  10 Sep 18 23:49 
>> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>>  -> ../../dm-4
>> lrwxrwxrwx. 1 root root  10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
>> lrwxrwxrwx. 1 root root  10 Sep 18 16:40 
>> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT 
>> -> ../../dm-1
>> lrwxrwxrwx. 1 root root  10 Sep 18 16:40 
>> 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Ovirt uses the "/rhev/mnt... mountpoints.

Do you have those (for each storage domain ) ?

Here is an example from one of my nodes:
[root@ovirt1 ~]# df -hT | grep rhev
gluster1:/engine    fuse.glusterfs  100G   19G   82G  19% /rhev/data-center/mnt/glusterSD/gluster1:_engine
gluster1:/fast4     fuse.glusterfs  100G   53G   48G  53% /rhev/data-center/mnt/glusterSD/gluster1:_fast4
gluster1:/fast1     fuse.glusterfs  100G   56G   45G  56% /rhev/data-center/mnt/glusterSD/gluster1:_fast1
gluster1:/fast2     fuse.glusterfs  100G   56G   45G  56% /rhev/data-center/mnt/glusterSD/gluster1:_fast2
gluster1:/fast3     fuse.glusterfs  100G   55G   46G  55% /rhev/data-center/mnt/glusterSD/gluster1:_fast3
gluster1:/data      fuse.glusterfs  2.4T  535G  1.9T  23% /rhev/data-center/mnt/glusterSD/gluster1:_data



Best Regards,
Strahil Nikolov


On Tuesday, September 22, 2020 at 19:44:54 GMT+3, Jeremey Wise wrote:






Yes.

And at one time it was fine.   I did a graceful shutdown.. and after booting it 
always seems to now have issue with the one server... of course the one hosting 
the ovirt-engine :P

# Three nodes in cluster

# Error when you hover over node


# when i select node and choose "activate"



#Gluster is working fine... this is oVirt who is confused.
[root@medusa vmstore]# mount |grep media/vmstore
medusast.penguinpages.local:/vmstore on /media/vmstore type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev)
[root@medusa vmstore]# echo > /media/vmstore/test.out
[root@medusa vmstore]# ssh -f thor 'echo $HOSTNAME >> /media/vmstore/test.out'
[root@medusa vmstore]# ssh -f odin 'echo $HOSTNAME >> /media/vmstore/test.out'
[root@medusa vmstore]# ssh -f medusa 'echo $HOSTNAME >> /media/vmstore/test.out'
[root@medusa vmstore]# cat /media/vmstore/test.out

thor.penguinpages.local
odin.penguinpages.local
medusa.penguinpages.local


Ideas to fix oVirt?



On Tue, Sep 22, 2020 at 10:42 AM Strahil Nikolov  wrote:
> By the way, did you add the third host in the oVirt ?
> 
> If not , maybe that is the real problem :)
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Tuesday, September 22, 2020 at 17:23:28 GMT+3, Jeremey Wise wrote:
> 
> 
> 
> 
> 
> Its like oVirt thinks there are only two nodes in gluster replication
> 
> 
> 
> 
> 
> # Yet it is clear the CLI shows three bricks.
> [root@medusa vms]# gluster volume status vmstore
> Status of volume: vmstore
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore                        49154     0          Y       9444
> Brick odinst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore                        49154     0          Y       3269
> Brick medusast.penguinpages.local:/gluster_
> bricks/vmstore/vmstore                      49154     0          Y       7841
> Self-heal Daemon on localhost               N/A       N/A        Y       80152
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       
> 141750
> Self-heal Daemon on thorst.penguinpages.loc
> al                                          N/A       N/A        Y       
> 245870
> 
> Task Status of Volume vmstore
> --
> There are no active volume tasks
> 
> 
> 
> How do I get oVirt to re-establish reality to what Gluster sees?
> 
> 
> 
> On Tue, Sep 22, 2020 at 8:59 AM Strahil Nikolov  wrote:
>> Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3 
>> bricks up , but usually it was an UI issue and you go to UI and mark a 
>> "force start" which will try to start any bricks that were down (won't 
>> affect gluster) and will wake up the UI task to verify again brick status.
>> 
>> 
>> https://github.com/gluster/gstatus is a good one to verify your cluster 
>> health , yet human's touch is priceless in any kind of technology.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> On Tuesday, September 22, 2020 at 15:50:35 GMT+3, Jeremey Wise wrote:
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> when I posted last..  in the tread I paste a roling restart.    And...  now 
>> it is replicating.
>> 
>> oVirt still showing wrong.  BUT..   I did my normal test from each of the 
>> three nodes.
>> 
>> 1) Mount Gluster file system with localhost as primary and other two as 
>> tertiary to local mount (like a client would do)
>> 2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
>> 3) repeat from each node then read back that all are in sync.
>> 
>> I REALLY hate reboot (restart) as a fix.  I need to get better with root 
>> 

[ovirt-users] Re: Upgrade Ovirt from 4.2 to 4.4 on CentOS7.4

2020-09-22 Thread Strahil Nikolov via Users
oVirt 4.4 requires EL8.2 , so no you cannot go to 4.4 without upgrading the OS 
to EL8.

Yet, you can still bump the version to 4.3.10 which is still EL7 based and it 
works quite good.
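
A hedged sketch of that engine bump to 4.3 (follow the 4.3 upgrade guide for the details; the repo URL is the usual oVirt release package):

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum update ovirt\*setup\*
engine-setup
yum update        # remaining packages, then update the hosts from the UI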

Best Regards,
Strahil Nikolov






On Tuesday, September 22, 2020 at 17:39:52 GMT+3,  wrote:





Hi everyone,
I am writing for support regarding the oVirt upgrade.
I am using oVirt version 4.2 on the CentOS 7.4 operating system.
The latest release of the oVirt engine is 4.4, which is available for CentOS 8. 
Can I upgrade without upgrading the operating system to CentOS 8?
If I am not wrong, it is not possible to switch from CentOS 7 to CentOS 8 in place.
Can anyone give me some advice? Thank you all!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IWFDBQVPDIX5JHZVIELIU7VIAOSRVROX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KJYX6PDK6K2ZZROVACDHMSSRZ5PBRWUS/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
By the way, did you add the third host in the oVirt ?

If not , maybe that is the real problem :)


Best Regards,
Strahil Nikolov






On Tuesday, September 22, 2020 at 17:23:28 GMT+3, Jeremey Wise wrote:





Its like oVirt thinks there are only two nodes in gluster replication





# Yet it is clear the CLI shows three bricks.
[root@medusa vms]# gluster volume status vmstore
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       9444
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       3269
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       7841
Self-heal Daemon on localhost               N/A       N/A        Y       80152
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       141750
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       245870

Task Status of Volume vmstore
--
There are no active volume tasks



How do I get oVirt to re-establish reality to what Gluster sees?



On Tue, Sep 22, 2020 at 8:59 AM Strahil Nikolov  wrote:
> Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3
> bricks up, but usually it was a UI issue and you go to the UI and mark a "force
> start", which will try to start any bricks that were down (won't affect
> gluster) and will wake up the UI task to verify the brick status again.
> 
> 
> https://github.com/gluster/gstatus is a good one to verify your cluster 
> health , yet human's touch is priceless in any kind of technology.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> 
> when I posted last.. in the thread I pasted a rolling restart. And... now
> it is replicating.
> 
> oVirt still showing wrong.  BUT..   I did my normal test from each of the 
> three nodes.
> 
> 1) Mount Gluster file system with localhost as primary and other two as 
> tertiary to local mount (like a client would do)
> 2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
> 3) repeat from each node then read back that all are in sync.
> 
> I REALLY hate reboot (restart) as a fix.  I need to get better with root 
> cause of gluster issues if I am going to trust it.  Before when I manually 
> made the volumes and it was simply (vdo + gluster) then worst case was that 
> gluster would break... but I could always go into "brick" path and copy data 
> out.
> 
> Now with oVirt... and LVM and thin provisioning etc.  I am abstracted from
> simple file recovery.  Without GLUSTER AND oVirt Engine up... all my
> environment and data is lost.  This means nodes moved more to "pets" than
> cattle.
> 
> And with three nodes.. I can't afford to lose any pets. 
> 
> I will post more when I get the cluster settled and work on those weird notes
> about quorum volumes noted on two nodes when glusterd is restarted.
> 
> Thanks,
> 
> On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
>> A replication issue could mean that one of the clients (FUSE mounts) is not
>> attached to all bricks.
>> 
>> You can check the amount of clients via:
>> gluster volume status all client-list
>> 
>> 
>> As a prevention, just do a rolling restart:
>> - set a host in maintenance and mark it to stop the glusterd service (I'm
>> referring to the UI)
>> - Activate the host , once it was moved to maintenance
>> 
>> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
>> proceed with the next one.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> 
>> I did.
>> 
>> Here are all three nodes with restart. I find it odd ... there has been a
>> set of messages at the end (see below) which I don't know enough about what
>> oVirt laid out to know if it is bad.
>> 
>> ###
>> [root@thor vmstore]# systemctl status glusterd
>> ● glusterd.service - GlusterFS, a clustered file-system server
>>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
>> preset: disabled)
>>   Drop-In: /etc/systemd/system/glusterd.service.d
>>            └─99-cpu.conf
>>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>>      Docs: man:glusterd(8)
>>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
>> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>>  Main PID: 2113 (glusterd)
>>     Tasks: 151 (limit: 1235410)
>>    Memory: 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
That's really weird.
I would give the engine a 'Windows'-style fix (a.k.a. reboot).

I guess some of the engine's internal processes crashed/looped and it doesn't 
see the reality.
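
If a full reboot feels too heavy, restarting the engine service is usually enough (a sketch; 'engine.example.com' is just a placeholder for your engine FQDN):

ssh root@engine.example.com
systemctl restart ovirt-engine
systemctl status ovirt-engine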

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 16:27:25 Гринуич+3, Jeremey Wise 
 написа: 





It's like oVirt thinks there are only two nodes in gluster replication





# Yet it is clear the CLI shows three bricks.
[root@medusa vms]# gluster volume status vmstore
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       9444
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       3269
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       7841
Self-heal Daemon on localhost               N/A       N/A        Y       80152
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       141750
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       245870

Task Status of Volume vmstore
--
There are no active volume tasks



How do I get oVirt to re-establish reality to what Gluster sees?



On Tue, Sep 22, 2020 at 8:59 AM Strahil Nikolov  wrote:
> Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3
> bricks up, but usually it was a UI issue and you go to the UI and mark a "force
> start", which will try to start any bricks that were down (won't affect
> gluster) and will wake up the UI task to verify the brick status again.
> 
> 
> https://github.com/gluster/gstatus is a good one to verify your cluster 
> health , yet human's touch is priceless in any kind of technology.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> 
> when I posted last.. in the thread I pasted a rolling restart. And... now
> it is replicating.
> 
> oVirt still showing wrong.  BUT..   I did my normal test from each of the 
> three nodes.
> 
> 1) Mount Gluster file system with localhost as primary and other two as 
> tertiary to local mount (like a client would do)
> 2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
> 3) repeat from each node then read back that all are in sync.
> 
> I REALLY hate reboot (restart) as a fix.  I need to get better with root 
> cause of gluster issues if I am going to trust it.  Before when I manually 
> made the volumes and it was simply (vdo + gluster) then worst case was that 
> gluster would break... but I could always go into "brick" path and copy data 
> out.
> 
> Now with oVirt... and LVM and thin provisioning etc.  I am abstracted from
> simple file recovery.  Without GLUSTER AND oVirt Engine up... all my
> environment and data is lost.  This means nodes moved more to "pets" than
> cattle.
> 
> And with three nodes.. I can't afford to lose any pets. 
> 
> I will post more when I get the cluster settled and work on those weird notes
> about quorum volumes noted on two nodes when glusterd is restarted.
> 
> Thanks,
> 
> On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
>> A replication issue could mean that one of the clients (FUSE mounts) is not
>> attached to all bricks.
>> 
>> You can check the amount of clients via:
>> gluster volume status all client-list
>> 
>> 
>> As a prevention, just do a rolling restart:
>> - set a host in maintenance and mark it to stop the glusterd service (I'm
>> referring to the UI)
>> - Activate the host , once it was moved to maintenance
>> 
>> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
>> proceed with the next one.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> 
>> I did.
>> 
>> Here are all three nodes with restart. I find it odd ... there has been a
>> set of messages at the end (see below) which I don't know enough about what
>> oVirt laid out to know if it is bad.
>> 
>> ###
>> [root@thor vmstore]# systemctl status glusterd
>> ● glusterd.service - GlusterFS, a clustered file-system server
>>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
>> preset: disabled)
>>   Drop-In: /etc/systemd/system/glusterd.service.d
>>            └─99-cpu.conf
>>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>>      Docs: man:glusterd(8)
>>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
>> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, 

[ovirt-users] Re: Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI

2020-09-22 Thread Strahil Nikolov via Users
>Ok, May I know why you think it's only a bug in SLES?
I never claimed it is a bug in SLES, but a bug in oVirt detecting proper memory
usage in SLES.
The behaviour you observe was normal for RHEL6/CentOS6/SLES11/openSUSE and
below, so it is normal for some OSes. In my oVirt 4.3.10, I see that the
entry there is "SLES11+", but I believe that it is checking the memory on
SLES15 just as if it were a SLES11.


>As I said before, ovirt is behaving the same way even for CentOS7 VMs. I am 
>attaching the details again here below.
Most probably oVirt is checking memory the RHEL6 style , which is not the 
correct one.
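
For comparison, EL7+ guests already expose an estimate that excludes reclaimable buff/cache - a quick check inside the guest:

free -m                            # look at the 'available' column
grep MemAvailable /proc/meminfo    # the value 'available' is derived from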

>My question is why ovirt is treating buff/cache memory as used memory and why 
>is not reporting memory usage just based on actual used memory?
Most probably it is a bug :D , every piece of software has some. I would recommend you
to open a bug on bugzilla.redhat.com for each OS type (for example 1 for
SLES/openSUSE and 1 for EL7/EL8-based).

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQT22I3GTVLAZZPHJ6UAMPIW6Y2XEKEA/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Also in some rare cases, I have seen oVirt showing gluster as 2 out of 3 bricks
up, but usually it was a UI issue and you go to the UI and mark a "force start",
which will try to start any bricks that were down (won't affect gluster) and
will wake up the UI task to verify the brick status again.
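
The CLI equivalent of that UI action (using 'vmstore' from this thread as the example volume) is:

gluster volume start vmstore force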


https://github.com/gluster/gstatus is a good one to verify your cluster health 
, yet human's touch is priceless in any kind of technology.

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
 написа: 







when I posted last.. in the thread I pasted a rolling restart. And... now it
is replicating.

oVirt still showing wrong.  BUT..   I did my normal test from each of the three 
nodes.

1) Mount Gluster file system with localhost as primary and other two as 
tertiary to local mount (like a client would do)
2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
3) repeat from each node then read back that all are in sync.

I REALLY hate reboot (restart) as a fix.  I need to get better with root cause 
of gluster issues if I am going to trust it.  Before when I manually made the 
volumes and it was simply (vdo + gluster) then worst case was that gluster 
would break... but I could always go into "brick" path and copy data out.

Now with oVirt... and LVM and thin provisioning etc.  I am abstracted from
simple file recovery.  Without GLUSTER AND oVirt Engine up... all my
environment and data is lost.  This means nodes moved more to "pets" than
cattle.

And with three nodes.. I can't afford to lose any pets.

I will post more when I get the cluster settled and work on those weird notes about
quorum volumes noted on two nodes when glusterd is restarted.

Thanks,

On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
> A replication issue could mean that one of the clients (FUSE mounts) is not
> attached to all bricks.
> 
> You can check the amount of clients via:
> gluster volume status all client-list
> 
> 
> As a prevention, just do a rolling restart:
> - set a host in maintenance and mark it to stop the glusterd service (I'm
> referring to the UI)
> - Activate the host , once it was moved to maintenance
> 
> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
> proceed with the next one.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> I did.
> 
> Here are all three nodes with restart. I find it odd ... there has been a set
> of messages at the end (see below) which I don't know enough about what oVirt
> laid out to know if it is bad.
> 
> ###
> [root@thor vmstore]# systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/glusterd.service.d
>            └─99-cpu.conf
>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>      Docs: man:glusterd(8)
>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>  Main PID: 2113 (glusterd)
>     Tasks: 151 (limit: 1235410)
>    Memory: 3.8G
>       CPU: 6min 46.050s
>    CGroup: /glusterfs.slice/glusterd.service
>            ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
> INFO
>            ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
> /var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log 
> -S /var/run/gluster/2f41374c2e36bf4d.socket --xlator-option 
> *replicate*.node-uu>
>            ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
> /var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
>  -S /var/r>
>            ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
> /var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
>            ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
> -p 
> /var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
>            └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
> /var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid
>  -S /var/run/glu>
> 
> Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
> clustered file-system server...
> Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a 
> clustered file-system server.
> Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
> 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Usually I first start with:
'gluster volume heal <VOLNAME> info summary'

Anything that is not 'Connected' is bad.

Yeah, the abstraction is not so nice, but the good thing is that you can always 
extract the data from a single node left (it will require to play a little bit 
with the quorum of the volume).

Usually I have seen that the FUSE client fails to reconnect to a "gone bad and
recovered" brick and then you get that endless healing (as FUSE will write the
data to only 2 out of 3 bricks and then a heal is pending :D ).

I would go with the gluster logs and the brick logs, and then you can dig deeper
if you suspect a network issue.
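
A rough starting point (using 'vmstore' as the example volume; default log locations assumed):

gluster volume heal vmstore info summary
less /var/log/glusterfs/glustershd.log      # self-heal daemon log
less /var/log/glusterfs/bricks/*.log        # per-brick logs, on each node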


Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 15:50:35 Гринуич+3, Jeremey Wise 
 написа: 







when I posted last.. in the thread I pasted a rolling restart. And... now it
is replicating.

oVirt still showing wrong.  BUT..   I did my normal test from each of the three 
nodes.

1) Mount Gluster file system with localhost as primary and other two as 
tertiary to local mount (like a client would do)
2) run test file create Ex:   echo $HOSTNAME >> /media/glustervolume/test.out
3) repeat from each node then read back that all are in sync.

I REALLY hate reboot (restart) as a fix.  I need to get better with root cause 
of gluster issues if I am going to trust it.  Before when I manually made the 
volumes and it was simply (vdo + gluster) then worst case was that gluster 
would break... but I could always go into "brick" path and copy data out.

Now with oVirt... and LVM and thin provisioning etc.  I am abstracted from
simple file recovery.  Without GLUSTER AND oVirt Engine up... all my
environment and data is lost.  This means nodes moved more to "pets" than
cattle.

And with three nodes.. I can't afford to lose any pets.

I will post more when I get the cluster settled and work on those weird notes about
quorum volumes noted on two nodes when glusterd is restarted.

Thanks,

On Tue, Sep 22, 2020 at 8:44 AM Strahil Nikolov  wrote:
> A replication issue could mean that one of the clients (FUSE mounts) is not
> attached to all bricks.
> 
> You can check the amount of clients via:
> gluster volume status all client-list
> 
> 
> As a prevention, just do a rolling restart:
> - set a host in maintenance and mark it to stop the glusterd service (I'm
> referring to the UI)
> - Activate the host , once it was moved to maintenance
> 
> Wait for the host's HE score to recover (silver/gold crown in UI) and then 
> proceed with the next one.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> I did.
> 
> Here are all three nodes with restart. I find it odd ... there has been a set
> of messages at the end (see below) which I don't know enough about what oVirt
> laid out to know if it is bad.
> 
> ###
> [root@thor vmstore]# systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/glusterd.service.d
>            └─99-cpu.conf
>    Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
>      Docs: man:glusterd(8)
>   Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
>  Main PID: 2113 (glusterd)
>     Tasks: 151 (limit: 1235410)
>    Memory: 3.8G
>       CPU: 6min 46.050s
>    CGroup: /glusterfs.slice/glusterd.service
>            ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
> INFO
>            ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
> /var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log 
> -S /var/run/gluster/2f41374c2e36bf4d.socket --xlator-option 
> *replicate*.node-uu>
>            ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
> /var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
>  -S /var/r>
>            ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
> /var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
>            ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
> -p 
> /var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
>            └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
> --volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
> /var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid
>  -S /var/run/glu>
> 
> Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
> clustered file-system server...

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
At around Sep 21 20:33 local time , you got  a loss of quorum - that's not good.

Could it be a network 'hiccup'?

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 15:05:16 Гринуич+3, Jeremey Wise 
 написа: 






I did.

Here are all three nodes with restart. I find it odd ... there has been a set
of messages at the end (see below) which I don't know enough about what oVirt laid
out to know if it is bad.

###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
     Docs: man:glusterd(8)
  Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2113 (glusterd)
    Tasks: 151 (limit: 1235410)
   Memory: 3.8G
      CPU: 6min 46.050s
   CGroup: /glusterfs.slice/glusterd.service
           ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
/var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log -S 
/var/run/gluster/2f41374c2e36bf4d.socket --xlator-option *replicate*.node-uu>
           ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
/var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
 -S /var/r>
           ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
/var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
           ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
-p 
/var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
           └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
/var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid 
-S /var/run/glu>

Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
clustered file-system server...
Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a 
clustered file-system server.
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.605674] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume data. Starting lo>
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.639490] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume engine. Starting >
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.680665] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume vmstore. Starting>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.813409] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
0-data-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, discon>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.815147] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
2-engine-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, disc>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.818735] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
4-vmstore-client-0: server 172.16.101.101:24007 has not responded in the last 
30 seconds, dis>
Sep 21 20:33:36 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:36.816978] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
3-iso-client-0: server 172.16.101.101:24007 has not responded in the last 42 
seconds, disconn>
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]# systemctl restart glusterd
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Tue 2020-09-22 07:24:34 EDT; 2s ago
     Docs: man:glusterd(8)
  Process: 245831 ExecStart=/usr/sbin/glusterd -p 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
A replication issue could mean that one of the clients (FUSE mounts) is not
attached to all bricks.

You can check the amount of clients via:
gluster volume status all client-list


As a prevention, just do a rolling restart:
- set a host in maintenance and mark it to stop the glusterd service (I'm referring
to the UI)
- Activate the host , once it was moved to maintenance

Wait for the host's HE score to recover (silver/gold crown in UI) and then 
proceed with the next one.
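
From the CLI the score can also be checked (a sketch):

hosted-engine --vm-status | grep -i score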

Best Regards,
Strahil Nikolov




В вторник, 22 септември 2020 г., 14:55:35 Гринуич+3, Jeremey Wise 
 написа: 






I did.

Here are all three nodes with restart. I find it odd ... there has been a set
of messages at the end (see below) which I don't know enough about what oVirt laid
out to know if it is bad.

###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor 
preset: disabled)
  Drop-In: /etc/systemd/system/glusterd.service.d
           └─99-cpu.conf
   Active: active (running) since Mon 2020-09-21 20:32:26 EDT; 10h ago
     Docs: man:glusterd(8)
  Process: 2001 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2113 (glusterd)
    Tasks: 151 (limit: 1235410)
   Memory: 3.8G
      CPU: 6min 46.050s
   CGroup: /glusterfs.slice/glusterd.service
           ├─ 2113 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─ 2914 /usr/sbin/glusterfs -s localhost --volfile-id shd/data -p 
/var/run/gluster/shd/data/data-shd.pid -l /var/log/glusterfs/glustershd.log -S 
/var/run/gluster/2f41374c2e36bf4d.socket --xlator-option *replicate*.node-uu>
           ├─ 9342 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id data.thorst.penguinpages.local.gluster_bricks-data-data -p 
/var/run/gluster/vols/data/thorst.penguinpages.local-gluster_bricks-data-data.pid
 -S /var/r>
           ├─ 9433 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id engine.thorst.penguinpages.local.gluster_bricks-engine-engine -p 
/var/run/gluster/vols/engine/thorst.penguinpages.local-gluster_bricks-engine-engine.p>
           ├─ 9444 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id vmstore.thorst.penguinpages.local.gluster_bricks-vmstore-vmstore 
-p 
/var/run/gluster/vols/vmstore/thorst.penguinpages.local-gluster_bricks-vmstore-vms>
           └─35639 /usr/sbin/glusterfsd -s thorst.penguinpages.local 
--volfile-id iso.thorst.penguinpages.local.gluster_bricks-iso-iso -p 
/var/run/gluster/vols/iso/thorst.penguinpages.local-gluster_bricks-iso-iso.pid 
-S /var/run/glu>

Sep 21 20:32:24 thor.penguinpages.local systemd[1]: Starting GlusterFS, a 
clustered file-system server...
Sep 21 20:32:26 thor.penguinpages.local systemd[1]: Started GlusterFS, a 
clustered file-system server.
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.605674] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume data. Starting lo>
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.639490] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume engine. Starting >
Sep 21 20:32:28 thor.penguinpages.local glusterd[2113]: [2020-09-22 
00:32:28.680665] C [MSGID: 106003] 
[glusterd-server-quorum.c:348:glusterd_do_volume_quorum_action] 0-management: 
Server quorum regained for volume vmstore. Starting>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.813409] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
0-data-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, discon>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.815147] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
2-engine-client-0: server 172.16.101.101:24007 has not responded in the last 30 
seconds, disc>
Sep 21 20:33:24 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:24.818735] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
4-vmstore-client-0: server 172.16.101.101:24007 has not responded in the last 
30 seconds, dis>
Sep 21 20:33:36 thor.penguinpages.local glustershd[2914]: [2020-09-22 
00:33:36.816978] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 
3-iso-client-0: server 172.16.101.101:24007 has not responded in the last 42 
seconds, disconn>
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]#
[root@thor vmstore]# systemctl restart glusterd
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a 

[ovirt-users] Re: Gluster Domain Storage full

2020-09-22 Thread Strahil Nikolov via Users
Any option to extend the Gluster Volume ?

Other approaches are quite destructive. I guess , you can obtain the VM's xml 
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM , while you are recovering from the situation.

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf dumpxml <VM_NAME> > /some/path/<VM_NAME>.xml

Once you got the VM running on a pure-KVM host , you can go to oVirt and try to 
wipe the VM from the UI. 


Usually that 10% reserve is just in case something like this happens,
but Gluster doesn't check it every second (or the overhead would be crazy).

Maybe you can extend the Gluster volume temporarily, till you manage to move
the VM away to a bigger storage domain. Then you can reduce the volume back to its
original size.
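
A sketch of that temporary extension ('data' and node2.domain.com come from your volume info; the brick path here is made up - any empty directory on a filesystem with free space will do, and gluster may ask you to add 'force' if it is not a separate mount point):

gluster volume add-brick data node2.domain.com:/home/brick_tmp
# ... move the disk away in oVirt ...
gluster volume remove-brick data node2.domain.com:/home/brick_tmp start
gluster volume remove-brick data node2.domain.com:/home/brick_tmp status
gluster volume remove-brick data node2.domain.com:/home/brick_tmp commit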

Best Regards,
Strahil Nikolov



В вторник, 22 септември 2020 г., 14:53:53 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello Strahil,

I just set cluster.min-free-disk to 1%:
# gluster volume info data

Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node2.domain.com:/home/brick1
Options Reconfigured:
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

But I still get the same error: Error while executing action: Cannot move Virtual
Disk. Low disk space on Storage Domain
I restarted the glusterfs volume.
But I cannot do anything with the VM disk.


I know that filling the bricks is very bad; we lost access to the VM. I think
there should be a mechanism to prevent stopping the VM.
We should continue to have access to the VM to free some space.

If you have a VM with a thin-provisioned disk and the VM fills the entire disk,
we get the same problem.

Any idea?

Thanks

José




De: "Strahil Nikolov" 
Para: "users" , supo...@logicworks.pt
Enviadas: Segunda-feira, 21 De Setembro de 2020 21:28:10
Assunto: Re: [ovirt-users] Gluster Domain Storage full

Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume
option.
You can power off the VM , then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.

Keep in mind that filling your bricks is bad, and if you eat that reserve, the
only option would be to try to export the VM as OVA and then wipe it from the current
storage and import it into a bigger storage domain.

Of course it would be more sensible to just expand the gluster volume (either 
scale-up the bricks -> add more disks, or scale-out -> adding more servers with 
disks on them), but I guess that is not an option - right ?

Best Regards,
Strahil Nikolov








В понеделник, 21 септември 2020 г., 15:58:01 Гринуич+3, supo...@logicworks.pt 
 написа: 





Hello,

I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM.
The VM filled all the Domain storage.
The Linux filesystem has 4.1G available and 100% used, the mounted brick has 
0GB available and 100% used

I cannot do anything with this disk; for example, if I try to move it to
another Gluster Domain Storage I get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFN2VOQZPPVCGXAIFEYVIDEVJEUCSWY7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIJUP2HZIWRSQHN4XU3BGGT2ZDKEVJZ3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBAJWBN3QSKWEPWVP4DIL7OGNTASVZLP/


[ovirt-users] Re: Cannot import VM disks from previously detached storage domain

2020-09-22 Thread Strahil Nikolov via Users
Hi Eyal,

thanks for the reply - all the proposed options make sense.
I have opened an RFE -> https://bugzilla.redhat.com/show_bug.cgi?id=1881457 ,
but can you verify that the product/team are the correct ones?

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 12:55:56 Гринуич+3, Eyal Shenitzky 
 написа: 







On Mon, 21 Sep 2020 at 23:19, Strahil Nikolov  wrote:
> Hey Eyal,
> 
> it's really irritating that only ISOs can be imported as disks.
> 
> I had to:
> 1. Delete snapshot (but I really wanted to keep it)
> 2. Detach all disks from existing VM
> 3. Delete the VM
> 4. Import the Vm from the data domain
> 5. Delete the snapshot , so disks from data domain are "in sync" with the 
> non-data disks
> 6. Attach the non-data disks to the VM
> 
> If all disks for a VM were on the same storage domain - I didn't have to wipe 
> my snapshots.
> 
> Should I file a RFE in order to allow disk import for non-ISO disks ?
> If I wanted to rebuild the engine and import the storage domains I would have
> to import the VM the first time, just to delete it and import it again - so
> I can get my VM disks from the storage...
> 

From what I understand you want to file an RFE that requests the option to 
split 'unregistered' entities in a data domain, but unfortunately this is not 
possible.

But we may add different options:
* merge/squash to identical partial VMs
* Override an existing VM
* Force import the VM with a different ID
You can file an RFE with those suggested options.

Also, please add the description of why do you think it is needed.

 
>  Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> В понеделник, 21 септември 2020 г., 11:47:04 Гринуич+3, Eyal Shenitzky 
>  написа: 
> 
> 
> 
> 
> 
> Hi Strahil, 
> 
> Maybe those VMs has more disks on different data storage domains?
> If so, those VMs will remain on the environment with the disks that are not 
> based on the detached storage-domain.
> 
> You can try to import the VM as partial, another option is to remove the VM 
> that remained in the environment but 
> keep the disks so you will be able to import the VM and attach the disks to 
> it.
> 
> On Sat, 19 Sep 2020 at 15:49, Strahil Nikolov via Users  
> wrote:
>> Hello All,
>> 
>> I would like to ask how to proceed further.
>> 
>> Here is what I have done so far on my ovirt 4.3.10:
>> 1. Set in maintenance and detached my Gluster-based storage domain
>> 2. Did some maintenance on the gluster
>> 3. Reattached and activated my Gluster-based storage domain
>> 4. I have imported my ISOs via the Disk Import tab in UI
>> 
>> Next I tried to import the VM Disks , but they are unavailable in the disk 
>> tab
>> So I tried to import the VM:
>> 1. First try - import with partial -> failed due to MAC conflict
>> 2. Second try - import with partial , allow MAC reassignment -> failed as VM 
>> id exists -> recommends to remove the original VM
>> 3. I tried to detach the VMs disks , so I can delete it - but this is not 
>> possible as the Vm already got a snapshot.
>> 
>> 
>> What is the proper way to import my non-OS disks (data domain is slower but 
>> has more space which is more suitable for "data") ?
>> 
>> 
>> Best Regards,
>> Strahil Nikolov
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTJXOIVDWU6DGVZQQ243VKGWJLPKHR4L/
> 
>> 
> 
> 
> -- 
> Regards,
> Eyal Shenitzky
> 
> 


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FEU3KIA76YUA6EDI6SIOY43MHI2Z2ZNB/


[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-22 Thread Strahil Nikolov via Users
This looks much like my OpenBSD 6.6 under the latest AMD CPUs. KVM did not accept a
pretty valid instruction, and it was a bug in KVM.

Maybe you can try to :
- power off the VM
- pick an older CPU type for that VM only
- power on and monitor in the next days 

Do you have a cluster with different cpu vendor (if currently on AMD -> Intel 
and if currently Intel -> AMD)? Maybe you can move it to another cluster and 
identify if the issue happens there too.

Another option is to try to roll back the Windows updates, to identify if any
of them has caused the problem. Yet, that's a workaround and not a fix.


Are you using oVirt 4.3 or 4.4 ?

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 10:08:44 Гринуич+3, Vinícius Ferrão 
 написа: 





Hi Strahil, yes I can’t find anything recent either. You dug way further
than me. I found some regressions on the kernel but I don’t know if they are
related or not: 



https://patchwork.kernel.org/patch/5526561/

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1045027




Regarding the OS, nothing new was installed, just regular Windows Updates.

And finally about nested virtualisation, it’s disabled on hypervisor.




One thing that caught my attention on the link you’ve sent is regarding a 
rootkit: https://devblogs.microsoft.com/oldnewthing/20060421-12/?p=31443




But come on, it’s from 2006…




Well, I’m open to other ideas; the VM just crashed once again:




EAX= EBX=075c5180 ECX=75432002 EDX=000400b6
ESI=c8ddc080 EDI=075d6800 EBP=a19bbdfe ESP=7db5d770
EIP=8000 EFL=0002 [---] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =   00809300
CS =9900 7ff99000  00809300
SS =   00809300
DS =   00809300
FS =   00809300
GS =   00809300
LDT=  000f 
TR =0040 075da000 0067 8b00
GDT=     075dbfb0 0057
IDT=      
CR0=00050032 CR2=242cb25a CR3=001ad002 CR4=
DR0= DR1= DR2= 
DR3= 
DR6=4ff0 DR7=0400
EFER=
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ff ff ff 
ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff




[519192.536247] *** Guest State ***
[519192.536275] CR0: actual=0x00050032, shadow=0x00050032, 
gh_mask=fff7
[519192.536324] CR4: actual=0x2050, shadow=0x, 
gh_mask=f871
[519192.537322] CR3 = 0x001ad002
[519192.538166] RSP = 0xfb047db5d770  RIP = 0x8000
[519192.539017] RFLAGS=0x0002         DR7 = 0x0400
[519192.539861] Sysenter RSP= CS:RIP=:
[519192.540690] CS:   sel=0x9900, attr=0x08093, limit=0x, 
base=0x7ff99000
[519192.541523] DS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[519192.542356] SS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[519192.543167] ES:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[519192.543961] FS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[519192.544747] GS:   sel=0x, attr=0x08093, limit=0x, 
base=0x
[519192.545511] GDTR:                           limit=0x0057, 
base=0xad01075dbfb0
[519192.546275] LDTR: sel=0x, attr=0x1, limit=0x000f, 
base=0x
[519192.547052] IDTR:                           limit=0x, 
base=0x
[519192.547841] TR:   sel=0x0040, attr=0x0008b, limit=0x0067, 
base=0xad01075da000
[519192.548639] EFER =     0x  PAT = 0x0007010600070106
[519192.549460] DebugCtl = 0x  DebugExceptions = 
0x
[519192.550302] Interruptibility = 0009  ActivityState = 
[519192.551137] *** Host State ***
[519192.551963] RIP = 0xc150a034  RSP = 0x88cd9cafbc90
[519192.552805] CS=0010 SS=0018 DS= ES= FS= GS= TR=0040
[519192.553646] FSBase=7f7da762a700 GSBase=88d45f2c 
TRBase=88d45f2c4000
[519192.554496] GDTBase=88d45f2cc000 IDTBase=ff528000
[519192.555347] CR0=80050033 CR3=00033dc82000 CR4=001627e0
[519192.556202] Sysenter RSP= CS:RIP=0010:91596cc0
[519192.557058] EFER = 0x0d01  PAT = 0x0007050600070106
[519192.557913] *** Control State ***
[519192.558757] PinBased=003f CPUBased=b6a1edfa SecondaryExec=0ceb
[519192.559605] EntryControls=d1ff ExitControls=002fefff
[519192.560453] ExceptionBitmap=00060042 PFECmask= PFECmatch=
[519192.561306] VMEntry: intr_info= errcode=0006 ilen=
[519192.562158] VMExit: intr_info= errcode= ilen=0001
[519192.563006]         reason=8021 qualification=
[519192.563860] IDTVectoring: 

[ovirt-users] Re: hosted engine migration

2020-09-22 Thread Strahil Nikolov via Users
So, let's summarize:

- Cannot migrate the HE due to "CPU policy".
- HE's CPU is Westmere - just like the hosts
- You have enough resources on the second HE host (both CPU + MEMORY)

What is the Cluster's CPU type (you can check in UI) ?

Maybe you should enable debugging on various locations to identify the issue.

Anything interesting in libvirt's log for the HostedEngine VM on the
destination host?
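
A few places to look on the destination host (a sketch, default log locations assumed):

hosted-engine --vm-status
tail -n 200 /var/log/libvirt/qemu/HostedEngine.log
grep -i -e migrat -e cpu /var/log/vdsm/vdsm.log | tail -n 100
journalctl -u libvirtd --since "1 hour ago"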


Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 05:37:18 Гринуич+3, ddqlo  
написа: 





Yes, I can. The host which does not host the HE could be reinstalled
successfully in the web UI. After this was done, nothing changed.






在 2020-09-22 03:08:18,"Strahil Nikolov"  写道:
>Can you put 1 host in maintenance and use the "Installation" -> "Reinstall" 
>and enable the HE deployment from one of the tabs ?
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>
>В понеделник, 21 септември 2020 г., 06:38:06 Гринуич+3, ddqlo  
>написа: 
>
>
>
>
>
>so strange! After I set global maintenance, powered off and started HE, the CPU
>of HE became 'Westmere' (I did not change anything). But HE still could not be
>migrated.
>
>HE xml:
>  
>    Westmere
>    
>    
>    
>    
>    
>    
>    
>      
>    
>  
>
>host capabilities: 
>Westmere
>
>cluster cpu type (UI): 
>
>
>host cpu type (UI):
>
>
>HE cpu type (UI):
>
>
>
>
>
>
>
>在 2020-09-19 13:27:35,"Strahil Nikolov"  写道:
>>Hm... interesting.
>>
>>The VM is using 'Haswell-noTSX'  while the host is 'Westmere'.
>>
>>In my case I got no difference:
>>
>>[root@ovirt1 ~]# virsh  dumpxml HostedEngine | grep Opteron
>>   Opteron_G5
>>[root@ovirt1 ~]# virsh capabilities | grep Opteron
>> Opteron_G5
>>
>>Did you update the cluster holding the Hosted Engine ?
>>
>>
>>I guess you can try to:
>>
>>- Set global maintenance
>>- Power off the HostedEngine VM
>>- virsh dumpxml HostedEngine > /root/HE.xml
>>- use virsh edit to change the cpu of the HE (non-permanent) change
>>- try to power on the modified HE
>>
>>If it powers on , you can try to migrate it and if it succeeds - then you 
>>should make it permanent.
>>
>>
>>
>>
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>В петък, 18 септември 2020 г., 04:40:39 Гринуич+3, ddqlo  
>>написа: 
>>
>>
>>
>>
>>
>>HE:
>>
>>
>>  HostedEngine
>>  b4e805ff-556d-42bd-a6df-02f5902fd01c
>>  http://ovirt.org/vm/tune/1.0; 
>>xmlns:ovirt-vm="http://ovirt.org/vm/1.0;>
>>    
>>    http://ovirt.org/vm/1.0;>
>>    4.3
>>    False
>>    false
>>    1024
>>    >type="int">1024
>>    auto_resume
>>    1600307555.19
>>    
>>        external
>>        
>>            4
>>        
>>    
>>    
>>        ovirtmgmt
>>        
>>            4
>>        
>>    
>>    
>>        
>>c17c1934-332f-464c-8f89-ad72463c00b3
>>        /dev/vda2
>>        
>>8eca143a-4535-4421-bd35-9f5764d67d70
>>        
>>----
>>        exclusive
>>        
>>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>        
>>            1
>>        
>>        
>>            
>>                
>>c17c1934-332f-464c-8f89-ad72463c00b3
>>                
>>8eca143a-4535-4421-bd35-9f5764d67d70
>>                >type="int">108003328
>>                
>>/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases
>>                
>>/rhev/data-center/mnt/blockSD/c17c1934-332f-464c-8f89-ad72463c00b3/images/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>                
>>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>            
>>        
>>    
>>    
>>
>>  
>>  67108864
>>  16777216
>>  16777216
>>  64
>>  1
>>  
>>    /machine
>>  
>>  
>>    
>>      oVirt
>>      oVirt Node
>>      7-5.1804.el7.centos
>>      ----0CC47A6B3160
>>      b4e805ff-556d-42bd-a6df-02f5902fd01c
>>    
>>  
>>  
>>    hvm
>>    
>>    
>>    
>>  
>>  
>>    
>>  
>>  
>>    Haswell-noTSX
>>    
>>    
>>    
>>    
>>    
>>    
>>    
>>    
>>    
>>      
>>    
>>  
>>  
>>    
>>    
>>    
>>  
>>  destroy
>>  destroy
>>  destroy
>>  
>>    
>>    
>>  
>>  
>>    /usr/libexec/qemu-kvm
>>    
>>      
>>      
>>      
>>      
>>      
>>      
>>    
>>    
>>      >io='native' iothread='1'/>
>>      >dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'>
>>        
>>      
>>      
>>      
>>      8eca143a-4535-4421-bd35-9f5764d67d70
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      >function='0x1'/>
>>    
>>    
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      >function='0x2'/>
>>    
>>    
>>      
>>    
>>    
>>      c17c1934-332f-464c-8f89-ad72463c00b3
>>      ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>      >offset='108003328'/>
>>    
>>    
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-22 Thread Strahil Nikolov via Users
Have you restarted glusterd.service on the affected node?
glusterd is just the management layer, and restarting it won't affect the brick processes.
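
On the affected node that would be simply:

systemctl restart glusterd
gluster peer status
gluster volume status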

Best Regards,
Strahil Nikolov






В вторник, 22 септември 2020 г., 01:43:36 Гринуич+3, Jeremey Wise 
 написа: 






Start is not an option.

It notes two bricks, but the command line denotes three bricks, all present.

[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       33123
Brick odinst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data                            49152     0          Y       2646
Self-heal Daemon on localhost               N/A       N/A        Y       3004
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       33230
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume data
--
There are no active volume tasks

[root@odin thorst.penguinpages.local:_vmstore]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin thorst.penguinpages.local:_vmstore]#




On Mon, Sep 21, 2020 at 4:32 PM Strahil Nikolov  wrote:
> Just select the volume and press "start" . It will automatically mark "force 
> start" and will fix itself.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В понеделник, 21 септември 2020 г., 20:53:15 Гринуич+3, Jeremey Wise 
>  написа: 
> 
> 
> 
> 
> 
> 
> oVirt engine shows one of the gluster servers having an issue.  I did a
> graceful shutdown of all three nodes over the weekend as I had to move around
> some power connections in prep for a UPS.
> 
> Came back up.. but
> 
> 
> 
> And this is reflected in 2 bricks online (should be three for each volume)
> 
> 
> Command line shows gluster should be happy.
> 
> [root@thor engine]# gluster peer status
> Number of Peers: 2
> 
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
> 
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
> 
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data                              49152     0          Y       11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data                              49152     0          Y       2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data                            49152     0          Y       2646
> Self-heal Daemon on localhost               N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.l
> ocal                                        N/A       N/A        Y       2475
> 
> Task Status of Volume data
> --
> There are no active volume tasks
> 
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine                          49153     0          Y       11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine                          49153     0          Y       2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine                        49153     0          Y       2657
> Self-heal Daemon on localhost               N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.l
> ocal                                        N/A       N/A        Y       2475
> 
> Task Status of Volume engine
> --
> There are no active 

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Strahil Nikolov via Users
Interestingly, I don't find anything recent but this one:
https://devblogs.microsoft.com/oldnewthing/20120511-00/?p=7653

Can you check if anything in the OS was updated/changed recently ?

Also check if the VM has nested virtualization enabled. 
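
On the host, nested virtualization can be checked with (a sketch):

cat /sys/module/kvm_intel/parameters/nested    # Intel hosts, Y/1 = enabled
cat /sys/module/kvm_amd/parameters/nested      # AMD hosts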

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 23:56:26 Гринуич+3, Vinícius Ferrão 
 написа: 





Strahil, thank you man. We finally got some output:

2020-09-15T12:34:49.362238Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: CPU 10 [socket-id: 10, core-id: 0, thread-id: 0], CPU 11 [socket-id: 11, 
core-id: 0, thread-id: 0], CPU 12 [socket-id: 12, core-id: 0, thread-id: 0], 
CPU 13 [socket-id: 13, core-id: 0, thread-id: 0], CPU 14 [socket-id: 14, 
core-id: 0, thread-id: 0], CPU 15 [socket-id: 15, core-id: 0, thread-id: 0]
2020-09-15T12:34:49.362265Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config, ability to start up with partial NUMA mappings is 
obsoleted and will be removed in future
KVM: entry failed, hardware error 0x8021

If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an invalid
state for Intel VT. For example, the guest maybe running in big real mode
which is not supported on less recent Intel processors.

EAX= EBX=01746180 ECX=4be7c002 EDX=000400b6
ESI=8b3d6080 EDI=02d70400 EBP=a19bbdfe ESP=82883770
EIP=8000 EFL=0002 [---] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =   00809300
CS =8d00 7ff8d000  00809300
SS =   00809300
DS =   00809300
FS =   00809300
GS =   00809300
LDT=  000f 
TR =0040 04c59000 0067 8b00
GDT=    04c5afb0 0057
IDT=     
CR0=00050032 CR2=c1b7ec48 CR3=001ad002 CR4=
DR0= DR1= DR2= 
DR3= 
DR6=0ff0 DR7=0400
EFER=
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ff ff ff 
ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
2020-09-16T04:11:55.344128Z qemu-kvm: terminating on signal 15 from pid 1 
()
2020-09-16 04:12:02.212+: shutting down, reason=shutdown






That’s the issue, I got this in the logs of both physical machines. The
probability that both machines are damaged is not very high, right? So even
with the log saying it’s a hardware error, it may be software related? And
again, this only happens with this VM.

> On 21 Sep 2020, at 17:36, Strahil Nikolov  wrote:
> 
> Usually libvirt's log might provide hints (yet , no clues) of any issues.
> 
> For example: 
> /var/log/libvirt/qemu/<VM_NAME>.log
> 
> Anything changed recently (maybe oVirt version was increased) ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В понеделник, 21 септември 2020 г., 23:28:13 Гринуич+3, Vinícius Ferrão 
>  написа: 
> 
> 
> 
> 
> 
> Hi Strahil, 
> 
> 
> 
> Both disks are VirtIO-SCSI and are Preallocated:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Thanks,
> 
> 
> 
> 
> 
> 
> 
> 
>>  
>> On 21 Sep 2020, at 17:09, Strahil Nikolov  wrote:
>> 
>> 
>>  
>> What type of disks are you using ? Any change you use thin disks ?
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> В понеделник, 21 септември 2020 г., 07:20:23 Гринуич+3, Vinícius Ferrão via 
>> Users  написа: 
>> 
>> 
>> 
>> 
>> 
>> Hi, sorry to bump the thread.
>> 
>> But I'm still stuck with this issue on the VM. These crashes are still happening, and
>> I really don’t know what to do. Since there’s nothing in the logs, except for
>> that message in `dmesg` of the host machine, I started changing settings to
>> see if anything changes or if at least I get a pattern.
>> 
>> What I’ve tried:
>> 1. Disabled I/O Threading on VM.
>> 2. Increased I/O Threading to 2 from 1.
>> 3. Disabled Memory Ballooning.
>> 4. Reduced VM resources from 10 CPUs and 48GB of RAM to 6 CPUs and 24GB of 
>> RAM.
>> 5. Moved the VM to another host.
>> 6. Dedicated a specific host to this VM.
>> 7. Checked the storage system to see if there’s any resource starvation, 
>> but everything seems to be fine.
>> 8. Checked both iSCSI switches to see if there’s something wrong with the 
>> fabrics: 0 errors.
>> 
>> I’m really running out of ideas. The VM was working normally and suddenly 
>> this started.
>> 
>> Thanks,
>> 
>> PS: When I was typing this message it crashed again:
>> 
>> [427483.126725] *** Guest State ***
>> [427483.127661] CR0: actual=0x00050032, shadow=0x00050032, 
>> gh_mask=fff7
>> [427483.128505] CR4: actual=0x2050, shadow=0x, 
>> gh_mask=f871
>> [427483.129342] CR3 = 0x0001849ff002
>> [427483.130177] RSP = 0xb10186b0  RIP = 0x8000
>> [427483.131014] RFLAGS=0x0002        DR7 = 0x0400
>> [427483.131859] 

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Strahil Nikolov via Users
Usually libvirt's log might provide hints (though not definitive clues) about any issues.

For example: 
/var/log/libvirt/qemu/<VM_name>.log
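A minimal way to watch that log while reproducing the problem (the VM name below is a placeholder):

tail -n 200 -f /var/log/libvirt/qemu/<VM_name>.log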

Did anything change recently (maybe the oVirt version was upgraded)?

Best Regards,
Strahil Nikolov






On Monday, 21 September 2020, 23:28:13 GMT+3, Vinícius Ferrão wrote: 





Hi Strahil, 



Both disks are VirtIO-SCSI and are Preallocated:


Thanks,








>  
> On 21 Sep 2020, at 17:09, Strahil Nikolov  wrote:
> 
> 
>  
> What type of disks are you using? Any chance you are using thin disks?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, 21 September 2020, 07:20:23 GMT+3, Vinícius Ferrão via Users wrote: 
> 
> 
> 
> 
> 
> Hi, sorry to bump the thread.
> 
> But I’m still stuck with this issue on the VM. These crashes are still happening, and 
> I really don’t know what to do. Since there’s nothing in the logs, except for 
> that message in `dmesg` on the host machine, I started changing settings to see 
> if anything changes or if I at least get a pattern.
> 
> What I’ve tried:
> 1. Disabled I/O Threading on VM.
> 2. Increased I/O Threading to 2 from 1.
> 3. Disabled Memory Ballooning.
> 4. Reduced VM resources from 10 CPUs and 48GB of RAM to 6 CPUs and 24GB of 
> RAM.
> 5. Moved the VM to another host.
> 6. Dedicated a specific host to this VM.
> 7. Checked the storage system to see if there’s any resource starvation, but 
> everything seems to be fine.
> 8. Checked both iSCSI switches to see if there’s something wrong with the 
> fabrics: 0 errors.
> 
> I’m really running out of ideas. The VM was working normally and suddenly 
> this started.
> 
> Thanks,
> 
> PS: When I was typing this message it crashed again:
> 
> [427483.126725] *** Guest State ***
> [427483.127661] CR0: actual=0x00050032, shadow=0x00050032, 
> gh_mask=fff7
> [427483.128505] CR4: actual=0x2050, shadow=0x, 
> gh_mask=f871
> [427483.129342] CR3 = 0x0001849ff002
> [427483.130177] RSP = 0xb10186b0  RIP = 0x8000
> [427483.131014] RFLAGS=0x0002        DR7 = 0x0400
> [427483.131859] Sysenter RSP= CS:RIP=:
> [427483.132708] CS:  sel=0x9b00, attr=0x08093, limit=0x, 
> base=0x7ff9b000
> [427483.133559] DS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.134413] SS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.135237] ES:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.136040] FS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.136842] GS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.137629] GDTR:                          limit=0x0057, 
> base=0xb10186eb4fb0
> [427483.138409] LDTR: sel=0x, attr=0x1, limit=0x000f, 
> base=0x
> [427483.139202] IDTR:                          limit=0x, 
> base=0x
> [427483.139998] TR:  sel=0x0040, attr=0x0008b, limit=0x0067, 
> base=0xb10186eb3000
> [427483.140816] EFER =    0x  PAT = 0x0007010600070106
> [427483.141650] DebugCtl = 0x  DebugExceptions = 
> 0x
> [427483.142503] Interruptibility = 0009  ActivityState = 
> [427483.143353] *** Host State ***
> [427483.144194] RIP = 0xc0c65024  RSP = 0x9253c0b9bc90
> [427483.145043] CS=0010 SS=0018 DS= ES= FS= GS= TR=0040
> [427483.145903] FSBase=7fcc13816700 GSBase=925adf24 
> TRBase=925adf244000
> [427483.146766] GDTBase=925adf24c000 IDTBase=ff528000
> [427483.147630] CR0=80050033 CR3=0010597b6000 CR4=001627e0
> [427483.148498] Sysenter RSP= CS:RIP=0010:8f196cc0
> [427483.149365] EFER = 0x0d01  PAT = 0x0007050600070106
> [427483.150231] *** Control State ***
> [427483.151077] PinBased=003f CPUBased=b6a1edfa SecondaryExec=0ceb
> [427483.151942] EntryControls=d1ff ExitControls=002fefff
> [427483.152800] ExceptionBitmap=00060042 PFECmask= PFECmatch=
> [427483.153661] VMEntry: intr_info= errcode=0006 ilen=
> [427483.154521] VMExit: intr_info= errcode= ilen=0004
> [427483.155376]        reason=8021 qualification=
> [427483.156230] IDTVectoring: info= errcode=
> [427483.157068] TSC Offset = 0xfffccfc261506dd9
> [427483.157905] TPR Threshold = 0x0d
> [427483.158728] EPT pointer = 0x0009b437701e
> [427483.159550] PLE Gap=0080 Window=0008
> [427483.160370] Virtual processor ID = 0x0004
> 
> 
> 
>> On 16 Sep 2020, at 17:11, Vinícius Ferrão  wrote:
>> 
>> Hello,
>> 
>> I’m an Exchange Server VM that’s going down to suspend without possibility 
>> of recovery. I need to click on shutdown and them power on. I can’t find 
>> anything 

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Strahil Nikolov via Users
Just select the volume and press "Start". It will automatically mark "force 
start" and the volume will fix itself.

Best Regards,
Strahil Nikolov






On Monday, 21 September 2020, 20:53:15 GMT+3, Jeremey Wise wrote: 






oVirt engine shows one of the gluster servers having an issue. I did a 
graceful shutdown of all three nodes over the weekend, as I had to move around some 
power connections in preparation for a UPS.

They came back up... but



And this is reflected in 2 bricks online (should be three for each volume)


Command line shows gluster should be happy.

[root@thor engine]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@thor engine]#

# All bricks showing online
[root@thor engine]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       11001
Brick odinst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data                            49152     0          Y       2646
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume data
--
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/engine/engine                          49153     0          Y       11012
Brick odinst.penguinpages.local:/gluster_br
icks/engine/engine                          49153     0          Y       2982
Brick medusast.penguinpages.local:/gluster_
bricks/engine/engine                        49153     0          Y       2657
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume engine
--
There are no active volume tasks

Status of volume: iso
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/iso/iso                                49156     49157      Y       151426
Brick odinst.penguinpages.local:/gluster_br
icks/iso/iso                                49156     49157      Y       69225
Brick medusast.penguinpages.local:/gluster_
bricks/iso/iso                              49156     49157      Y       45018
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume iso
--
There are no active volume tasks

Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       11023
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       2993
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       2668
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004

Task Status of Volume vmstore
--
There are no active volume tasks

[ovirt-users] Re: Gluster Domain Storage full

2020-09-21 Thread Strahil Nikolov via Users
Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume 
option.
You can power off the VM, then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.
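On the CLI, that step would look roughly like this (the volume name 'data' is a placeholder for your storage domain's volume):

gluster volume set data cluster.min-free-disk 1%

# verify the option took effect
gluster volume get data cluster.min-free-disk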

Keep in mind that filling your bricks is bad; if you eat into that reserve, the 
only option would be to export the VM as an OVA, then wipe it from the current 
storage and import it into a bigger storage domain.

Of course it would be more sensible to just expand the gluster volume (either 
scale up the bricks -> add more disks, or scale out -> add more servers with 
disks on them), but I guess that is not an option, right?
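For reference, a scale-out sketch on the CLI (the hostname and brick path are hypothetical; replicated volumes need bricks added in multiples of the replica count):

# add a brick to the volume
gluster volume add-brick data newhost.example.com:/gluster_bricks/data/data

# then spread existing data across the new brick
gluster volume rebalance data start
gluster volume rebalance data status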

Best Regards,
Strahil Nikolov








On Monday, 21 September 2020, 15:58:01 GMT+3, supo...@logicworks.pt wrote: 





Hello,

I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM.
The VM filled all the Domain storage.
The Linux filesystem has 4.1G available and shows 100% used; the mounted brick has 
0GB available and 100% used.

I cannot do anything with this disk. For example, if I try to move it to 
another Gluster Domain Storage, I get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFN2VOQZPPVCGXAIFEYVIDEVJEUCSWY7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XQYUD2DGE4CBMYXEWRKMOYBSEGW4Y2O/


[ovirt-users] Re: Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI

2020-09-21 Thread Strahil Nikolov via Users
For some OS versions, oVirt's behavior is accurate, but for other 
versions it is not.
I think it is more accurate to say that oVirt improperly calculates memory 
for SLES 15/openSUSE 15.

I would open a bug at bugzilla.redhat.com.


Best Regards,
Strahil Nikolov






On Monday, 21 September 2020, 15:15:42 GMT+3, KISHOR K wrote: 





Hi,

I think I already checked that.
What I meant (since the beginning) was that oVirt reports memory usage in the GUI 
the same way regardless of whether the OS is CentOS or SLES, in our case.
My main question is why oVirt reports the memory usage percentage based on 
"free" memory rather than on "available" memory, which is basically the 
sum of "free" and "buff/cache". 
Buffer/cache is temporary memory that gets released anyway when new 
processes and applications need it. 
That means oVirt should consider the actual available memory 
left and report usage accordingly in the GUI, but what we see now is a different 
behavior.
I was very worried when I saw the memory usage at 98% and highlighted in red 
for many of the VMs in the GUI. But when I checked the memory actually used by 
those VMs, it was always below 50%.
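To illustrate the difference inside a guest, a quick check of both numbers (the awk field positions assume the layout of procps-ng's 'free -b' output, where $2=total, $4=free and $7=available):

free -b | awk '/^Mem:/ {printf "free-based used%%: %.1f\navailable-based used%%: %.1f\n", (1-$4/$2)*100, (1-$7/$2)*100}'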

Could you clarify how this behavior of oVirt can be OS specific?

I hope I explained the issue clearly; let me know if it is still unclear.
Thanks in advance.


/Kishore
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HAJ6TT74U33FAFIJTXTYZHVHYKKSWMN7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GZATZSXBJCPY56MCEAHIGZSK7L3LV2IS/


[ovirt-users] Re: Cannot import VM disks from previously detached storage domain

2020-09-21 Thread Strahil Nikolov via Users
Hey Eyal,

it's really irritating that only ISOs can be imported as disks.

I had to:
1. Delete snapshot (but I really wanted to keep it)
2. Detach all disks from existing VM
3. Delete the VM
4. Import the VM from the data domain
5. Delete the snapshot, so the disks from the data domain are "in sync" with the 
non-data disks
6. Attach the non-data disks to the VM

If all of a VM's disks were on the same storage domain, I didn't have to wipe 
my snapshots.

Should I file an RFE to allow disk import for non-ISO disks?
If I wanted to rebuild the engine and import the storage domains, I would have 
to import the VM a first time just to delete it and import it again - so I 
can get my VM disks from the storage...

Best Regards,
Strahil Nikolov





On Monday, 21 September 2020, 11:47:04 GMT+3, Eyal Shenitzky wrote: 





Hi Strahil, 

Maybe those VMs have more disks on different data storage domains?
If so, those VMs will remain in the environment with the disks that are not 
based on the detached storage domain.

You can try to import the VM as partial; another option is to remove the VM 
that remained in the environment but keep the disks, so you will be able to 
import the VM and attach the disks to it.

On Sat, 19 Sep 2020 at 15:49, Strahil Nikolov via Users  wrote:
> Hello All,
> 
> I would like to ask how to proceed further.
> 
> Here is what I have done so far on my ovirt 4.3.10:
> 1. Set in maintenance and detached my Gluster-based storage domain
> 2. Did some maintenance on the gluster
> 3. Reattached and activated my Gluster-based storage domain
> 4. I have imported my ISOs via the Disk Import tab in UI
> 
> Next I tried to import the VM disks, but they are unavailable in the disk tab
> So I tried to import the VM:
> 1. First try - import with partial -> failed due to MAC conflict
> 2. Second try - import with partial, allow MAC reassignment -> failed as VM 
> id exists -> recommends removing the original VM
> 3. I tried to detach the VM's disks, so I can delete it - but this is not 
> possible as the VM already has a snapshot.
> 
> 
> What is the proper way to import my non-OS disks (the data domain is slower but 
> has more space, which is more suitable for "data")?
> 
> 
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTJXOIVDWU6DGVZQQ243VKGWJLPKHR4L/
> 


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4F64EFRW5AHOOIMSB2OOFF4FVWCZ4YV4/

