I would recommend you check this one:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/chap-event_notifications
Best Regards,
Strahil Nikolov
On Tuesday, 17 November 2020, 22:00:08 GMT+2, Chris Adams
wrote:
I just noticed
Hi Bradley,
Usually this is not supposed to happen.
I can propose a quick fix:
- Set a node into maintenance (via the UI) and then, from the "Installation"
drop-down menu (upper right), click "Reinstall". There is a tab for the
HostedEngine where you have to mark it as deployed/installed.
If it
Once the VM fails, you can check the whole XML in the host's vdsm log.
Can you share that?
Best Regards,
Strahil Nikolov
On Wednesday, 18 November 2020, 11:31:55 GMT+2, tiziano.paci...@par-tec.it
wrote:
Hi,
I installed a new server, using the oVirt ISO, with the target of
First,
you need to set an alias for virsh like this one:
alias virsh='virsh -c
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Next,
you need to check with virsh if the HostedEngine VM is available (as you need
its config). If not, you can check the vdsm.log for the full
Actually you can import the NFS domain, but all VM disks should be on that NFS and the
VM must be stopped. Also, don't forget the template (if you used a template) - it
also must be on the NFS.
Then importing is a piece of cake.
Best Regards,
Strahil Nikolov
On Monday, 16 November 2020
Can you check if you got any errors like:
java.lang.OutOfMemoryError: Java heap space
If yes, you can increase the Java heap values (but they will be overwritten on the next
upgrade).
According to https://access.redhat.com/articles/1256093 , you can increase the
Java heap size by:
Create a conf under
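As a rough illustration (the file name and heap values are placeholders - verify the
exact variable names against the article above), the override usually goes into a
drop-in file on the engine host:
# cat > /etc/ovirt-engine/engine.conf.d/99-heap.conf <<'EOF'
ENGINE_HEAP_MIN="2g"
ENGINE_HEAP_MAX="2g"
EOF
# systemctl restart ovirt-engine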
Hi Rob,
I would check the vdsm logs on the host where the HostedEngine VM was already
running (the source).
Also, you can check the logs on the HE itself.
In the UI, check the cluster CPU settings and your hosts. Is it possible that
one node has a newer CPU than the other?
Best Regards,
Strahil
If this is oVirt 4.4, then open a bug on bugzilla.redhat.com.
Best Regards,
Strahil Nikolov
On Saturday, 7 November 2020, 11:42:48 GMT+2, Rob Verduijn
wrote:
Hi,
Found it.
The hardware is identical (3x HP MicroServer G10, identical disks, CPU and RAM).
It is a
Hi,
I haven't done it yet, but I'm planning to do it.
As I haven't tested the following, I can't guarantee that it will work:
0. Gluster snapshots on all volumes
1. Set a node in maintenance
2. Create a full backup of the engine
3. Set global maintenance and power off the current engine
4.
Have you made any changes recently?
oVirt Engine requires key-based SSH to the host's root user. If you hardened
your hosts recently, modify your /etc/ssh/sshd_config to override that
behaviour for the HostedEngine only.
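For example, a hardened sshd_config could keep an exception just for the engine with
something like this (untested sketch; the address is a placeholder for your HostedEngine
IP):
Match Address 192.0.2.10
    PermitRootLogin prohibit-password
    PubkeyAuthentication yes
Then reload sshd: systemctl reload sshd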
Also check the 'supervdsmd.service' & 'vdsmd.service' status - they
You can use vdsm hooks to do almost everything.
About the Floating IP, I keep it for VMs in the same VLAN.
Best Regards,
Strahil Nikolov
On Monday, 9 November 2020, 10:35:47 GMT+2, yam yam
wrote:
Hello everyone!
I'm wondering whether there is any feature like applying routing
Hi,
I am interested in these steps too, for a clean and straightforward procedure.
Although this plan looks pretty good, I am still wondering:
Step 4
>Backup all gluster config files
- could you please let me know what would be the exact location(s) of the files
to be backed up?
You still haven't provided debug logs from the Gluster Bricks.
There will always be a chance that a bug hits you ... no matter the OS and tech.
What matters is how you debug and overcome that bug.
Check the Gluster brick debug logs, and you can test if the issue happens with
an older version.
Have you thought of using a vdsm hook that executes your logic once a VM is
removed? This way users won't have the ability to alter the DNS records
themselves, which is way more secure and reliable.
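As a rough, untested sketch (the hook point and the _hook_domxml variable should be
checked against the vdsm hooks documentation for your version), a shell hook dropped
under /usr/libexec/vdsm/hooks/after_vm_destroy/ could look like:
#!/bin/bash
# read the VM name from the libvirt domain XML handed to the hook
vm_name=$(xmllint --xpath 'string(/domain/name)' "$_hook_domxml")
logger "vdsm hook: cleaning up DNS record for ${vm_name}"
# call nsupdate (or your DNS API) for ${vm_name} here
Don't forget to make the script executable.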
Best Regards,
Strahil Nikolov
On Saturday, 21 November 2020, 10:26:45 GMT+2, Nathanaël
No, but keep an eye on your "/var/log" as debug provides a lot of info.
Usually when you get a failure to move the disk, you can disable debug and check the
logs.
Best Regards,
Strahil Nikolov
On Sunday, 22 November 2020, 21:12:26 GMT+2,
wrote:
Do I need to restart gluster
Do the files really exist?
Any heals pending?
Best Regards,
Strahil Nikolov
On Sunday, 15 November 2020, 16:24:48 GMT+2, supo...@logicworks.pt
wrote:
Here it is:
# sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw -O raw
It clearly indicates the problem - enable SELinux.
Best Regards,
Strahil Nikolov
On Friday, 13 November 2020, 17:08:57 GMT+2,
wrote:
Hello,
I tried to use the Gluster deployment. I got this error message:
failed: [llrovirttest02.in2p3.fr] (item={u'path': u'/gluster_bricks/engine',
Usually it is best practice to have the cluster expanded by a multiple of '3',
because the Gluster volume is of type 'replica 3' or 'replica 3 arbiter 1'.
Such volumes can only be expanded by 3 bricks at a time, and the best practice is to have
one node per brick.
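For example, expanding a replica 3 volume by one more set of bricks could look like this
(volume name, hosts and brick paths are placeholders):
gluster volume add-brick data replica 3 \
    node4:/gluster_bricks/data/data \
    node5:/gluster_bricks/data/data \
    node6:/gluster_bricks/data/data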
Yet, the Linux world gives freedom, so
The ansible playbook expects the "/dev/sdb" (which you have defined)
to be without any partitions or data.
Just wipe it and try again.
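For example (destructive - double-check the device first):
lsblk /dev/sdb
wipefs -a /dev/sdb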
Best Regards,
Strahil Nikolov
On Monday, 2 November 2020, 17:59:13 GMT+2, garcialiang.a...@gmail.com
wrote:
Hello,
I've some
Erm... no one?
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020, 02:51:00 GMT+2, Strahil Nikolov via Users
wrote:
Hello All,
I would like to learn more about OVN and especially the maximum MTU that I can
use in my environment.
Current Setup 4.3.10
Network
> On Friday, 30 October 2020, 13:25:05 GMT+2, Dominik Holler
> wrote:
> On Thu, Oct 29, 2020 at 9:36 PM Alex K wrote:
>> On Tue, Oct 27, 2020, 02:49 Strahil Nikolov via Users
>> wrote:
13:25:05 GMT+2, Dominik Holler
> wrote:
> On Thu, Oct 29, 2020 at 9:36 PM Alex K wrote:
>> On Tue, Oct 27, 2020, 02:49 Strahil Nikolov via Users
>> wrote:
>>> Hello All,
in 4.3.10's UI it shows 1500 :)
On Friday, 30 October 2020, 13:25:05 GMT+2, Dominik Holler
wrote:
On Thu, Oct 29, 2020 at 9:36 PM Alex K wrote:
> On Tue, Oct 27, 2020, 02:49 Strahil Nikolov via Users wrote:
>> Hello All,
>> I would l
The only one I know is RH318, but it is a paid one.
Best Regards,
Strahil Nikolov
On Saturday, 31 October 2020, 02:03:59 GMT+2, i...@worldhostess.com
wrote:
Can someone recommend a training video or some kind of step-by-step document to
do the installation and administration
Check if qemu-guest-agent(s) is available and use that instead.
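On Ubuntu that would be something like (illustrative, assuming the package from the
standard Ubuntu repos):
apt-get install qemu-guest-agent
systemctl enable --now qemu-guest-agent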
Best Regards,
Strahil Nikolov
On Saturday, 31 October 2020, 22:04:46 GMT+2,
wrote:
What is the best way to install the oVirt guest agent on Ubuntu 16.04.6?
What I did:
# apt-get install ovirt-guest-agent
I changed value
Where is that option?
Best Regards,
Strahil Nikolov
On Sunday, 1 November 2020, 08:56:44 GMT+2, Joris DEDIEU
wrote:
Hi list,
I forgot to check "Discard after Delete" when creating a new volume. Is there a
way (other than emptying the volume) to reclaim free blocks?
In
We do not know.
Nodes 3, 4, 5 need to be in the same Gluster network as nodes 1, 2, 3.
Once you create your bricks (inode size >= 512) and mount them permanently, you
can create an extra volume or expand the current volumes.
Once you have prepared the nodes, you can add them from the UI.
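As a rough illustration of the brick preparation (device, VG/LV and mount names are
placeholders):
mkfs.xfs -f -i size=512 /dev/gluster_vg/gluster_lv_data
mkdir -p /gluster_bricks/data
echo '/dev/gluster_vg/gluster_lv_data /gluster_bricks/data xfs defaults 0 0' >> /etc/fstab
mount /gluster_bricks/data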
Best Regards,
I agree with Alex.
Also, most of the kernel tunables proposed in that thread are also available in
the tuned profiles provided by the redhat-storage-server source rpm available
at ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/
Usually the alignment of XFS on top of the HW RAID
I might be wrong, but I think that the SAN LUN is used as a PV and then each
disk is an LV from the host's perspective.
Of course, I could be wrong and someone can correct me. All my oVirt
experience is based on HCI (Gluster + oVirt).
Best Regards,
Strahil Nikolov
On Thursday, 22 October
Wed, Oct 21, 2020 at 9:16 PM Strahil Nikolov via Users
wrote:
> I usually run the following (HostedEngine):
>
> [root@engine ~]# su - postgres
>
> -bash-4.2$ source /opt/rh/rh-postgresql10/enable
This is applicable to 4.3, on EL7. For 4.4 this isn't needed.
Also, IIRC this
The virt settings are these:
[root@ovirt1 slow]# cat /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
When you mount the Gluster volume manually and run "df -h /gluster/mount/point",
how much space does it show?
Best Regards,
Strahil Nikolov
On Wednesday, 4 November 2020, 17:48:56 GMT+2, hjadavall...@ukaachen.de
wrote:
Hello,
Good Day!
I'm Hariharan and I'm working as System
I think the minimum is 60G, and it seems that your deployment has failed, so can
you clean up the share and extend it to 65G?
Best Regards,
Strahil Nikolov
On Thursday, 5 November 2020, 11:17:48 GMT+2, hjadavall...@ukaachen.de
wrote:
Dear Mr. Strahil Nikolov,
Thanks for your
The engine volume has to be cleaned up.
Here is mine:
[root@ovirt1 ~]# df -h /rhev/data-center/mnt/glusterSD/gluster1\:_engine/
Filesystem Size Used Avail Use% Mounted on
gluster1:/engine 100G 19G 82G 19%
/rhev/data-center/mnt/glusterSD/gluster1:_engine
It seems that
This is just a guess, but you might be able to install fence_xvm on all
virtualized hosts.
Best Regards,
Strahil Nikolov
On Thursday, 5 November 2020, 16:00:40 GMT+2, jb
wrote:
Hello,
I would like to build a hyperconverged Gluster setup with a hosted engine in a
virtual
Create a VDO device with 512-byte sector emulation ('--emulate512') and use that for your LVM PV.
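For illustration (the VDO name and device are placeholders):
vdo create --name=vdo_sdb --device=/dev/sdb --emulate512=enabled
pvcreate /dev/mapper/vdo_sdb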
Best Regards,
Strahil Nikolov
On Thursday, 5 November 2020, 18:24:32 GMT+2, Rob Verduijn
wrote:
Hello,
After a serious struggle I finally managed to get ovirt-hosted engine with the
hyperconverged setup to
We have found nothing to clean up the meta files from the GlusterFS.
Thanks
Marcel
-- Original Message --
From: "Strahil Nikolov via Users"
To: "users@ovirt.org" ; "hjadavall...@ukaachen.de"
Sent: 05.11.2020 17:10:35
Subject: [ovirt-users] Re: Issue
>-- Original Message --
>From: "Strahil Nikolov via Users"
>To: "users@ovirt.org" ; "hjadavall...@ukaachen.de"
>Sent: 05.11.2020 17:10:35
>Subject: [ovirt-users] Re: Issue with ovirt self hosted engine installation 4.4
>The engine volume has
Cleaning up is OK, but I have no idea why it fails.
What was the exact error?
Best Regards,
Strahil Nikolov
On Friday, 6 November 2020, 12:38:03 GMT+2, hjadavall...@ukaachen.de
wrote:
Dear Mr. Strahil Nikolov,
Thank you once again!
I tried cleaning up the storage path but
Thank you for your reply.
I've tried setting the host to maintenance and the host rebooted immediately. What
does vdsm do when setting a host to maintenance? Thank you.
Best Regards
Mark Lee
> From: Strahil Nikolov via Users
> Date: 2020-10-27 23:44
> To: users; lifuqi...@sunyainfo.com
>
Strahil Nikolov via Users:
> Hello Gobinda, I know that Gluster can easily convert a distributed volume to
> a replica volume, so why is it not possible to first convert to replica and then
> add the nodes as HCI? Best Regards, Strahil Nikolov
> On Tuesday, 27 October 2020, 08:20:56 GMT+2,
The actual command is:
gluster volume set help | less
'Optimize for Virt' just applies some settings optimal for virtualization
tasks - you can do it on any type of volume.
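For example, to apply the whole virt group to a volume (the volume name is a placeholder):
gluster volume set data group virt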
It is up to you; the good thing with Gluster is that you can easily scale it
out, and this is not so easy with
You can change it via UI -> Hosts -> select new SPM host -> Management ->
Select as SPM
Best Regards,
Strahil Nikolov
On Wednesday, 28 October 2020, 19:46:14 GMT+2,
wrote:
I think I have a problem with a NIC of one host. This host is the SPM.
That's probably why the gluster is
You just need to get the bricks via:
gluster volume info engine
Then you need to go to each server and extend the mount point to at least 61GB.
Also, you need to mount it and delete all content inside.
Lastly:
/usr/sbin/ovirt-hosted-engine-cleanup
Best Regards,
Strahil Nikolov
Yes,
the replica volume size is the size of the smallest brick. If you have 3 hosts with
3 directories called /gluster_bricks/engine/engine, you need to extend every
block device that is used for mounting on /gluster_bricks/engine.
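As a rough illustration, on each host (the LV path and size are placeholders):
lvextend -L +10G /dev/gluster_vg/gluster_lv_engine
xfs_growfs /gluster_bricks/engine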
Best Regards,
Strahil Nikolov
On Friday, 6 November 2020,
Keep the export domain attached to only 1 environment at a time... it's way
safer. Usually each engine updates some metafiles on the storage domain,
and when both try to do it ... you get a bad situation there.
Attach it to 4.3, move the VMs, power them off, detach the storage domain - so you
Vinícius,
does your storage provide deduplication? If yes, then you can provision a new
thin-provisioned LUN and migrate the data from the old LUN to the new one.
Best Regards,
Strahil Nikolov
On Monday, 28 December 2020, 18:27:38 GMT+2, Vinícius Ferrão via
Users wrote:
Hi
Can you enable debug logs on the host hosting the Hosted Engine?
Details can be found on
https://www.ovirt.org/develop/developer-guide/vdsm/log-files.html
Merry Christmas to all!
Best Regards,
Strahil Nikolov
On Friday, 25 December 2020, 07:24:32 GMT+2, ozme...@hotmail.com
At 18:22 + on 25.12.2020 (Fri), Diggy Mc wrote:
> Is Oracle Linux a viable alternative for the oVirt project? It is,
> after all, a rebuild of RHEL like CentOS. If not viable, why not? I
> need to make some decisions posthaste about my pending oVirt 4.4
> deployments.
It should be, as
Any hints in the vdsm logs on the affected host or in the
broker.log/agent.log?
Happy Holidays to everyone!
Best Regards, Strahil Nikolov
At 14:33 +0200 on 25.12.2020 (Fri), Gilboa Davara wrote:
> Hello,
>
> Reinstall w/ redeploy produced the same results.
>
> - Gilboa
>
>
> On Thu, Dec 24, 2020
There is some issue with the DNS. Check that the A and PTR records are correct
for the Hosted Engine.
Best Regards,
Strahil Nikolov
On Monday, 28 December 2020, 22:15:14 GMT+2, lejeczek via Users
wrote:
hi chaps,
a newcomer here. I use cockpit to deploy hosted engine
I sent it unfinished.
Another one from Red Hat's Self-Hosted Engine Recommendations:
A storage domain dedicated to the Manager virtual machine is created during
self-hosted engine deployment. Do not use this storage domain for any other
virtual machines.
If it's a fresh deployment, it's
> 1. Right now we are using one SAN with 4 LUNs (each mapped into 1 specific
> volume) and configure a storage domain for each LUN (1 LUN = 1 Storage
> Domain). Is this configuration good? One more, about the Hosted Engine: when
> we set up the cluster, it provisions one storage
I'm not sure if the templates are automatically transferred, but it's worth
checking before detaching the storage.
Best Regards,
Strahil Nikolov
On Monday, 28 December 2020, 18:53:27 GMT+2, Diggy Mc
wrote:
Templates? Aren't the VMs' templates automatically copied to
I can't recall the exact issue that was reported in the mailing list, but I
remember that the user had to power off the engine and the VMs... the Devs can
clearly indicate the risks of running the HostedEngine with other VMs on the
same storage domain.
Based on Red Hat's RHV documentation the
Maybe there is a missing package that is preventing that.
Let's see what the devs will find out next year (thankfully you won't have to
wait much).
Best Regards,
Strahil Nikolov
On Wednesday, 30 December 2020, 16:30:37 GMT+2, Gilboa Davara
wrote:
Short update.
1. Ran
Are you uploading to 4.4 or to the old 4.3?
I'm asking as there should be an enhancement that makes a checksum on the
uploads in order to verify that the upload was successful.
Best Regards,
Strahil Nikolov
On Wednesday, 30 December 2020, 18:37:52 GMT+2, Jorge Visentini
wrote:
> Can I migrate storage domains, and thus all the VMs within that
> storage domain?
>
>
>
> Or will I need to build new cluster, with new storage domains, and
> migrate the VMs?
>
>
Actually you can create a new cluster and ensure that the Storage
domains are accessible by that new cluster.
> What is the best solution for making your VMs able to automatically
> boot up on another working host when something goes wrong (gluster
> problem, non responsive host etc)? Would you enable the Affinity
> Manager and enforce some policies or would you set the VMs you want
> as Highly
Are you using E1000 on the VMs or on the host?
If it's the latter, you should change the hardware.
I have never used e1000 for VMs as it is an old tech. Better to install the
virtio drivers and then use the virtio type of NIC.
Best Regards,
Strahil Nikolov
On Thursday, 31 December 2020
Hi Ilan,
Do you know how to use the config for disk_upload.py (or the --engine-url,
--username, --password-file or --cafile params) on 4.3.10, as I had to directly
edit the script?
Best Regards,
Strahil Nikolov
On Thursday, 31 December 2020, 00:07:06 GMT+2, Jorge Visentini
Hi All,
I noticed that Red Hat Gluster Storage Console is based on oVirt's web
interface.
Does anyone know how to deploy it on CentOS 8 for managing Gluster v8.X?
Best Regards,
Strahil Nikolov
> I wonder what other folks are using or if someone has any suggestions
> to offer.
I'm using Ansible to deploy some stuff from templates.
I think that Terraform is also used with oVirt, so you can give it a
try.
Best Regards,
Strahil Nikolov
It's about launching a VM, so just try to power it off and it should cancel the
launch.
Best Regards,
Strahil Nikolov
On Tuesday, 5 January 2021, 06:52:24 GMT+2, tommy
wrote:
Hi, everyone:
How do I cancel a long-running task in the oVirt web interface?
The task looks like it cannot be
At 16:09 + on 05.01.2021 (Tue), lejeczek via Users wrote:
> Hi guys,
>
> Is it supported and safe to transition with 4.4 to CentOS
> Stream, now that "Stream" is the only way to the future? Anyone
> know for certain?
Stream is not used by all Red Hat teams (yet), thus it might be a
little bit
At 10:41 -0400 on 05.01.2021 (Tue), Gervais de Montbrun wrote:
Thanks for the feedback. Are you using ansible to launch the vm from the
template, or to provision the template once it is up?
I was cloning VMs from a template, but as I'm still on oVirt 4.3 - I
cannot use this approach with EL8 (only
Have you tried to put the host into maintenance, remove it and then re-add it?
You can access all Red Hat solutions with their free developer subscription.
Best Regards,
Strahil Nikolov
On Wednesday, 6 January 2021, 13:17:42 GMT+2, Gary Lloyd
wrote:
Hi, please could someone
Sharon, what about an nginx/apache listening on different virtual hosts
and redirecting (actually proxying) to the correct portal?
Do you think that it could work (the certs will not be trusted, but
they can accept the exception)?
Best Regards, Strahil Nikolov
At 17:24 +0200 on 06.01.2021 (Wed),
What is the output of 'rpm -qa | grep vdo'?
Most probably the ansible flow is not deploying kvdo, but it's necessary at a
later stage. Try to work around it via "yum search kvdo" and then "yum install
kmod-kvdo" (replace kmod-kvdo with the package for EL8).
Also, I think that you can open a GitHub
Hi Bernardo,
I think that when CentOS Stream 9 (and all EL 9 clones) comes up, oVirt will
switch, so I think it's worth trying Stream (but no earlier than April).
Best Regards,
Strahil Nikolov
On Sunday, 10 January 2021, 09:32:01 GMT+2, Bernardo Juanicó
wrote:
Hello,
This sounds like a firewall issue.
Best Regards,
Strahil Nikolov
On Monday, 11 January 2021, 01:05:47 GMT+2, Matthew Stier
wrote:
I've added several new hosts to my data center, and instead of adding them to
my 'default' cluster, I created a new cluster ('two').
I can
At 02:29 + on 16.01.2021 (Sat), Ariez Ahito wrote:
> last Dec I installed the hosted engine; it seems to be working, we can
> migrate the engine to a different host. But we need to reinstall
> everything because of additional Gluster configuration.
> So we did install the hosted engine. But as per
> [root@medusa qemu]# virsh define /tmp/ns01.xml
> Please enter your authentication name: admin
> Please enter your password:
> Domain ns01 defined from /tmp/ns01.xml
>
> [root@medusa qemu]# virsh start /tmp/ns01.xml
> Please enter your authentication name: admin
> Please enter your password:
>
For anyone interested,
RH are extending the developer subscription for production use of up to 16
systems [1].
For me, it's completely enough to run my oVirt nodes on EL8.
[1] https://www.redhat.com/en/blog/new-year-new-red-hat-enterprise-linux-programs-easier-ways-access-rhel
Best Regards,
At 14:41 + on 23.01.2021 (Sat), Florian Schmid via Users wrote:
> Hi Strahil,
>
> thank you very much for the information.
>
> Now the question is, will oVirt stay 100 % compatible to RH?
It should, but it might have issues like we got with oVirt 4.4
(cluster compatibility 4.5) and CentOS
I think it's easier to get the VMware CA certificate and import it on
all hosts + engine and trust it. By default you should put it at
/etc/pki/ca-trust/source/anchors/ and then use "update-ca-trust" to
make all certs signed by the VMware vCenter's CA trusted.
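For example, assuming you have already saved the vCenter CA cert locally as vcenter-ca.pem:
cp vcenter-ca.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract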
Best Regards, Strahil Nikolov
On
I guess the name in the NIC can be used for that purpose.
Best Regards, Strahil Nikolov
and in order to assign an IP using cloud-init, the "In-guest Network Interface Name"
field should be filled, but how do I know that name in advance?
> Then I enable the quorum on the server side:
>
> [root@gluster1 ~]# gluster volume set all cluster.server-quorum-ratio
> 51%
> volume set: success
> [root@gluster1 ~]#
> [root@gluster1 ~]# gluster volume set volume1 cluster.server-quorum-
> type server
> volume set: success
> [root@gluster1
It worked for me, but my HE is 4.3.10.
Best Regards, Strahil Nikolov
At 16:39 + on 22.01.2021 (Fri), José Ferradeira via Users wrote:
> Hello,
>
> # su - postgres
> -bash-4.2$ source /opt/rh/rh-postgresql10/enable
> -bash: /opt/rh/rh-postgresql10/enable: No such file or directory
> -bash-4.2$
Try using 'source /opt/rh/rh-postgresql95/enable'.
Best Regards, Strahil Nikolov
At 19:53 + on 22.01.2021 (Fri), José Ferradeira via Users wrote:
> The postgres is older than 10:
> postgresql-jdbc-9.2.1002-6.el7_5.noarch
> postgresql-libs-9.2.23-3.el7_4.x86_64
>
> 2) When I run the HCI deployment... is there any way or means to scan
> the data volume and import the VMs it finds? Rebuilding the five or six
> VMs that would be worth keeping would take a few hours... My
> concern is what it would take when I have a few dozen... or... how
> many tid-bits of
> 2) If the above is always going to be an issue... can I make the default VM
> deployment use a dedicated Gluster volume (my case is a 1TB SSD with VDO
> in each server), such that a rebuild of the engine layer can just slurp back
> in the VMs from that volume.
It is highly recommended (well actually all devs will say
First of all,
verify the Gluster volume options (gluster volume info;
gluster volume status). When you use HCI, oVirt sets up a lot
of optimized options in order to get the maximum out of the Gluster
storage.
Best Regards, Strahil Nikolov
At 15:03 + on 25.01.2021 (Mon), Robert Tongue wrote:
>
. It is a test that we do not need anymore but we can't
remove. According to
[root@node03 ~]# iscsiadm -m session
tcp: [1] 10.100.200.20:3260,1 iqn.2005-10.org.freenas.ctl:ovirt-data
(non-flash)
it's attached, but something is still missing...
On 17/01/2021 11:45, Strahil Nikolov via Users wrote:
> W
1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph changes?
Thanks,
Shantur
On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users
wrote:
> В 15:51 + на 17.01.2021 (нд), Shantur Rathore написа:
>> H
I think that it's complaining about the firewall. Try to restore with
firewalld running.
Best Regards,
Strahil Nikolov
On Monday, 18 January 2021, 17:52:04 GMT+2, penguin pages
wrote:
Following a document to redeploy the engine...
Most probably the dwh timestamp is far in the future.
The following is not the correct procedure, but it works:
ssh root@engine
su - postgres
source /opt/rh/rh-postgresql10/enable
psql engine
engine=# select * from dwh_history_timekeeping ;
Best Regards,
Strahil Nikolov
On Monday, 18 January
Hm... this sounds bad. If it was deleted by oVirt, it would ask you whether to
remove the disk or not and would wipe the VM configuration.
Most probably you have data corruption there. Are you using TrueNAS?
Best Regards,
Strahil Nikolov
On Tuesday, 19 January 2021, 00:06:15
Can you share both the oVirt and Gluster logs?
Best Regards,
Strahil Nikolov
On Thursday, 14 January 2021, 20:18:03 GMT+2, Charles Lam
wrote:
Thank you Strahil. I have installed/updated:
dnf install --enablerepo="baseos" --enablerepo="appstream"
--enablerepo="extras"
Can you try to alter the share options like this (check for typos as I am
typing from memory): anonuid=36,anongid=36,all_squash
And of course export a fresh one:
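For illustration, an /etc/exports line could look like this (the path and network are
placeholders):
/exports/ovirt 192.168.1.0/24(rw,anonuid=36,anongid=36,all_squash)
exportfs -ra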
Best Regards, Strahil Nikolov
Sent from Yahoo Mail on Android
On Mon, Jan 25, 2021 at 0:51, Matt Snow wrote:
Yes it can.
Sent from Yahoo Mail on Android
Hi, I am new to oVirt and I would like to know if I could deploy oVirt and
be able to use it to deploy and manage Gluster storage.
Hi Shantur,
the main question is how many nodes you have.
Ceph integration is still in development/experimental, and it would be wise to
consider Gluster also. It has great integration and it's quite easy to work
with.
There are users reporting using Ceph with their oVirt, but I can't
What is the output of 'lsblk -t' on all nodes?
Best Regards,
Strahil Nikolov
At 11:19 +0100 on 17.01.2021 (Sun), Christian Reiss wrote:
> Hey folks,
>
> quick (I hope) question: On my 3-node cluster I am swapping out all
> the
> SSDs with fewer but higher capacity ones. So I took one
Hi,
can you share what procedure/steps you have implemented and when the issue
occurs?
Best Regards,
Strahil Nikolov
On Sunday, 17 January 2021, 10:40:57 GMT+2, Keith Forman via Users
wrote:
Hi
Need help with setting up oVirt.
engine-setup ran successfully on CentOS 7, with
t for CEPH.
> Regards
> Shantur
>
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> users@ovirt.org> wrote:
> > Hi Shantur,
> >
> >
> >
> > the main question is how many nodes you have.
> >
> > Ceph integration i
[root@ohost1 ~]# firewall-cmd --list-all public (active)
> target: default
> icmp-block-inversion: no
> interfaces: bond0 ovirtmgmt
> sources:
> services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-
> vmconsole snmp ssh vdsm
> ports: 22/tcp 6081/u
At 17:50 +0200 on 13.01.2021 (Wed), Andrei Verovski wrote:
> Hi,
>
>
> I’m currently adding new oVirt node to existing 4.4 setup.
> Which underlying OS version would you recommend for long-term
> deployment - CentOS 8 or Stream ?
Stream is not used by all RH teams, while CentOS 8 will be dead
As those are brand new,
try to install the Gluster v8 repo, update the nodes to 8.3, and
then rerun the deployment:
yum install centos-release-gluster8.noarch
yum update
Best Regards,
Strahil Nikolov
At 23:37 + on 13.01.2021 (Wed), Charles Lam wrote:
> Dear Friends:
>
> I am still stuck
>
> Questions:
> 1) I have two important VMs that have snapshots that I need to boot
> up. Is their a means with an HCI configuration to manually start the
> VMs without oVirt engine being up?
What worked for me was:
1) Start a VM via "virsh"
define a virsh alias:
alias virsh='virsh -c
Sadly my skills in that language are close to "0".
Can you share a screenshot with the English menu, or provide the steps which
you executed (menu after menu) to reach that state?
Best Regards,
Strahil Nikolov
On Monday, 4 January 2021, 12:48:55 GMT+2, tommy
wrote: