Hi,
first check that the brick is mounted. Then you can force-start the volume, which
will force the brick to be started.
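A minimal sketch of that sequence (the volume name 'data' and the brick path are hypothetical):

```shell
# Check that the brick's filesystem is mounted; mount it if not
grep -q /gluster_bricks/data /proc/mounts || mount /gluster_bricks/data

# Force-start the volume; this also starts any offline brick processes
gluster volume start data force

# Verify the brick is now online
gluster volume status data
```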
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Fri, Aug 20, 2021 at 22:13,
eev...@digitaldatatechs.com wrote: I have an
ovirt 4.3 on 3 Centos 7 with
You can 'clone' by using Gluster/oVirt's DR functionality (keep in mind that
the receiving volume is in read-only mode, so when the cut-over comes you have
to change it).
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Sat, Aug 21, 2021 at 12:36, David White via Users
I would try the latest 4.4 (I think it was 4.4.7).
Best Regards,
Strahil Nikolov
On Sun, Aug 15, 2021 at 16:46, Andrew Lamarra
wrote: Thank you, both, for the help!
I see that there's a version 4.3.10. Will that version not work?
Andrew
___
Users
The offline install mode should work in both 4.3 and 4.4. About CentOS Stream
... it is as it is. You can use any EL8 distro (for example RHEL8 with the new
developer subscription; the company can have up to 16 physical prod
systems). 4.3 is not supported, which means that it has no bug fixes,
Hi David,
how big are your VM disks?
I suppose you have several very large ones.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Thu, Aug 26, 2021 at 3:27, David White via Users wrote:
I have an HCI cluster running on Gluster storage. I exposed an NFS share into
oVirt
Don't mix THP with HP. THP is a mechanism of the kernel to "create" hugepages,
but it's inefficient. Disable THP on the VM. Also, if you have large VMs on the
host -> consider disabling it there too.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Thu, Aug 26, 2021 at 16:26,
I guess you need to try: all_squash + anonuid=36 + anongid=36
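For reference, a hedged sketch of an /etc/exports line with those options (the path and network are made up):

```
/exports/ovirt_storage  192.168.1.0/24(rw,all_squash,anonuid=36,anongid=36)
```

After editing, `exportfs -ra` reloads the exports.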
Best Regards,
Strahil Nikolov
On Fri, Aug 27, 2021 at 23:44, Alex K wrote:
I can't call it "resolved", but it's up to you.
I would look at gluster logs for clues.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
It's resolved, but I had to rebuild the cluster and lost some data.
I know that the GUI should work in most setups, as oVirt is the upstream of the
Red Hat Gluster Storage Console.
Despite RH being an open-source company, it's also trying to sell support
contracts and thus it's not an open-documentation company - they need to make
money after all.
I know that
I think that first you need to install the appliance and then in offline mode
it should skip connecting to yum repos.
Best Regards,
Strahil Nikolov
On Fri, Aug 13, 2021 at 19:56, Andrew Lamarra
wrote: Hi there. I'm trying to get oVirt up & running on a server in a
network that has no
This looks like a bug. It should have 'recovered' from the failure.
I'm not sure which logs would help identify the root cause.
Best Regards,
Strahil Nikolov
On Fri, Sep 3, 2021 at 16:45, Gianluca Cecchi
wrote: Hello,I was trying incremental backup with the provided
That's really odd. Maybe you can try to clone it and then experiment on the
clone itself. Once the reason is found, you can try with the original.
My first step would be to check all logs on the engine and the SPM for clues.
Best Regards,
Strahil Nikolov
On Fri, Sep 3, 2021 at 11:42, David
When setting up over the UI, the last step shows the ansible tasks.
Can you find your version of the task 'Set Gluster specific SeLinux
context on the bricks' and print it here?
Best Regards,
Strahil Nikolov
On Wed, Sep 8, 2021 at 12:43, dhanaraj.ramesh--- via Users
wrote: Hi Team
I'm trying to
Did you check
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7I3PQVERQZT6Q6CXDWJEWCY2ELEGRHY/
?
Best Regards,
Strahil Nikolov
On Wed, Sep 8, 2021 at 14:25, Staniforth,
Paul wrote:
Did you enable libgfapi? engine-config -s LibgfApiSupported=true
Note: power off and then power on the VM. The qemu process should not use the
'/rhev' mountpoints.
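A sketch of the whole sequence (run on the engine host; the grep check at the end is just an illustration on a hypervisor):

```shell
# Enable libgfapi support and restart the engine so it takes effect
engine-config -s LibgfApiSupported=true
systemctl restart ovirt-engine

# After a full power-off/power-on of the VM, its qemu process should
# reference gluster:// URIs instead of the '/rhev' fuse mountpoints:
ps aux | grep '[q]emu-kvm' | grep '/rhev'
```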
Also, share your current setup:
- disks
- hw controller
- did you storage-align your block devices (hw raid only)
- tuned profile
Can you provide the output from all nodes:
gluster pool list
gluster peer status
gluster volume status
Best Regards,
Strahil Nikolov
On Fri, Sep 10, 2021 at 0:50, marcel d'heureuse wrote:
in user benchmarks were impressive, and the underlying qemu/libvirt
bugs are now fixed or close to being so.
Guillaume Pavese
Systems & Network Engineer
Interactiv-Group
On Thu, Sep 9, 2021 at 7:00 PM Strahil Nikolov via Users
wrote:
Did you enable libgfapi? engine-config -s LibgfApiSupported=true
You need to specify the following variable: he_offline_deployment: true
Best Regards,
Strahil Nikolov
[ INFO ] TASK [ovirt.ovirt.engine_setup : Gather facts on installed
packages]
[ INFO ] ok: [localhost -> 192.168.1.248]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Fail when firewall manager is
As long as the old engine is completely offline, I think that you can import
them without issues.
Best Regards,
Strahil Nikolov
On Mon, Sep 6, 2021 at 17:12, mar...@deheureu.se wrote:
so oVirt is set up. The GlusterFS where the VMs are located is also in, but I
now have an illegal disk from a
I think that the following documentation explains it far better than a single
person can do:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/technical_reference/power_management
Best Regards,
Strahil Nikolov
On Sat, Sep 18, 2021 at 9:00, Tommy Sway wrote:
Have you tried the Windows fix (a.k.a. Engine restart)?
Best Regards,
Strahil Nikolov
On Fri, Sep 17, 2021 at 16:43, Andrea Chierici
wrote:
Values are based on the method you want to use to fence the node. What type of
server do you have?
Best Regards,
Strahil Nikolov
On Sat, Sep 18, 2021 at 15:31, Tommy Sway wrote:
I think that in the UI there is an option to edit. Find and replace
glusterd_brick_t with system_u:object_r:glusterd_brick_t:s0 and run again.
Best Regards,
Strahil Nikolov
On Thu, Sep 16, 2021 at 12:33,
bgrif...@affinityplus.org wrote: I had the same
issue with a new 3-node deploy on
It should be
system_u:object_r:glusterd_brick_t:s0
Best Regards,
Strahil Nikolov
I'm having this same issue on 4.4.8 with a fresh 3-node install as well.
Same errors as the OP.
Potentially relevant test command:
[root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t
If your Host is stale and holds the SPM role -> some tasks will fail. Also,
the VMs on such a host won't be recovered unless VM HA is enabled (with storage
lease).
For prod, I would set that up. Keep in mind that the fencing is issued by another
host in the cluster, so you need a minimum of 2 hosts.
I had similar issues where the battery of the motherboard was running out of
'juice' and the hardware clock was just going crazy. NTP couldn't fix it, so a
replacement of the battery was the only option.
Best Regards,
Strahil Nikolov
On Tue, Sep 7, 2021 at 7:26, dhanaraj.ramesh--- via
After a reboot of node1 -> bricks must always come up. Most probably VDO had to
recover for a longer period, blocking the bricks from coming up on time.
Investigate this issue before rebooting another host.
Best Regards,
Strahil Nikolov
Hi guys,
one strange thing happens, cannot understand it.
CPU passthrough and CPU pinning are important for CPU-latency-sensitive
workloads, like SAP HANA, or workloads that benefit from the extra (CPU
passthrough) instructions.
Best Regards,
Strahil Nikolov
On Mon, Aug 2, 2021 at 13:13, Patrick Lomakin
wrote: Have you done CPU passthrough in
Is this the built-in 'admin' user?
Best Regards,
Strahil Nikolov
On Mon, Aug 2, 2021 at 13:09, Nicolás wrote: Hi,
A while ago I posted a similar question and I couldn't get it solved. I
couldn't spend more time on this until now, so I'm trying again and
having the same error, which
Did you capture any info via tcpdump (on both the VM and the host)?
Best Regards,
Strahil Nikolov
On Mon, Aug 9, 2021 at 11:00, Andrea Chierici
wrote:
Corrupted metadata is the problem you see.
I think there was a command to fix it, but I can't recall it right now.
Best Regards,
Strahil Nikolov
On Sun, Aug 8, 2021 at 22:09, Gilboa Davara wrote:
I have never managed gluster via the UI, but theoretically it should work from
there.
Maybe there is a bug.
You can prepare the new bricks (don't forget to "mkfs.xfs -i size=512
/dev/vg/lv") manually, but it's more error-prone.
Let's see if someone else can confirm that this (UI) behavior is
If the VMs are on the replica 1 volume (and they should), converting to
'replica 3' will just copy the data between the nodes.
Check in the UI the volume type , brick count, etc.
I guess you will need to add the two hosts, which oVirt will automatically add
to the Gluster cluster. You can then
I think that your VMs are already using Gluster (replica 1, a.k.a. a
distributed volume).
You might just add the new host via the Engine UI, then through the UI you can
create the bricks and modify the volume.
Of course you can go via the CLI.
Best Regards,
Strahil Nikolov
On Mon, Aug 9, 2021
If I wished to resize a non-oVirt iSCSI LUN I would:
- resize the block paths pointing to the iSCSI LUN
- resize the multipathd device that was aggregated
I guess you can give it a try.
Best Regards,
Strahil Nikolov
On Monday, 9 August 2021 at 19:32:11 GMT+3, Shantur Rathore
Yep, that's the whole idea of the single-node setup. You have to remove the
nodes from the old Engine and then add them via the Engine UI, ansible or the API.
Best Regards,
Strahil Nikolov
On Fri, Aug 6, 2021 at 19:47, Mathieu Valois wrote: Hi
everyone,
is it possible to add nodes to a single
Have you tried with an empty password?
Best Regards,
Strahil Nikolov
I obtained the certificate from the link on the oVirt console main
page. The certificate has been saved to storage. I attempt to import the
certificate into a Firefox browser and get the following message:
Please
It's also valid for HA clustering (corosync/pacemaker). There are some HW
vendor implementations (HPE iLO, DELL iDRAC, etc.) where the ipmi request will
trigger a graceful shutdown (power button press), where the system could get
stuck indefinitely while still accessing the resources (for example
Usually this is not the problem.
Start checking:
1. Export FS is mounted
2. NFS server is running (after all this is a single node NFS setup)
3. Check that vdsmd , supervdsmd and sanlock are running
4. If needed, enable debug for the ovirt-ha-{agent,broker} as usually the
default log level won't
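The checklist above could be scripted roughly like this (the export path is an assumption):

```shell
# 1. Is the export FS mounted?
findmnt /exports/data || echo "export FS is not mounted"

# 2. Is the NFS server running (single-node NFS setup)?
systemctl is-active nfs-server

# 3. Are vdsmd, supervdsmd and sanlock running?
for svc in vdsmd supervdsmd sanlock; do
    systemctl is-active "$svc"
done
```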
Happy SysAdmin Day!
I want to thank Duck and wish to all fellow SysAdmins in the oVirt community a
Happy Holliday !
Best Regards,Strahil Nikolov___
Based on my experience, just set 1 thread : 1 core : as many sockets as you need.
You can still match the virtual sockets to NUMA nodes, but later it will become
harder, as most probably the Hypervisor won't be dedicated to the VM.
Best Regards,
Strahil Nikolov
On Fri, Jul 30, 2021 at 13:00, Milan
I'm not sure, but you can create your own vdsm hook that can alter the VM's xml
before powering up.
Best Regards,
Strahil Nikolov
On Fri, Jul 30, 2021 at 15:23, Merlin Timm wrote:
When the web UI reports OK (after that 1 sec), do you see a file with the same
uuid on that NFS storage?
Maybe you can give some stat output of the file.
Best Regards,
Strahil Nikolov
On Fri, Jul 30, 2021 at 16:15, Nur Imam Febrianto
wrote:
Hi,
Lastly in 4.4.6, I have a problem
You need to (on all Hypervisors that will be running this script):
- download the engine's CA from
https:///ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
- put it at /etc/pki/ca-trust/source/anchors/
- make it trusted by running: update-ca-trust extract
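A sketch of those steps as commands (engine.example.com is a placeholder for the engine FQDN):

```shell
# Download the engine CA certificate into the system trust anchors
curl -o /etc/pki/ca-trust/source/anchors/ovirt-engine-ca.pem \
  'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

# Rebuild the trust store so the CA becomes trusted
update-ca-trust extract
```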
Best
I guess it's pretty obvious that I miss my qwerty hardware keyboard :D
Best Regards,
Strahil Nikolov
On Fri, Jul 30, 2021 at 15:31, Strahil Nikolov via Users
wrote:
:33, Strahil Nikolov via Users
wrote:
First of all, you didn't 'mkfs.xfs -i size=512'. You just 'mkfs.xfs', which is
not good and could have caused your VM problems. Also, check with xfs_info the
isize of the FS.
You have to find the uuid of the disks of the affected VM. Then go to the
removed host, and find that file -> this is
When you use 'remove-brick replica 1', you need to specify the removed bricks
(the data brick and the arbiter). Something is missing in your
description.
Best Regards,
Strahil Nikolov
On Thu, Aug 5, 2021 at 7:33, Strahil Nikolov via Users
wrote
You won't be able to migrate the VM off the host, but it should work.
Best Regards,
Strahil Nikolov
On Wed, Aug 4, 2021 at 12:49, Tony Pearce wrote: I have
recently added a fresh installed host on 4.4, with 3 x nvidia gpu's which have
been passed through to a guest VM instance. This
If the system boots from SAN, you can just present the LUNs to the new
host. You might need to boot into the full initramfs (the bottom entry in grub)
and rebuild all initramfs images.
Best Regards,
Strahil Nikolov
On Wed, Aug 4, 2021 at 13:45, Yedidyah Bar David wrote:
As far as I know, RDMA is deprecated on GlusterFS, but it most probably works.
Best Regards,
Strahil Nikolov
On Thu, Aug 5, 2021 at 5:05, Vinícius Ferrão via Users
wrote: Hello,
Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?
The real issue is regarding
Hi David,
I hope you manage to recover the VM or most of the data. If you have multiple
disks in that VM (easily observable in the oVirt UI), you might need to repeat
that for the rest of the disks.
Check with xfs_info the inode size (isize), as the default used to be 256, but
I have noticed
What happens if you define a tmpfs and then create the qemu disk on top of that
ramdisk? Does qemu hang again?
Best Regards,
Strahil Nikolov
On Thu, Sep 23, 2021 at 18:25, Shantur Rathore
wrote:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-ipmi-ca
True or 1. If blank, then value is False. It is recommended that you enable
Lanplus to improve the security of your connection if your hardware supports it.
Most
When systems go 'crazy' there is no guarantee that they will be completely
unresponsive. HA VMs should be fine, but regular VMs won't be restarted as the
engine won't know if the host is dead or not (and no fencing is configured to
guarantee that).
Also, storage tasks could fail if that host is
>How do I reconfigure oVirt to use the 2nd replica as a secondary mount point?
Verify that your engine's volume is really OK.
gluster volume info engine
gluster volume status engine
gluster volume heal engine info summary
>I cannot migrate the engine off of c.
And if the engine is running on the
Hey Sandro,
do we know why this has been done?
Best Regards,
Strahil Nikolov
On Sun, Oct 10, 2021 at 16:48, Ax Olmos wrote:
The problem is that the ‘glusterd_brick_t’ file context is missing from
selinux-policy-targeted 3.14.3-80 on CentOS 8 Stream.
It exists in the CentOS 8.4 version:
gluster volume set help | grep shd
Most probably you want to change cluster.shd-max-threads, if your hardware can
support it.
Best Regards,
Strahil Nikolov
On Sun, Oct 10, 2021 at 2:26, David White via Users wrote:
I can't remember if I've asked this already, or if someone else has brought
Actually it seems that glusterfs-selinux should fix the problem.
Best Regards,
Strahil Nikolov
On Mon, Oct 11, 2021 at 0:28, Strahil Nikolov via Users
wrote:
Try with curl -u 'admin@internal:pass' ...
Best Regards,
Strahil Nikolov
On Thu, Oct 21, 2021 at 2:17, David White via Users wrote:
Do you use a proxy?
Best Regards,
Strahil Nikolov
On Thu, Oct 21, 2021 at 5:07, Raj P wrote: Hi
Sandro,
I'm not sure where the problem is; I am on a 1-gig link with 500 Mbps
upload/download on average.
I think it's just happening with the repos, and it's a different error every time.
Have you tried
https://www.ovirt.org/documentation/virtual_machine_management_guide/#Adding_TPM_devices
?
Best Regards,
Strahil Nikolov
On Thu, Oct 21, 2021 at 19:21, bob.franzke--- via Users
wrote: Need to deploy a VM of a Windows 11 guest. Windows 11 requires TPM 2.0
support for it to
Try unlock_entity.sh with '-t all -r'
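On the engine host that would look roughly like this (the dbutils path is where the script is normally installed):

```shell
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -r
```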
Best Regards,
Strahil Nikolov
On Sat, Oct 16, 2021 at 13:43, David White via Users wrote:
It was later discovered that the selinux policy was removed from the selinux
packages. You will need glusterfs-selinux, which should be available in the
latest version of oVirt.
Best Regards,
Strahil Nikolov
On Wed, Oct 20, 2021 at 3:17,
ad...@foundryserver.com wrote: I have the same
Shutdown needs integration and cooperation from the VM OS. Did you install the
qemu-guest-agent, and is it working properly?
Best Regards,
Strahil Nikolov
On Fri, Oct 15, 2021 at 4:12,
zhou...@vip.friendtimes.net wrote:
For the score issue you can check
https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf
and then identify your problem and fix it.
For the he_local, you can use hosted-engine
page memory on virtual machines?
Which one is preferred?
From: users-boun...@ovirt.org On Behalf Of Strahil
Nikolov via Users
Sent: Tuesday, September 28, 2021 12:05 AM
To: tommy ; 'users'
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
https://docs.oracle.com/en/database
Yes, you can use it with 4 nodes.
You have to check what caused the crash before starting over or losing the
logs.
Best Regards,
Strahil Nikolov
On Tuesday, 28 September 2021 at 09:56:30 GMT+3,
wrote:
I have 4 servers of identical hardware. The documentation says "you
Behalf Of Strahil
Nikolov via Users
Sent: Tuesday, September 28, 2021 3:39 PM
To: 'users' ; Tommy Sway
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
I think that if you run VMs with databases, you must disable transparent huge
pages on the Hypervisor level and on the VM level. Yet, if you wi
ince; if it is not, you may end up with issues while starting the guest VMs.
>
> I really don't know what to do now.
>
>
>
>
>
> -Original Message-
> From: users-boun...@ovirt.org On Behalf Of Strahil
> Nikolov via Users
> Sent: Tuesday, September 28, 2021 3:39 P
Tinkering with timeouts could be risky, so in case you can't have a second
switch - your solution (shutting down all VMs, maintenance, etc.) should be the
safest.
If possible, test it on a cluster of VMs, so you get used to the whole procedure.
Best Regards,
Strahil Nikolov
On Wed, Sep 29,
I think you are looking for certmonger, but it will require some manual steps:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/certmonger
Best Regards,
Strahil Nikolov
On Thu, Sep 30, 2021 at 10:17, Tommy Sway wrote:
As you
een determined, I am also very confused.
-Original Message-
From: users-boun...@ovirt.org On Behalf Of Strahil
Nikolov via Users
Sent: Wednesday, September 29, 2021 8:50 PM
To: 'users' ; Tommy Sway
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
I got a 3 TB host (physi
I was thinking the same. Would you open a feature request at bugzilla.redhat.com?
I know that certmonger can automatically renew all certs via an external CA, so
that would be a great feature.
Best Regards,
Strahil Nikolov
On Fri, Oct 1, 2021 at 7:41, tommy sway wrote:
Put ovnode2 in maintenance (tick the option for stopping gluster), wait till all
VMs evacuate and the host is really in maintenance, and activate it back.
Restarting glusterd should also do the trick, but it's always better to
ensure no gluster processes have been left running (including the mount
Actually yes - it should speed up the process. I've edited that draft too many
times :)
On Fri, Oct 1, 2021 at 7:32, tommy sway wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy
--
From: Strahil Nikolov
Date: Friday, 1 October 2021, 22:52
To: tommy sway, Strahil Nikolov via Users
Subject: Re: [ovirt-users] Re: Re: Re: About the vm memory limit
Actually yes - it should speed up the process. I've edited that draft too many
times :)
On Fri, Oct 1, 2021 at 7:32, tommy sway
In the cockpit installer, the last step allows you to edit the ansible before
running it. Just search for glusterd_brick_t and replace it.
Best Regards,
Strahil Nikolov
On Fri, Oct 1, 2021 at 17:48, Woo Hsutung wrote: Same
issue happens when I deploy on single node.
And I can’t find where I can
It is possible, but without the SPM host being fenced you won't be able to do
any storage-related tasks. Even snapshot management will be impossible
without manual intervention (reboot the host from the remote management and then
mark the host as restarted).
Best Regards,
Strahil Nikolov
On
Also, you can edit the /etc/fstab entries by adding in the mount options:
context="system_u:object_r:glusterd_brick_t:s0"
Then remount the bricks (umount; mount). This tells the kernel to
skip selinux lookups and assume everything on the mount has the gluster brick
context, which will reduce the I/O.
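An example /etc/fstab entry with that option (device and mount point are made up):

```
/dev/gluster_vg/data_lv  /gluster_bricks/data  xfs  defaults,context="system_u:object_r:glusterd_brick_t:s0"  0 0
```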
Best
Most probably it's in a variable.
Just run the following:
semanage fcontext -a -t glusterd_brick_t "/gluster_bricks(/.*)?"
restorecon -RFvv /gluster_bricks/
Best Regards,
Strahil Nikolov
On Sat, Oct 2, 2021 at 3:08, Woo Hsutung wrote:
Strahil,
Thanks for your
I just checked the module source and it should be working with
'glusterd_brick_t'.
Do you have glusterfs-server installed on all nodes?
Best Regards,
Strahil Nikolov
On Sat, Oct 2, 2021 at 23:13, Strahil Nikolov via Users
wrote:
For 4.3 - yes. This is due to the issues in 4.3. Once you move to 4.4, you
should be able to enable HugePages (not THP) on the Hypervisors without
trouble.
I assume that your workload is 'in production' and stability (no downtime) is
more important than some performance gain from HugePages on
vdsm supports dynamic allocation of Huge Pages, but
https://access.redhat.com/solutions/4904441 indicates the issues in 4.3 related
to hugepages (you can see the solution via an RH dev subscription). If you
needed to set them manually and everything works normally, I would have picked
something
Don't you have a task just like
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/mount.yml#L64-L70
?
Best Regards,
Strahil Nikolov
On Sat, Oct 2, 2021 at 23:00, Woo Hsutung wrote:
HugePages filter -> expects preallocated Huge Pages (not THP).
So either use dynamic allocation with the filter disabled or, on the contrary ->
a fixed amount of pages.
Best Regards,
Strahil Nikolov
On Sat, Oct 2, 2021 at 14:25, Tommy Sway wrote:
If you are eligible for the developer subscription, you can subscribe at:
https://developers.redhat.com/register
P.S.: It now includes up to 16 RHEL machines for production usage.
Best Regards,
Strahil Nikolov
On Sun, Oct 3, 2021 at 12:57, Tommy Sway wrote:
Original message
From: Strahil Nikolov
Date: Sunday, 3 October 2021, 17:05
To: sz_cui...@163.com, Strahil Nikolov via Users
Cc: Simon Coter
Subject: Re: [ovirt-users] Re: Re: Re: Re: About the vm memory limit
For 4.3 - yes. This is due to the issues in 4.3. Once you move to 4.4, you
should be able to enable
I would check the api guide at https://ovirt.somedomain/ovirt-engine/apidoc/#/
Best Regards,
Strahil Nikolov
Hello, please, how do I increase/extend/resize a disk of a VM?
I can work with ansible or the REST API.
The Ansible code is here, but I have not found a manual for updating the size:
Admin Portal -> Storage -> Disks -> Select Disk -> upper right corner -> Move
-> follow the wizard
Best Regards,
Strahil Nikolov
On Sunday, 26 September 2021 at 14:06:23 GMT+3, Tommy Sway
wrote:
From the document:
Overview of Live Storage Migration
Virtual disks can be
Just take a CentOS DVD, select troubleshooting, and then once you drop to a
shell -> you can mount /proc, /sys, /dev & /run with the bind option and
chroot.
Then just follow the procedure for your boot type (EFI vs Legacy) and recover
the missing files.
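A rough sketch of those rescue-shell steps (the root device name is an assumption):

```shell
# From the DVD's troubleshooting shell: mount the root FS, bind the virtual FSs, chroot
mount /dev/mapper/vg_root-lv_root /mnt/sysimage
for fs in proc sys dev run; do
    mount --bind "/$fs" "/mnt/sysimage/$fs"
done
chroot /mnt/sysimage
# ...then reinstall the bootloader / rebuild initramfs per your boot type (EFI vs Legacy)
```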
Best Regards,
Strahil Nikolov
В
According to
https://portal.nutanix.com/page/documents/kbs/details?targetId=kA060008T3jCAE
:
'The lanplus parameter is required to enable the IPMI 2.0 RMCP+ protocol'
According to
https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface 'RMCP+
(a UDP-based protocol with
Transparent huge pages are enabled by default, so you need to disable them.
I would use huge pages on both host and VM, but theoretically it shouldn't be a
problem running a VM with enabled HugePages without configuring on Host.
Best Regards,
Strahil Nikolov
On Saturday, 25 September 2021,
Sadly, libgfapi has some limits and some of them are like your case.
If it's a linux VM, you can create new disk from the Gluster Storage and then
attach and clone/pvmove it from within the guest.
Another approach is to disable libgfapi, shut down the VM, power on the VM,
migrate the disk and
I can't recall - it was discussed here in the list and some users had troubles.
I hope some of the devs can chime in.
Best Regards,
Strahil Nikolov
On Sunday, 26 September 2021 at 07:04:29 GMT+3, Tommy Sway
wrote:
In fact, I am very interested in the part you mentioned,
https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/disabling-transparent-hugepages.html
https://access.redhat.com/solutions/1320153 (requires an RH dev subscription or
another type of subscription) -> In short, add 'transparent_hugepage=never' to
the kernel params
SLES11/12/15 ->
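On an EL system this could be applied roughly like so:

```shell
# Persist across reboots
grubby --update-kernel=ALL --args="transparent_hugepage=never"

# Disable immediately, until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Verify: the active value is shown in brackets
cat /sys/kernel/mm/transparent_hugepage/enabled
```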
The migration from 4.3 to 4.4 is done by the same method... It always worked,
but for a major version upgrade there is a strict requirement on which version
the backup was done with.
Even if it doesn't work - a fresh engine can still recover the VMs by importing
the storage domain (no VM snapshots are
/etc/glusterfs & /var/lib/glusterd
Best Regards,
Strahil Nikolov
On Tue, Dec 28, 2021 at 5:09, dhanaraj.ramesh--- via Users
wrote: Hi Team
As part of a Hyperconverged setup, which Gluster config files must be
backed up on a daily basis to restore in case of disaster?
Well, you can shut down the engine (don't forget global maintenance), yum
update, reboot -> the host is updated.
Then wait for ovirt-ha-agent & ovirt-ha-broker to bring the engine up. Global
maintenance, wipe the storage (assuming NFS/GlusterFS is used) or at least
rename the subdirectory to
When you mount it, did you try to:
chown 36:36 /mnt/b4tsz001
chmod 755 /mnt/b4tsz001
Best Regards,
Strahil Nikolov
On Wed, Dec 22, 2021 at 16:12, Andi Nør Christiansen
wrote:
Hi,
Does anyone know how to mount a spectrum scale filesystem as a storage domain
using the POSIX