You need to provide more details like storage type, any errors indicated,
whether something changed recently, etc.
Best Regards,
Strahil Nikolov
On 22 July 2020 at 13:15:06 GMT+03:00, Tarun Kushwaha wrote:
>My ovirt hosted engine storage got in locked state.
>Now I am unable to perform any
Do you have NICs that support iSCSI? I guess you could use hardware offloading.
What is your MTU size?
Latency is usually the killer of any performance; what is your round-trip time?
Best Regards,
Strahil Nikolov
On 21 July 2020 at 2:37:10 GMT+03:00, Philip Brown wrote:
>AH! my apologies. It
Just copy/paste it in a browser.
On 20 July 2020 at 17:00:01 GMT+03:00, lu.alfo...@almaviva.it wrote:
>Hello,
>
>the link is not available
On 19 July 2020 at 21:09:14 GMT+03:00, David White via Users wrote:
>Thank you.
>So to make sure I understand what you're saying, it sounds like if I
>need 4 nodes (or more), I should NOT do a "hyperconverged"
>installation, but should instead prepare Gluster separately from the
>oVirt Manager
There is a bug already opened for that behaviour:
https://bugzilla.redhat.com/show_bug.cgi?id=1858234
Best Regards,
Strahil Nikolov
On 19 July 2020 at 13:26:01 GMT+03:00, erin.s...@bookit.com wrote:
>Hi Guys we attempted to deploy a new ovirt cluster two weeks ago. 4.4.1
>and 4.4.0 Once we
>gluster hyperconverged.
>> > Is it possible to run oVirt and Gluster together on the same
>hardware? So 3 physical hosts would run CentOS or something, and I
>would install oVirt Node + Gluster onto the same base host OS? If so,
>then I could probably make t
Hm... then you need to play on a TEST oVirt with the options described in
https://www.ovirt.org/develop/developer-guide/engine/engine-config-options.html
Some of the more interesting options are:
SSHInactivityTimoutSeconds
TimeoutToResetVdsInSeconds
VDSAttemptsToResetCount
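For example, a rough sketch of checking and changing one of them on the engine
machine (the value 120 is just a hypothetical example):
engine-config -g TimeoutToResetVdsInSeconds       # show the current value
engine-config -s TimeoutToResetVdsInSeconds=120   # set a new value
systemctl restart ovirt-engine                    # restart so the engine picks it up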
Can you provide the target's facts in the bug report?
Best Regards,
Strahil Nikolov
On 17 July 2020 at 14:48:39 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <
>dominique.desche...@gcgenicom.com> wrote:
>
>> Hi,
>>
>> I use ovirt ISO file
It's definitely not a name resolution issue.
Have you made changes to sshd_config on sia-svr-ct02?
Is root login enabled?
Best Regards,
Strahil Nikolov
On 17 July 2020 at 13:58:09 GMT+03:00, lu.alfo...@almaviva.it wrote:
>This is the output from the engine:
>
>[root@dacs-ovirt ~]# host sia-svr-ct02
Hm...
Setting that variable to python3 should work, but based on the list reports,
it doesn't.
Best Regards,
Strahil Nikolov
On 17 July 2020 at 12:35:52 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 11:25 AM Gianluca Cecchi
>
>wrote:
>
>> On Fri, Jul 17, 2020 at
What is the output of:
host sia-svr-ct02
nslookup sia-svr-ct02
Best Regards,
Strahil Nikolov
On 17 July 2020 at 10:46:08 GMT+03:00, lu.alfo...@almaviva.it wrote:
>2020-07-15 11:41:58,968+02 ERROR
>[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
What version of CentOS 8 are you using -> Stream or regular, and which release?
Best Regards,
Strahil Nikolov
On 16 July 2020 at 21:07:57 GMT+03:00, "Dominique Deschênes" wrote:
>
>
>HI,
>Thank you for your answers
>
>I tried to replace the "package" with "dnf". the installation of the
>gluster seems
Can you share your /etc/hosts?
As far as I remember there was an entry like:
127.0.1.2 hostname
So you have to comment it out.
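Something like this should do it (a sketch; adjust if your entry differs):
sed -i 's/^127.0.1.2/#127.0.1.2/' /etc/hosts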
Best Regards,
Strahil Nikolov
On 16 July 2020 at 16:53:36 GMT+03:00, Florian Schmid via Users wrote:
>Hi,
>
>I have a problem with Ubuntu 20.04 VM reporting the
Have you tried to replace 'package' with 'dnf' in
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (somewhere
around line 33).
Best Regards,
Strahil Nikolov
On 16 July 2020 at 16:30:04 GMT+03:00, dominique.desche...@gcgenicom.com wrote:
>I also have this message with the
What do you see in the engine's logs?
Best Regards,
Strahil Nikolov
On 16 July 2020 at 13:24:03 GMT+03:00, lu.alfo...@almaviva.it wrote:
>i attach the hosts info :
>
>Software
>OS Version:
>RHEL - 7 - 8.2003.0.el7.centos
>OS Description:
>CentOS Linux 7 (Core)
>Kernel Version:
>3.10.0 -
You first add the node (assign Datacenter and Cluster).
Then you define the storage volume and give details and options (for example
an option like 'backup-volfile-servers=host2:host3').
Then during the Host activation phase all storage domains are mounted on the
host, and during maintenance ->
On 16 July 2020 at 0:41:22 GMT+03:00, Philip Brown wrote:
>Hmm...
>
>
>Are you then saying, that YES, all host nodes need to be able to talk
>to the glusterfs filesystem?
>
No, but it sounded like you need that.
You can have 'replica 3' and 2 hosts that won't have the gluster server running (still
I guess your only option is to edit
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and
replace 'package' with 'dnf' (keep the indentation 2 "spaces" deeper than
'- name' -> just where "package" starts).
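A rough sketch of how the edited task could look (this is not the role's exact
task, just an illustration of the module swap and the indentation; the package
list variable is hypothetical):
- name: Install the backend packages
  dnf:
    name: "{{ backend_packages }}"
    state: present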
Best Regards,
Strahil Nikolov
On 15 July 2020 at 22:39:09
You can use a distributed replicated volume of type 'replica 3 arbiter 1'.
For example, NodeA and NodeB contain replica set 1 with NodeC as their
arbiter, and NodeD and NodeE form replica set 2 with NodeC as their
arbiter as well.
In such a case you get only 2 copies of a single
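A hedged sketch of creating such a layout (volume name and brick paths are
hypothetical):
gluster volume create datavol replica 3 arbiter 1 \
  NodeA:/gluster/brick1 NodeB:/gluster/brick1 NodeC:/gluster/arbiter1 \
  NodeD:/gluster/brick2 NodeE:/gluster/brick2 NodeC:/gluster/arbiter2
Every third brick becomes the arbiter of its replica set, so NodeC holds only
metadata for both sets.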
Also, check the LV size on the system, as it seems, based on your previous
outputs, that the PV names do not match.
You might have now a very large HostedEngine LV which will be a waste of space.
Best Regards,
Strahil Nikolov
On 15 July 2020 at 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank
Based on
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/main.yml
the module used is 'package', but the strange thing is why ansible doesn't
detect python3 and dnf.
As far as I remember, you can edit the play before running it.
Maybe this will
Hm... Clearly it can't get the necessary info.
Have you tried to do a full cleanup via 'ovirt-hosted-engine-cleanup' (the name
is based on my vague memories from 2017) and then wipe all data on the
storage domain?
Best Regards,
Strahil Nikolov
On 14 July 2020 at 23:13:33 GMT+03:00, Andy
On 14 July 2020 at 16:32:42 GMT+03:00, clam2...@gmail.com wrote:
>Output of pvdisplay for each of three hosts below.
> --- Physical volume ---
> PV Name /dev/mapper/vdo_nvme0n1
> VG Name gluster_vg_nvme0n1
> PV Size 100.00 GiB / not usable 4.00 MiB
Hi,
some of the components in oVirt 4.3 rely on SELinux being enabled.
Most probably it is the same in 4.4, so please try with SELinux in enforcing
mode.
Best Regards,
Strahil Nikolov
On 14 July 2020 at 3:43:35 GMT+03:00, Andy via Users wrote:
>I just tried another fresh install with oVIRT
Yeah,
but the idea is for the RH documentation to catch up with oVirt's
documentation, not the opposite.
Best Regards,
Strahil Nikolov
On 14 July 2020 at 0:26:11 GMT+03:00, Jayme wrote:
>Personally I find the rhev documentation much more complete:
What are the contents of your dnf.conf?
Best Regards,
Strahil Nikolov
On 13 July 2020 at 17:26:42 GMT+03:00, Markus Schaufler wrote:
>Hi all,
>in our environment, CentOS7 and RHEL7 for the most part (Ovirt 4.3 and
>RHV 4.3), we are able to set a proxy directly in /etc/yum.conf (or on
>the
I'm left with the impression that we are talking about SSO from oVirt.
Yet, the author has to clarify.
Best Regards,
Strahil Nikolov
On 12 July 2020 at 17:39:31 GMT+03:00, Wesley Stewart wrote:
>Are you asking for troubleshooting on getting windows RDP working in a
>windows 10 guest?
>
If you have access to the HE, can you check the rpm status (rpm -Va) for
issues?
Configuration files could be changed, but libraries/binaries should not be.
What is the output of hosted-engine --vm-status? I had a similar issue and it
was an addon in my browser (as I used a profile, the situation was
You can use 'hosted-engine' to access the VM over VNC.
The usual advice would be to redeploy the engine and restore from backup.
You won't lose your VMs and the restore will be fast.
Powering VMs manually is a tricky part. You can find each VM's configuration
file in the vdsm log on the host where
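For the VNC part, a sketch (the password is temporary and the port may differ):
hosted-engine --add-console-password   # prompts for a temporary console password
# then connect a VNC client to the host running the HostedEngine VM,
# typically on display port 5900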
On 5 July 2020 at 11:31:32 GMT+03:00, Erez Zarum wrote:
>We are using Dell SC (Storage) with iSCSI with oVirt, it is impossible
>to create a new Target Portal with a specific LUN so it's impossible to
>isolate the SE LUN from other LUNs that are in use by other Storage
>Domains.
>According to
On 3 July 2020 at 11:30:58 GMT+03:00, Andrei Verovski wrote:
>Hi !
>
>I have 2-node oVirt 4.3 installation, with engine running as KVM guest
>on SuSE file server (not hosted engine).
>Nodes are manually installed on CentOS 7.x (further referred as old
>node #1 and #2).
>
>I’m going to add 1
Yes, ovirt-ha-broker and ovirt-ha-agent take care of keeping the HostedEngine
up and running and, in case something goes bad, of migrating it away.
Best Regards,
Strahil Nikolov
On 3 July 2020 at 7:53:14 GMT+03:00, Anton Louw via Users wrote:
>Hi Everybody,
>
>Thanks for all the responses. So I
Usually the vdsm-network-init and vdsm-network services take care of the
network configuration.
The easiest way to resolve is to set the host in maintenance and then use
'reinstall' from the UI.
Best Regards,
Strahil Nikolov
On 2 July 2020 at 14:50:14 GMT+03:00, Erez Zarum wrote:
>Fresh
I would recommend that you try to logrotate it.
I had a similar issue with a corrupted logrotate state file, which led to the
vdsm log growing to 20GB.
You can also use 'truncate -s 0 your.log' to wipe a log without removing it.
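For example (assuming vdsm's standard logrotate config path):
logrotate -vf /etc/logrotate.d/vdsm   # force an immediate rotation
truncate -s 0 /var/log/vdsm/vdsm.log  # or wipe the log in place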
Best Regards,
Strahil Nikolov
On 2 July 2020 at 8:00:27 GMT+03:00, Anton Louw via Users
Have you checked the qemu log on the host that was running the VMs?
What was done recently?
Any reason SELinux is disabled?
Best Regards,
Strahil Nikolov
On 30 June 2020 at 18:08:09 GMT+03:00, Antoine Nguyen wrote:
>Hello,
>
>Thanks for your interest and time.
>Here is the engine log:
On 29 June 2020 at 4:14:33 GMT+03:00, jury cat wrote:
>If i destroy the brick, i might upgrade to ovirt 4.4 and Centos 8.2.
>Do you think upgrade to ovirt 4.4 with glusterfs improves performance
>or i am better with NFS ?
Actually only you can find out as we cannot know the workload of your
oVirt is using the default shard size of 64MB and I don't think this is
'small file' at all.
There are a lot of tunables to optimize Gluster and I admit it's not an
easy task.
Deadline is good for databases, but with SSDs you should try the
performance of enabled multiqueue and
Hello,
Let me ask some questions:
1. What is the scheduler for your PV?
2. Have you aligned your PV during setup ('pvcreate --dataalignment
alignment_value device')?
3. What is your tuned profile? Do you use rhgs-random-io from the
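Commands to check these (sdX is a placeholder for your device):
cat /sys/block/sdX/queue/scheduler   # current I/O scheduler, active one in brackets
pvs -o +pe_start                     # first-PE offset shows the PV alignment
tuned-adm active                     # currently active tuned profile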
Can you set one of the hypervisors into maintenance and use the "reinstall"
option from the UI?
Best Regards,
Strahil Nikolov
On 25 June 2020 at 13:24:26 GMT+03:00, Erez Zarum wrote:
>I have a Self-hosted Engine running on iSCSI as well as couple of
>Storage domains using iSCSI, both the SE
As you will migrate from block-based storage to file-based storage, I think
that you should use the backup & restore procedure.
Best Regards,
Strahil Nikolov
On 25 June 2020 at 7:31:55 GMT+03:00, Erez Zarum wrote:
>I was looking for a “complete” best practice to migrate a self-hosted
>engine
Most probably the host's ICMP echo requests to the gateway get lost. This
leads to enough penalty, so your engine is moved away from the host.
Which 'penalty' did you disable to stabilize your environment?
Best Regards,
Strahil Nikolov
On 27 June 2020 at 18:19:58 GMT+03:00,
What repos do you have enabled?
It seems you have a repo conflict.
Best Regards,
Strahil Nikolov
On 26 June 2020 at 18:30:31 GMT+03:00, eev...@digitaldatatechs.com wrote:
>I do not have a self hosted engine and did yum update which updated
>these files:
>Updated:
> microcode_ctl.x86_64
What is the status of the host?
Usually a VM goes stale because the engine cannot reach the VDSM on the
hypervisor.
Best Regards,
Strahil Nikolov
On 26 June 2020 at 22:23:15 GMT+03:00, pas...@butterflyit.com wrote:
>Currently from the ovirt web interface it is not possible to suspend a
Did you reinstall the node via the web UI?
Best Regards,
Strahil Nikolov
On 25 June 2020 at 3:23:15 GMT+03:00, "Vinícius Ferrão via Users" wrote:
>Hello,
>
>For reasons unknown one of my hosts is trying to mount an old storage
>point that’s been removed some time ago.
>
As far as I know, oVirt 4.4 uses gluster v7.X, so you will eventually have
to upgrade the version.
As I mentioned, I created my new volume while I was running a higher
version and copied the data to it, which prevented the ACL bug from hitting me
again.
I can recommend you to:
1.
As I told you, you could just downgrade gluster on all nodes and later plan to
live migrate the VM disks.
I had to copy my data to the new volume so I could avoid the ACL bug when I
use newer versions of gluster.
Let's clarify some details:
1. Which version of oVirt and Gluster are you using
l.
>
>‐‐‐ Original Message ‐‐‐
>On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users
> wrote:
>
>> On 22 June 2020 at 11:06:16 GMT+03:00, David White via
>usersus...@ovirt.org wrote:
>>
>
>> > Thank you and Strahil for your responses.
>
You should ensure that in the storage domain tab, the old storage is not
visible.
I still wonder why you didn't try to downgrade first.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
>Strahil,
>
>The GLCL3 storage domain was detached prior to
It's the client's browser settings, but I think it's easier to either change
the certificate to something that will be trusted, or to just import it.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 11:29:20 GMT+03:00, Anton Louw via Users wrote:
>Hi All,
>
>So I managed to get the
On 22 June 2020 at 11:06:16 GMT+03:00, David White via Users wrote:
>Thank you and Strahil for your responses.
>They were both very helpful.
>
>> I think a hosted engine installation VM wants 16GB RAM configured
>though I've built older versions with 8GB RAM.
>> For modern VMs CentOS8 x86_64
You can't add the new volume as it contains the same data (UUID) as the old
one, thus you need to detach the old one before adding the new one - of course
this means downtime for all VMs on that storage.
As you see, downgrading is simpler. For me v6.5 was working, while
anything above
You are definitely reading it wrong.
1. I didn't create a new storage domain on top of this new volume.
2. I used the cli
Something like this (in your case it should be 'replica 3'):
gluster volume create newvol replica 3 arbiter 1 ovirt1:/new/brick/path
ovirt2:/new/brick/path
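The quoted command is cut off; since 'replica 3 arbiter 1' needs a brick count
that is a multiple of 3, the complete form would look roughly like this (the
third host and the arbiter path are hypothetical):
gluster volume create newvol replica 3 arbiter 1 \
  ovirt1:/new/brick/path ovirt2:/new/brick/path ovirt3:/new/arbiter/path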
On 21 June 2020 at 23:26:32 GMT+03:00, David White via Users wrote:
>I'm reading through all of the documentation at
>https://ovirt.org/documentation/, and am a bit overwhelmed with all of
>the different options for installing oVirt.
>
>My particular use case is that I'm looking for a way to
In my situation I had only the ovirt nodes.
On 21 June 2020 at 22:43:04 GMT+03:00, C Williams wrote:
>Strahil,
>
>So should I make the target volume on 3 bricks which do not have ovirt
>--
>just gluster ? In other words (3) Centos 7 hosts ?
>
>Thank You For Your Help !
>
>On Sun, Jun 21, 2020
I created a fresh volume (which is not an oVirt storage domain), set the
original storage domain in maintenance and detached it.
Then I used 'cp -a' to copy the data from the old to the new volume. Next, I just
added the new storage domain (the old one was a kind of 'backup') -
Hello Sahina, Sandro,
I have noticed that the ACL issue with Gluster
(https://github.com/gluster/glusterfs/issues/876) is happening to multiple
oVirt users (so far at least 5) and I think that this issue needs greater
attention.
Did anyone from the RHHI team manage to reproduce the bug
Sorry to hear that.
I can say that for me 6.5 was working, while 6.6 didn't, and I upgraded to
7.0.
In the end, I ended up creating a fresh new volume and physically
copying the data there, then I detached the storage domains and attached the
new ones (which held the
Hi,
This one really looks like the ACL bug I was hit with when I updated from
Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
Did you update your setup recently? Did you upgrade gluster also?
You have to check the gluster logs in order to verify that, so you can try:
1. Set Gluster logs to
Hey C Williams,
sorry for the delay, but I couldn't get some time to check your logs. Will
try a little bit later.
Best Regards,
Strahil Nikolov
On 20 June 2020 at 2:37:22 GMT+03:00, C Williams wrote:
>Hello,
>
>Was wanting to follow up on this issue. Users are impacted.
>
>Thank You
>
d not refreshed again until the RHV manager engine is
>restarted
>Please open a bug on that, we should be able to notice that the problem
>was
>fixed
>
>Thanks
>Eli
>
>On Thu, Jun 11, 2020 at 6:02 AM Strahil Nikolov via Users
>
>wrote:
>
>> Hello All,
>>
Check on the hosts tab which is your current SPM (last column in the Admin UI).
Then open the /var/log/vdsm/vdsm.log and repeat the operation.
Then provide the log from that host and the engine's log (on the HostedEngine
VM or on your standalone engine).
Best Regards,
Strahil Nikolov
On 18 June
I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1' mounted at
all.
What is the status of all storage domains?
Best Regards,
Strahil Nikolov
On 18 June 2020 at 21:43:44 GMT+03:00, C Williams wrote:
> Resending to deal with possible email issues
>
>-- Forwarded
Log in to the oVirt cluster and provide the output of:
gluster pool list
gluster volume list
for i in $(gluster volume list); do echo $i;echo; gluster volume info $i;
echo;echo;gluster volume status $i;echo;echo;echo;done
ls -l /rhev/data-center/mnt/glusterSD/
Best Regards,
Strahil Nikolov
Are you using a proxy?
Check that all hosts can discover and log in with the same parameters you set
in oVirt.
Best Regards,
Strahil Nikolov
On 17 June 2020 at 11:32:49 GMT+03:00, Ricardo Alonso wrote:
>Trying to connect to a an iSCSI target (no chap/secrets) is failing
>with the message:
>
Hello Glenn,
sadly I can't answer your questions, but I think you will find this one
interesting:
http://chrisj.cloud/?q=node/8
Best Regards,
Strahil Nikolov
On 17 June 2020 at 3:00:34 GMT+03:00, Glenn Marcy wrote:
>I am hoping to try out adding RDO to oVirt after things with CentOS 8.2
What do you want to change?
On 17 June 2020 at 0:36:49 GMT+03:00, Philip Brown wrote:
>oVirt 4.3: Okay, I found documentation that I cant have more than one
>"ISO" type storage domain.
>I can kinda understand that.
>
>But, I cant even edit or delete the existing one?
>Even when logged in to
Hey Nir,
in oVirt 4.3.something the default behaviour for Gluster changed from thin
to fully allocated.
My guess is that the shard xlator cannot catch up with the I/O.
Do you think that I should file an RFE to change the shard size?
As far as I know Red Hat supports only a 512MB shard size,
Hey Joop,
are you using fully allocated qcow2 images?
Best Regards,
Strahil Nikolov
On 16 June 2020 at 20:23:17 GMT+03:00, Joop wrote:
>On 3-6-2020 14:58, Joop wrote:
>> Hi All,
>>
>> Just had a rather new experience in that starting a VM worked but the
>> kernel entered grub2 rescue
It seems that the current events are OK and have today's date.
The issue is that the Dashboard is showing events with filter '> Today' which
also catches those events logged in July 2020.
I guess if I fix those (or just wait till July), the Dashboard won't show them
any more.
Best Regards,
Hello All,
I have a problem which started after the latest patching from 4.3.9 to 4.3.10.
Symptoms so far:
1. Engine reports that the hypervisors are drifting too much
2. ETL service stopped working
3. Web UI constantly notifies me that the nodes are active again (fills the
screen to the bottom)
So
+3, Yedidyah Bar David wrote:
On Thu, Jun 11, 2020 at 6:55 AM Strahil Nikolov via Users
wrote:
>
> I'm not sure if this one is related to "time shift" in the DB (as I found
> that dwh_history_timekeeping had some entries 1 month ahead/ETL service
> issues als
You can check in
https://lists.ovirt.org/archives/search?q=spice+youtube=1=date-desc
for 'spice options hooks'. Maybe what is discussed there could help.
Best Regards,
Strahil Nikolov
On 11 June 2020 at 12:35:30 GMT+03:00, ozme...@hotmail.com wrote:
>Hi,
>While using "skype for bussiness" on
Your gluster mount option is not correct.
You need 'backup-volfile-servers=storagehost2:storagehost3' (without the volume
name, as they all have that volume).
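In a full mount command the option would look roughly like this (hostnames,
volume name and mount point are placeholders):
mount -t glusterfs -o backup-volfile-servers=storagehost2:storagehost3 \
  storagehost1:/myvolume /mnt/myvolume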
Best Regards,
Strahil Nikolov
On 13 June 2020 at 10:47:28 GMT+03:00, Oliver Leinfelder wrote:
>Hi,
>
>I have the following two
I just found entries in the db pointing to 11 July (1 month ahead) and I
rebooted the engine. It seems that time-drift is somehow related to all the
issues I have observed after patching from 4.3.9 to 4.3.10:
1. ovirt1 started to slow down - solved by stopping chrony, wiping
Hello All,
I have a strange error that should be fixed but the event log is still filling
with the following after the latest patching (4.3.10):
Host ovirt2.localdomain has time-drift of 2909848 seconds while maximum
configured value is 300 seconds.
Host ovirt3.localdomain has time-drift of
I use Gluster snapshots before patching the engine.
Usually the flow is:
1. Global Maintenance
2. Power off the HostedEngine VM
3. Gluster snapshot
4. Power on HE
5. Upgrade of the setup packages
6. Run the hosted-engine upgrade script
7. Patch the OS of HE
8. Reboot
9. Remove Global
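The host-side commands for the maintenance and snapshot steps would be roughly
(the volume name is a placeholder):
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
gluster snapshot create engine-pre-upgrade engine_volume
hosted-engine --vm-start
# ... upgrade/patch the engine, then leave maintenance:
hosted-engine --set-maintenance --mode=none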
Hi Vinicius,
If you don't have too many VMs and you have local storage (like a raid
controller) or NFS/iSCSI, you can also move the VMs there temporarily (live
storage migration) without any interruption.
Best Regards,
Strahil Nikolov
On 10 June 2020 at 12:14:38 GMT+03:00, Jayme wrote:
Can you put something like this before the 'Parse server cpu list' task:
- name: Debug why parsing fails
  debug:
    msg:
      - "Loop is done over {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split('; ')|list|difference(['']) }}"
      - "Actual value of server_cpu_dict
Are you using ECC RAM?
Best Regards,
Strahil Nikolov
On 8 June 2020 at 15:06:22 GMT+03:00, Joop wrote:
>On 3-6-2020 14:58, Joop wrote:
>> Hi All,
>>
>> Just had a rather new experience in that starting a VM worked but the
>> kernel entered grub2 rescue console due to the fact that something
On top of that, Ansible is also using ssh, so you need to 'override' the
settings for the engine.
Best Regards,
Strahil Nikolov
On 7 June 2020 at 13:01:08 GMT+03:00, Yedidyah Bar David wrote:
>On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote:
>>
>> After a week of iterations, I finally
On 7 June 2020 at 1:58:27 GMT+03:00, "Vinícius Ferrão via Users" wrote:
>Hello,
>
>This is a pretty vague and difficult question to answer. But what
>happens if the shared storage holding the VMs is down or unavailable
>for a period of time?
Once a pending I/O is blocked, libvirt will pause
There is also the API, and I think there was a python script for ISO upload.
You can also import from OVA.
As you have multiple VMDKs, why don't you just upload in parallel (20-30 disks
in a batch)?
Best Regards,
Strahil Nikolov
On 6 June 2020 at 13:39:44 GMT+03:00, Magnus Isaksson wrote:
Hi Magnus,
there are several ways to upload a disk.
For details, check
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-Virtual_Disk_Tasks#Uploading_a_Disk_Image_to_a_Storage_Domain
Best Regards,
Strahil Nikolov
On 6 June 2020
Hello Tal, Michal,
What do you think about the plan ?
Anything I have to be careful about?
Best Regards,
Strahil Nikolov
On Friday, 22 May 2020 at 19:01:42 GMT+3, Sandro Bonazzola wrote:
On Thu 21 May 2020 at 17:08, Strahil Nikolov via Users wrote:
> He
Have you tried restarting the engine?
Best Regards,
Strahil Nikolov
On Friday, 5 June 2020 at 11:56:37 GMT+3, Krist van Besien wrote:
Hello all,
On my ovirt HC cluster I constantly get the following kinds of errors:
From /var/log/ovirt-engine/engine.log
2020-06-05
Hi Michael,
Thanks for raising that topic.
I personally believe that CentOS Stream will be something between Fedora
and RHEL and thus it won't be as stable as I wish.
Yet on the other hand, if this speeds up bug fixing, I am OK with that.
P.S.: I'm still on 4.3, but I was planning to switch
You should disable sharding prior to using the volume as a storage domain.
Disabling sharding while you have data on the volume will cause havoc.
Can you try disabling it before creating the storage domain?
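Disabling it is a single volume option (the volume name is a placeholder):
gluster volume set myvolume features.shard off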
Best Regards,
Strahil Nikolov
On 3 June 2020 at 20:35:13 GMT+03:00, Gianluca
Is it UEFI based or legacy BIOS?
Best Regards,
Strahil Nikolov
On 3 June 2020 at 19:00:48 GMT+03:00, Marco Fais wrote:
>Hi Joop
>
>I am having the same problem -- thought initially was due to the VM
>import
>but it is now happening even on newly created VMs.
>Rebooting (e.g.
Please open a bug for that issue with the relevant logs.
4.4 HCI should depend on that one.
Thanks in advance.
Best Regards,
Strahil Nikolov
On 3 June 2020 at 18:31:18 GMT+03:00, Jillian Morgan wrote:
>Thank you, Strahil.
>
>That was exactly the problem. I had already figured out
When you click on the console button in the UI, a file with the necessary info
(including auth) is provided. That file can be used with many remote
connection apps.
Keep in mind that the auth info is valid for a very short time (a minute, maybe
2) so you need to open it without waiting too
Maybe you are missing an rpm.
Do you have the vdsm-gluster package installed?
Best Regards,
Strahil Nikolov
On 2 June 2020 at 19:18:43 GMT+03:00, jillian.mor...@primordial.ca wrote:
>I've successfully migrated to a new 4.4 engine, now managing the older
>4.3 (CentOS 7) nodes. So far so good there.
>
I think that feature didn't work for me on 4.2.7, so I doubt it ever
worked.
It's worth opening a bugzilla.
Best Regards,
Strahil Nikolov
On 2 June 2020 at 2:29:50 GMT+03:00, Jaret Garcia via Users wrote:
>
>Hi guys I'm trying to deploy a new ovirt environmet based in version
>4.4
On 31 May 2020 at 15:52:14 GMT+03:00, aigin...@gmail.com wrote:
>Hi,
>
>Our company uses Ovirt to host some of its virtual machines. The
>version used is 4.2.6.4-1.el7. There are about 36 virtual hosts in it.
>The specifications used for the host machine is 30G RAM and 6 CPUs.
>Some of the VMs
And what about https://bugzilla.redhat.com/show_bug.cgi?id=1787906
Do we have any validation of the checksum via the python script?
Best Regards,
Strahil Nikolov
On 31 May 2020 at 0:18:43 GMT+03:00, Carlos C wrote:
>Hi,
>
>You can try upload using the python as described here
Last time I used oVirt to deploy my gluster, cockpit was using thick LVs
instead of thin LVM.
I think that thin LVM is way more suitable for the task. Then you can set the
size to anything needed.
Best Regards,
Strahil Nikolov
On 29 May 2020 at 18:27:59 GMT+03:00, Jayme wrote:
>Also, I
I can give you another tip - use 'sealert'.
yum install setroubleshoot-server
sealert -a /var/log/audit/audit.log
It will provide you with guidance.
Actually SELinux has 'allow' rules based on the process type (the last part
after ':') combined with the file type.
ps aux -Z
ls -lZ file
Sometimes
You mentioned that your certificates were different. Did you try converting
them to the type used in the example?
Best Regards,
Strahil Nikolov
On 29 May 2020 at 1:29:51 GMT+03:00, Stack Korora wrote:
>On 2020-05-28 16:07, Strahil Nikolov wrote:
>> Can you check
Based on my vague memories from Dec 2018, I think I had a similar situation
where I had to delete that external Engine.
Of course that was on 4.2.7 and the story here can be different. If you use
gluster, you can power off the engine (Global Maintenance) and then create a
gluster snapshot
Hi Marc,
4.3.X will not have a long life and the upgrade will be a p**n in the ***.
I know that 4.4 will be hard to install until all the bugs are polished, but I
highly advise you to try to deploy 4.4 first.
Best Regards,
Strahil Nikolov
On 28 May 2020 at 19:46:57 GMT+03:00, msantoro---