You need to provide more details, like storage type, any errors indicated,
whether something changed recently, etc.
Best Regards,
Strahil Nikolov
On 22 July 2020 13:15:06 GMT+03:00, Tarun Kushwaha wrote:
>My ovirt hosted engine storage got in locked state.
>Now I am unable to perfo
Do you have NICs that support iSCSI? I guess you can use hardware offloading.
What is your MTU size?
Latency is usually the killer of any performance; what is your round-trip time?
Best Regards,
Strahil Nikolov
On 21 July 2020 2:37:10 GMT+03:00, Philip Brown wrote:
>AH! my apolog
Just copy/paste it in a browser.
On 20 July 2020 17:00:01 GMT+03:00, lu.alfo...@almaviva.it wrote:
>Hello,
>
>the link is not available
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy
y in oVirt, but not part of
Gluster) - you can allow the engine to power off and power on Hosts on demand,
so you can conserve power and cooling while keeping the count of oVirt nodes in
the healthy zone.
Best Regards,
Strahil Nikolov
There is a bug already opened for that behaviour:
https://bugzilla.redhat.com/show_bug.cgi?id=1858234
Best Regards,
Strahil Nikolov
On 19 July 2020 13:26:01 GMT+03:00, erin.s...@bookit.com wrote:
>Hi Guys we attempted to deploy a new ovirt cluster two weeks ago. 4.4.1
>and 4.4.0 O
systems as ovirt
nodes (CPU & RAM) to host VMs.
In case of a 4-node setup, 3 hosts have the gluster data and the 4th is not
part of the gluster, just hosting VMs.
Best Regards,
Strahil Nikolov
On 19 July 2020 15:25:10 GMT+03:00, David White via Users wrote:
>Thanks again for
VdsRecoveryTimeoutInMintues
VdsRefreshRate
vdsTimeout
Try them first in a test system before deploying on production.
Best Regards,
Strahil Nikolov
On Saturday, 18 July 2020, 10:40:14 GMT+3, lu.alfo...@almaviva.it wrote:
Hello,
yes, root login is opened.
No, I didn't make changes
Can you provide the target's facts in the bug report?
Best Regards,
Strahil Nikolov
On 17 July 2020 14:48:39 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <
>dominique.desche...@gcgenicom.com> wrote:
>
>> Hi,
>>
>> I
It's definitely not a name-resolution issue.
Have you made changes to sshd_config on sia-svr-ct02?
Is root login enabled?
Best Regards,
Strahil Nikolov
On 17 July 2020 13:58:09 GMT+03:00, lu.alfo...@almaviva.it wrote:
>This is the output from the engine:
>
>[root@dacs-ovirt ~]# host sia
Hm...
Setting that variable to python3 should work, but based on the reports on the
list, it doesn't.
Best Regards,
Strahil Nikolov
On 17 July 2020 12:35:52 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 11:25 AM Gianluca Cecchi
>
>wrote:
>
>> On
What is the output of:
host sia-svr-ct02
nslookup sia-svr-ct02
Best Regards,
Strahil Nikolov
On 17 July 2020 10:46:08 GMT+03:00, lu.alfo...@almaviva.it wrote:
>2020-07-15 11:41:58,968+02 ERROR
>[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
>(EE-ManagedThre
Which version of CentOS 8 are you using - Stream or regular, and which release?
Best Regards,
Strahil Nikolov
On 16 July 2020 21:07:57 GMT+03:00, "Dominique Deschênes" wrote:
>
>
>HI,
>Thank you for your answers
>
>I tried to replace the "package" with "
Can you share your /etc/hosts?
As far as I remember, there was an entry like:
127.0.1.2 hostname
which you have to comment out.
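As a sketch (exercised on a throwaway copy, with a hypothetical hostname "myhost"), commenting that entry out could look like:

```shell
# Build a demo hosts file instead of touching the real /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.2 myhost\n' > /tmp/hosts.demo
# Prepend '#' to the problematic 127.0.1.2 line
sed -i 's/^127\.0\.1\.2/#&/' /tmp/hosts.demo
grep '^#' /tmp/hosts.demo
```

On a real system the same sed against /etc/hosts (after taking a backup) achieves the fix.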
Best Regards,
Strahil Nikolov
On 16 July 2020 16:53:36 GMT+03:00, Florian Schmid via Users wrote:
>Hi,
>
>I have a problem with Ubuntu 20.04 VM
Have you tried replacing 'package' with 'dnf' in
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (somewhere
around line 33)?
Best Regards,
Strahil Nikolov
On 16 July 2020 16:30:04 GMT+03:00, dominique.desche...@gcgenicom.com wrote:
>I also have this mess
What do you see in the engine's logs?
Best Regards,
Strahil Nikolov
On 16 July 2020 13:24:03 GMT+03:00, lu.alfo...@almaviva.it wrote:
>i attach the hosts info :
>
>Software
>OS Version:
>RHEL - 7 - 8.2003.0.el7.centos
>OS Description:
>CentOS Linux 7 (Core)
>Kernel
-> unmounted on the host.
Best Regards,
Strahil Nikolov
On 16 July 2020 1:21:33 GMT+03:00, Philip Brown wrote:
>Awesome thats good news.
>
>So... does that happen automatically?
>
>ie: install ovirt "node" image, then tell ovirt hosted engine "go add
>that node to
n set hosts, and multiple
>(arbitrariliy many) arbiter nodes?
>
You need 2 copies of the data, but 'replica 3' is optimal.
>- Original Message -
>From: "Strahil Nikolov"
>To: "users" , "Philip Brown"
>Sent: Wednesday, July 15, 2020 1:59:40
I guess your only option is to edit
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and
replace 'package' with 'dnf' (keep the indentation 2 spaces deeper than
'- name', i.e. exactly where "package" starts).
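A minimal sketch of that edit with sed, run on a demo copy so you can see the two-space indentation is preserved (the task snippet below is illustrative, not the exact upstream playbook):

```shell
# Demo stand-in for /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml
cat > /tmp/main.yml.demo <<'EOF'
- name: Install packages
  package:
    name: vdo
    state: present
EOF
# Swap the module name in place; the backreference keeps the leading whitespace untouched
sed -i 's/^\(\s*\)package:/\1dnf:/' /tmp/main.yml.demo
grep -n 'dnf:' /tmp/main.yml.demo
```

Running the same sed against the real file performs the substitution without disturbing YAML indentation (no tabs introduced).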
Best Regards,
Strahil Nikolov
On 15 July
shard, but you are fully
"supported" from gluster perspective.
Also, all hosts can have an external storage like your NAS.
Best Regards,
Strahil Nikolov
On 15 July 2020 21:11:34 GMT+03:00, Philip Brown wrote:
>arg. when I said "add 2 more nodes that arent part of the
Also, check the LV size on the system, as it seems that, based on your previous
outputs, the PV names do not match.
You might now have a very large HostedEngine LV, which would be a waste of space.
Best Regards,
Strahil Nikolov
On 15 July 2020 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Th
be indented to the
right with 2 spaces (no tabs allowed).
Best Regards,
Strahil Nikolov
On 15 July 2020 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank you Strahil. I think I edited the oVirt Node Cockpit
>Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed
>in
Hm... Clearly it can't get the necessary info.
Have you tried doing a full cleanup via 'ovirt-hosted-engine-cleanup' (the name
is based on my vague memories from 2017) and then wiping all data on the
storage domain?
Best Regards,
Strahil Nikolov
On 14 July 2020 23:13:33 GMT+03:00, Andy
.00 MiB'. Select 99G or
'100%PVS'
Best Regards,
Strahil Nikolov
Hi,
some of the components in oVirt 4.3 rely on SELinux being enabled.
Most probably it is the same in 4.4, so please try with SELinux in enforcing
mode.
Best Regards,
Strahil Nikolov
On 14 July 2020 3:43:35 GMT+03:00, Andy via Users wrote:
>I just tried another fresh install with oV
Yeah,
but the idea is for the RH documentation to catch up with oVirt's
documentation, not the opposite.
Best Regards,
Strahil Nikolov
On 14 July 2020 0:26:11 GMT+03:00, Jayme wrote:
>Personally I find the rhev documentation much more complete:
>https://access.redhat.com/documentation
What are the contents of your dnf.conf?
Best Regards,
Strahil Nikolov
On 13 July 2020 17:26:42 GMT+03:00, Markus Schaufler wrote:
>Hi all,
>in our environment, CentOS7 and RHEL7 for the most part (Ovirt 4.3 and
>RHV 4.3), we are able to set a proxy directly in /etc
I'm left with the impression that we are talking about SSO from oVirt.
Still, the author has to clarify.
Best Regards,
Strahil Nikolov
On 12 July 2020 17:39:31 GMT+03:00, Wesley Stewart wrote:
>Are you asking for troubleshooting on getting windows RDP working in a
>windows 10
was the same on
Windows and Linux :D )
Best Regards,
Strahil Nikolov
On 9 July 2020 18:32:39 GMT+03:00, Michael Watters wrote:
>After installing updates on our ovirt-engine running CentOS 7.8 the
>administration portal will no longer load. The engine.log shows an
>error as follows.
&
storage (which is not your case):
You can check & in the xml. Once you
mount the storage and try to power up the VM, libvirt will tell you (in the
log) where exactly it is expecting a symlink to exist. It points to the path
of the storage for the VMs.
Best Regards,
Strahil Nikolov
On 7
can make a restore point via:
power off the HE, snapshot the volume, power on the HE, do any kind of
change on the HE (like an upgrade).
Best Regards,
Strahil Nikolov
On 3 July 2020 11:30:58 GMT+03:00, Andrei Verovski wrote:
>Hi !
>
>I have 2-node oVirt 4.3 installation, with engine running as KVM guest
>on SuSE file server (not hosted engine).
>Nodes are manually installed on CentOS 7.x (further referred as old
>node #1 and #2).
>
>I’m going to add 1
Yes, ovirt-ha-broker and ovirt-ha-agent take care of keeping the HostedEngine
up and running and, in case something goes bad, of migrating it away.
Best Regards,
Strahil Nikolov
On 3 July 2020 7:53:14 GMT+03:00, Anton Louw via Users wrote:
>Hi Everybody,
>
>Thanks for all the respon
Usually the vdsm-network-init and vdsm-network services take care of the
network configuration.
The easiest way to resolve is to set the host in maintenance and then use
'reinstall' from the UI.
Best Regards,
Strahil Nikolov
On 2 July 2020 14:50:14 GMT+03:00, Erez Zarum wrote:
>Fr
I would recommend trying to logrotate it.
I had a similar issue with a corrupted logrotate state file, which led to the
vdsm log growing to 20GB.
You can also use 'truncate -s 0 your.log' to wipe the log without removing it.
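A quick sketch of the truncate approach on a throwaway file (the log name is just an example):

```shell
# Simulate a grown log, then wipe it in place - the file and its inode survive,
# so any process holding the log open keeps a valid file handle
printf 'line1\nline2\n' > /tmp/example-vdsm.log
truncate -s 0 /tmp/example-vdsm.log
wc -c < /tmp/example-vdsm.log
```

This is why truncate is preferable to deleting the file: a running daemon would otherwise keep writing to the deleted inode and the space would not be freed.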
Best Regards,
Strahil Nikolov
On 2 July 2020 8:00:27 GMT+03:00, Anton Louw via Users
Have you checked the qemu log on the host that was running the VMs?
What was done recently?
Any reason SELinux is disabled?
Best Regards,
Strahil Nikolov
On 30 June 2020 18:08:09 GMT+03:00, Antoine Nguyen wrote:
>Hello,
>
>Thanks for your interest and time.
>Here is th
On 29 June 2020 4:14:33 GMT+03:00, jury cat wrote:
>If i destroy the brick, i might upgrade to ovirt 4.4 and Centos 8.2.
>Do you think upgrade to ovirt 4.4 with glusterfs improves performance
>or i am better with NFS ?
Actually only you can find out as we cannot know the workload of your
-> stays enabled!
Best Regards,
Strahil Nikolov
On 29 June 2020 1:33:20 GMT+03:00, Jayme wrote:
>I’ve tried various methods to improve gluster performance on similar
>hardware and never had much luck. Small file workloads were
>particularly
>troublesome. I ended up
"optimize for virt" are located at
/var/lib/glusterd/groups/virt on each gluster node.
Best Regards,
Strahil Nikolov
On Sunday, 28 June 2020, 22:13:09 GMT+3, jury cat wrote:
Hello all,
I am using Ovirt 4.3.10 on Centos 7.8 with glusterfs 6.9 .
My Gluster setup is of 3 h
Can you set one of the hypervisors into maintenance and use the "reinstall"
option from the UI?
Best Regards,
Strahil Nikolov
On 25 June 2020 13:24:26 GMT+03:00, Erez Zarum wrote:
>I have a Self-hosted Engine running on iSCSI as well as couple of
>Storage domains using iS
As you will migrate from block-based storage to file-based storage, I think
that you should use the backup & restore procedure.
Best Regards,
Strahil Nikolov
On 25 June 2020 7:31:55 GMT+03:00, Erez Zarum wrote:
>I was looking for a “complete” best practice to migrate a self-hosted
Most probably the host's ICMP echo requests to the gateway get lost. This
incurs enough penalty that your engine is moved away from the host.
Which 'penalty' did you disable to stabilize your environment?
Best Regards,
Strahil Nikolov
On 27 June 2020 18:19:58 GMT+03:00, tho
What repos do you have enabled?
It seems you have a repo conflict.
Best Regards,
Strahil Nikolov
On 26 June 2020 18:30:31 GMT+03:00, eev...@digitaldatatechs.com wrote:
>I do not have a self hosted engine and did yum update whech update
>these files:
>Updated:
> microcode_
What is the status of the host?
Usually a VM is stale because the engine cannot reach the VDSM on the
hypervisor.
Best Regards,
Strahil Nikolov
On 26 June 2020 22:23:15 GMT+03:00, pas...@butterflyit.com wrote:
>Currently from the ovirt web interface it is not possible to susp
Did you reinstall the node via the web UI?
Best Regards,
Strahil Nikolov
On 25 June 2020 3:23:15 GMT+03:00, "Vinícius Ferrão via Users" wrote:
>Hello,
>
>For reasons unknown one of my hosts is trying to mount an old storage
>point that’s been removed some time
.
Best Regards,
Strahil Nikolov
On 23 June 2020 23:42:13 GMT+03:00, C Williams wrote:
>Strahil,
>
>Thanks for getting back with me !
>
>Sounds like it is best to evacuate VM disks to another storage domain
>--
>if possible from a Gluster storage domain -- prior to an
?
2. You now have your old gluster volume attached to oVirt and the new volume
unused, right?
3. Did you copy the contents of the old volume to the new one?
Best Regards,
Strahil Nikolov
On 23 June 2020 4:34:19 GMT+03:00, C Williams wrote:
>Strahil,
>
>Thank You
a
pain in the @$$.
I think that the optimal setup is to have several 10Gbit NICs (at least 1 for
gluster and 1 for oVirt live migration).
Also, NVMEs can be used as lvm cache for spinning disks.
Best Regards,
Strahil Nikolov
On 22 June 2020 18:50:01 GMT+03:00, David White wrote:
>> For mig
You should ensure that in the storage domain tab the old storage is not
visible.
I still wonder why you didn't try to downgrade first.
Best Regards,
Strahil Nikolov
On 22 June 2020 13:58:33 GMT+03:00, C Williams wrote:
>Strahil,
>
>The GLCL3 storage domain was detac
It's the client's browser settings, but I think it's easier to either change
the certificate to something that will be trusted, or to just import it.
Strahil Nikolov
On 22 June 2020 11:29:20 GMT+03:00, Anton Louw via Users wrote:
>Hi All,
>
>So I manag
On 22 June 2020 11:06:16 GMT+03:00, David White via Users wrote:
>Thank you and Strahil for your responses.
>They were both very helpful.
>
>> I think a hosted engine installation VM wants 16GB RAM configured
>though I've built older versions with 8GB RAM.
>> For m
(6.6+) was causing complete lockdown. Also v7.0 was working,
but it's supported in oVirt 4.4.
Best Regards,
Strahil Nikolov
On 22 June 2020 7:21:15 GMT+03:00, C Williams wrote:
>Another question
>
>What version could I downgrade to safely ? I am at 6.9 .
>
>Thank Yo
packages!!!
Best Regards,
Strahil Nikolov
On 22 June 2020 0:43:46 GMT+03:00, C Williams wrote:
>Strahil,
>
>It sounds like you used a "System Managed Volume" for the new storage
>domain,is that correct?
>
>Thank You For Your Help !
>
>On Sun, Jun 21,
On 21 June 2020 23:26:32 GMT+03:00, David White via Users wrote:
>I'm reading through all of the documentation at
>https://ovirt.org/documentation/, and am a bit overwhelmed with all of
>the different options for installing oVirt.
>
>My particular use case is that I'm looking for a way to
In my situation, I had only the oVirt nodes.
On 21 June 2020 22:43:04 GMT+03:00, C Williams wrote:
>Strahil,
>
>So should I make the target volume on 3 bricks which do not have ovirt
>--
>just gluster ? In other words (3) Centos 7 hosts ?
>
>Thank You For Your Help !
&g
') -
pointing to the new volume name.
If you observe issues, I would recommend downgrading the gluster packages
one node at a time. Then you might be able to restore your oVirt operations.
Best Regards,
Strahil Nikolov
On 21 June 2020 18:01:31 GMT+03:00, C Williams wrote
?
Best Regards,
Strahil Nikolov
the old data), but I could afford the downtime.
Also, I can say that v7.0 (but not 7.1 or anything later) also worked
without the ACL issue, but it causes some trouble in oVirt - so avoid that
unless you have no other options.
Best Regards,
Strahil Nikolov
On 21 June 2020 4:39:46
, the workaround was to downgrade the gluster packages on all nodes
(and reboot each node one by one) if the major version is the same; but if you
upgraded to v7.X, then you can try v7.0.
Best Regards,
Strahil Nikolov
On Saturday, 20 June 2020, 18:48:42 GMT+3, C Williams wrote:
Hello
Hey C Williams,
sorry for the delay, but I couldn't get some time to check your logs. Will
try a little bit later.
Best Regards,
Strahil Nikolov
On 20 June 2020 2:37:22 GMT+03:00, C Williams wrote:
>Hello,
>
>Was wanting to follow up on this issue. Users are impacted.
>
Thanks Eli for your reply.
Bug is opened: https://bugzilla.redhat.com/show_bug.cgi?id=1848353
Best Regards,
Strahil Nikolov
On 16 June 2020 0:20:45 GMT+03:00, Eli Mesika wrote:
>Hi
>
>Looking at the code I realized that the date/time retrieved from the
>host
>is cached an
Check in the Hosts tab which host is your current SPM (last column in the
Admin UI).
Then open /var/log/vdsm/vdsm.log and repeat the operation.
Then provide the log from that host and the engine's log (on the HostedEngine
VM or on your standalone engine).
Best Regards,
Strahil Nikolov
On 18 June
I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1' mounted at
all.
What is the status of all storage domains?
Best Regards,
Strahil Nikolov
On 18 June 2020 21:43:44 GMT+03:00, C Williams wrote:
> Resending to deal with possible email issues
>
>--
Log in to the oVirt cluster and provide the output of:
gluster pool list
gluster volume list
for i in $(gluster volume list); do echo $i; echo; gluster volume info $i;
echo; echo; gluster volume status $i; echo; echo; echo; done
ls -l /rhev/data-center/mnt/glusterSD/
Best Regards,
Strahil Nikolov
Are you using a proxy?
Check that all hosts can discover and log in with the same parameters you set
in oVirt.
Best Regards,
Strahil Nikolov
On 17 June 2020 11:32:49 GMT+03:00, Ricardo Alonso wrote:
>Trying to connect to a an iSCSI target (no chap/secrets) is failing
>with the m
Hello Glenn,
sadly I can't answer your questions, but I think you will find this one
interesting:
http://chrisj.cloud/?q=node/8
Best Regards,
Strahil Nikolov
On 17 June 2020 3:00:34 GMT+03:00, Glenn Marcy wrote:
>I am hoping to try out adding RDO to oVirt after things with CentOS
What do you want to change?
On 17 June 2020 0:36:49 GMT+03:00, Philip Brown wrote:
>oVirt 4.3: Okay, I found documentation that I cant have more than one
>"ISO" type storage domain.
>I can kinda understand that.
>
>But, I cant even edit or delete the existing one?
>Even when logged in to
, while gluster's default
is only 64MB.
Best Regards,
Strahil Nikolov
On 16 June 2020 23:22:53 GMT+03:00, Nir Soffer wrote:
>On Tue, Jun 16, 2020 at 11:01 PM Joop wrote:
>>
>> On 16-6-2020 19:44, Strahil Nikolov wrote:
>> > Hey Joop,
>> >
>> >
Hey Joop,
are you using fully allocated qcow2 images?
Best Regards,
Strahil Nikolov
On 16 June 2020 20:23:17 GMT+03:00, Joop wrote:
>On 3-6-2020 14:58, Joop wrote:
>> Hi All,
>>
>> Just had a rather new experience in that starting a VM worked but the
>>
rds,
Strahil Nikolov
On Sunday, 14 June 2020, 11:41:36 GMT+3, Strahil Nikolov wrote:
Hello All,
I have a problem which started after the latest patching from 4.3.9 to 4.3.10.
Symptoms so far:
1. Engine reports that hypervisors are drifting too much
2. ETL service stopped working
3.
sec
server 162.159.200.123, stratum 3, offset 0.001765, delay 0.02742
server 162.159.200.1, stratum 3, offset 0.002551, delay 0.02924
14 Jun 11:40:37 ntpdate[29618]: adjust time server 162.159.200.123 offset
0.001765 sec
Best Regards,
Strahil Nikolov
___
Use
Hey Didi,
it seems that there is still a time shift in the DB - lots of stuff was
reported ahead of time.
I had to update the jobs table with the correct month and now at least I have
no more spam in the web UI.
Best Regards,
Strahil Nikolov
On Thursday, 11 June 2020, 9:39:45 GMT
You can check in
https://lists.ovirt.org/archives/search?q=spice+youtube=1=date-desc
for 'spice options hooks'. Maybe what was discussed there could help.
Best Regards,
Strahil Nikolov
On 11 June 2020 12:35:30 GMT+03:00, ozme...@hotmail.com wrote:
>Hi,
>While using "skype fo
Your gluster mount option is not correct.
You need 'backup-volfile-servers=storagehost2:storagehost3' (without the
volume name, as they all have that volume).
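For reference, this is roughly how the corrected option looks in an fstab-style mount line (the mount point and the volume name "datavol" are illustrative, matching the storagehost names above):

```
storagehost1:/datavol  /mnt/datavol  glusterfs  defaults,backup-volfile-servers=storagehost2:storagehost3  0 0
```

Note that only the first server carries the volume path; the backup servers are listed as bare hostnames separated by ':'.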
Best Regards,
Strahil Nikolov
On 13 June 2020 10:47:28 GMT+03:00, Oliver Leinfelder wrote:
>Hi,
>
>I have the foll
activated - most probably that
one is also 1 month ahead in the DB :)
Not fixed yet.
Best Regards,
Strahil Nikolov
At least the events now show real time.
Best Regards
On 11 June 2020 6:00:52 GMT+03:00, Strahil Nikolov wrote:
>Hello All,
>
>I have a strange error that should
5.85.215.8, stratum 1, offset 0.000291, delay 0.02888
11 Jun 05:49:15 ntpdate[13911]: adjust time server 195.85.215.8 offset 0.000291
sec
Any ideas ?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email t
Maintenance
If you need to revert a snapshot, you need to stop the gluster volume, so you
need to follow the rule and keep the engine on a separate gluster volume.
Best Regards,
Strahil Nikolov
On 10 June 2020 13:21:08 GMT+03:00, Yedidyah Bar David wrote:
>On Wed, Jun 10, 2020 at 1:05 PM wr
Hi Vinicius,
If you don't have too many VMs and you have local storage (like a RAID
controller) or NFS/iSCSI, you can also move the VMs there temporarily (live
storage migration) without any interruption.
Best Regards,
Strahil Nikolov
On 10 June 2020 12:14:38 GMT+03:00, Jayme wrote
server_cpu_dict before the set_fact is {{
server_cpu_dict }}"
Note: e-mail clients can distort code. Don't copy/paste; type the example
from above.
Best Regards,
Strahil Nikolov
On 9 June 2020 19:34:07 GMT+03:00, "Angel R. Gonzalez" wrote:
>Hi all!
>
>I'm
Are you using ECC RAM?
Best Regards,
Strahil Nikolov
On 8 June 2020 15:06:22 GMT+03:00, Joop wrote:
>On 3-6-2020 14:58, Joop wrote:
>> Hi All,
>>
>> Just had a rather new experience in that starting a VM worked but the
>> kernel entered grub2 rescue console due
On top of that, Ansible also uses ssh, so you need to 'override' the
settings for the engine.
Best Regards,
Strahil Nikolov
On 7 June 2020 13:01:08 GMT+03:00, Yedidyah Bar David wrote:
>On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote:
>>
>> After a week of iterat
On 7 June 2020 1:58:27 GMT+03:00, "Vinícius Ferrão via Users" wrote:
>Hello,
>
>This is a pretty vague and difficult question to answer. But what
>happens if the shared storage holding the VMs is down or unavailable
>for a period of time?
Once a pending I/O is blocked, libvirt will pause
There is also the API, and I think there was a python script for ISO upload.
You can also import from OVA.
As you have multiple VMDKs, why don't you just upload in parallel (20-30
disks in a batch)?
Best Regards,
Strahil Nikolov
On 6 June 2020 13:39:44 GMT+03:00, Magnus Isaksson wrote
Hi Magnus,
there are several ways to upload a disk.
For details, check
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-Virtual_Disk_Tasks#Uploading_a_Disk_Image_to_a_Storage_Domain
Best Regards,
Strahil Nikolov
On 6 June 2020 12:15
Hello Tal, Michal,
What do you think about the plan?
Anything I have to be careful about?
Best Regards,
Strahil Nikolov
On Friday, 22 May 2020, 19:01:42 GMT+3, Sandro Bonazzola wrote:
On Thursday 21 May 2020 at 17:08, Strahil Nikolov via Users wrote:
> He
Have you tried restarting the engine?
Best Regards,
Strahil Nikolov
On Friday, 5 June 2020, 11:56:37 GMT+3, Krist van Besien wrote:
Hello all,
On my ovirt HC cluster I constantly get the following kinds of errors:
From /var/log/ovirt-engine/engine.log
2020-06-05 10:38
to regular CentOS instead
of Stream.
Best Regards,
Strahil Nikolov
On Friday, 5 June 2020, 11:37:16 GMT+3, Michal Skrivanek wrote:
Hi all,
we would like to ask about interest in community about oVirt moving to CentOS
Stream.
There were some requests before but it’s hard
You should disable sharding prior to using the volume as a storage domain.
Disabling sharding while you have data on the volume will cause havoc.
Can you try disabling it before creating the storage domain?
Best Regards,
Strahil Nikolov
On 3 June 2020 20:35:13 GMT+03:00, Gianluca
Is it UEFI-based or legacy BIOS?
Best Regards,
Strahil Nikolov
On 3 June 2020 19:00:48 GMT+03:00, Marco Fais wrote:
>Hi Joop
>
>I am having the same problem -- thought initially was due to the VM
>import
>but it is now happening even on newly created VMs.
>Rebooting
Please open a bug for that issue with the relevant logs.
4.4 HCI should depend on that one.
Thanks in advance.
Best Regards,
Strahil Nikolov
On 3 June 2020 18:31:18 GMT+03:00, Jillian Morgan wrote:
>Thank you, Strahil.
>
>That was exactly the problem. I had already fi
much.
Best Regards,
Strahil Nikolov
On 3 June 2020 11:28:50 GMT+03:00, fa...@kdsplumbing.com wrote:
>Hi guys,
>
>After a lot of attempts, i was finally able to open a VM (centos8-cld)
>using virt-viewer on my mac (catalina). but i am faced with a login
>screen. So,
Maybe you are missing an rpm.
Do you have the vdsm-gluster package installed?
Best Regards,
Strahil Nikolov
On 2 June 2020 19:18:43 GMT+03:00, jillian.mor...@primordial.ca wrote:
>I've successfully migrated to a new 4.4 engine, now managing the older
>4.3 (CentOS 7) nodes. So far s
I think that feature didn't work for me on 4.2.7, so I doubt it ever
worked.
It's worth opening a bugzilla.
Best Regards,
Strahil Nikolov
On 2 June 2020 2:29:50 GMT+03:00, Jaret Garcia via Users wrote:
>
>Hi guys I'm trying to deploy a new ovirt environmet based in versio
Best Regards,
Strahil Nikolov
And what about https://bugzilla.redhat.com/show_bug.cgi?id=1787906 ?
Do we have any validation of the checksum via the python script?
Best Regards,
Strahil Nikolov
On 31 May 2020 0:18:43 GMT+03:00, Carlos C wrote:
>Hi,
>
>You can try upload using the python as described he
Last time I used oVirt to deploy my gluster, cockpit was using thick LVs
instead of thin LVM.
I think that thin LVM is way more suitable for the task. Then you can set the
size to anything needed.
Best Regards,
Strahil Nikolov
On 29 May 2020 18:27:59 GMT+03:00, Jayme wrote:
>Also
).
Best Regards,
Strahil Nikolov
On 29 May 2020 17:26:37 GMT+03:00, Andrei Verovski wrote:
>Hi,
>
>OK, Michael, thanks a LOT, these commands fixed problem.
>
>cat /var/log/audit/audit.log | grep snmpd | grep sed | audit2allow -M
>my_module_for_snmpd
>semodule -i m
You mentioned that your certificates were different. Did you try converting
them to the type used in the example?
Best Regards,
Strahil Nikolov
On 29 May 2020 1:29:51 GMT+03:00, Stack Korora wrote:
>On 2020-05-28 16:07, Strahil Nikolov wrote:
>> Can you check
>https://w
before wiping the remnants.
Best Regards,
Strahil Nikolov
On 29 May 2020 0:52:46 GMT+03:00, Gianluca Cecchi wrote:
>On Thu, May 28, 2020 at 3:09 PM Gianluca Cecchi
>
>wrote:
>
>[snip]
>
>>
>>
>> for the cluster type in the mean time I was able to change
Hi Marc,
4.3.X will not have a long life and the upgrade will be a p**n in the ***.
I know that 4.4 will be hard to install until all the bugs are polished, but I
highly advise you to try to deploy 4.4 first.
Best Regards,
Strahil Nikolov
On 28 May 2020 19:46:57 GMT+03:00, msantoro