On 7 June 2020 at 1:58:27 GMT+03:00, "Vinícius Ferrão via Users" wrote:
>Hello,
>
>This is a pretty vague and difficult question to answer. But what
>happens if the shared storage holding the VMs is down or unavailable
>for a period of time?
Once a pending I/O is blocked, libvirt will pause t
On top of that Ansible is also using ssh, so you need to 'override' the
settings for the engine.
Best Regards,
Strahil Nikolov
On 7 June 2020 at 13:01:08 GMT+03:00, Yedidyah Bar David wrote:
>On Sat, Jun 6, 2020 at 8:42 PM Michael Thomas wrote:
>>
>> After a week
Are you using ECC RAM?
Best Regards,
Strahil Nikolov
On 8 June 2020 at 15:06:22 GMT+03:00, Joop wrote:
>On 3-6-2020 14:58, Joop wrote:
>> Hi All,
>>
>> Just had a rather new experience in that starting a VM worked but the
>> kernel entered grub2 rescue console due
erence(['']) }}"
- "Actual value of server_cpu_dict before the set_fact is {{
server_cpu_dict }}"
Note: e-mail clients can distort code, so don't copy/paste it; type the
example above instead.
Best Regards,
Strahil Nikolov
On 9 June 2020 at 19:34:07 GMT+03:00, "
Hi Vinicius,
If you don't have too many VMs and you have local storage (like a raid
controller) or NFS/iSCSI - you can also move the VMs there temporarily (live
storage migration) without any interruption.
Best Regards,
Strahil Nikolov
On 10 June 2020 at 12:14:38 GMT+03:00, Jayme wrote:
Maintenance
If you need to revert a snapshot, you need to stop the gluster volume, so you
need to follow the rule and keep the engine on a separate gluster volume.
Best Regards,
Strahil Nikolov
On 10 June 2020 at 13:21:08 GMT+03:00, Yedidyah Bar David wrote:
>On Wed, Jun 10, 2020 at 1:05 PM wr
's password:
server 195.85.215.8, stratum 1, offset 0.000291, delay 0.02888
11 Jun 05:49:15 ntpdate[13911]: adjust time server 195.85.215.8 offset 0.000291
sec
Any ideas ?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To un
activated - most probably that
one is also 1 month ahead in the DB :)
Not fixed yet.
Best Regards,
Strahil Nikolov
At least the events now show real time.
Best Regards
On 11 June 2020 at 6:00:52 GMT+03:00, Strahil Nikolov wrote:
>Hello All,
>
>I have a strange error that should be fixe
Your gluster mount option is not correct.
You need 'backup-volfile-servers=storagehost2:storagehost3' (without the volume
name, as they all serve the same volume).
Best Regards,
Strahil Nikolov
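For reference, the option goes into the storage domain's mount options (or an /etc/fstab line if mounting manually). A minimal sketch - the hostnames 'storagehost1/2/3' and the volume 'data' are placeholders, not from the thread:

```
# /etc/fstab - mount a gluster volume with volfile fallback servers
# (hostnames and volume name are hypothetical)
storagehost1:/data  /mnt/data  glusterfs  defaults,backup-volfile-servers=storagehost2:storagehost3  0 0
```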
On 13 June 2020 at 10:47:28 GMT+03:00, Oliver Leinfelder wrote:
>Hi,
>
>I hav
You can check in
https://lists.ovirt.org/archives/search?q=spice+youtube&page=1&sort=date-desc
for 'spice options hooks'. Maybe the options discussed there could help.
Best Regards,
Strahil Nikolov
On 11 June 2020 at 12:35:30 GMT+03:00, ozme...@hotmail.com wrote:
>Hi,
>
Hey Didi,
it seems that there is still a time shift in the DB - lots of stuff was
reported ahead of time.
I had to update the table jobs with the correct month and now at least I have
no more spam in the web ui.
Best Regards,
Strahil Nikolov
On Thursday, 11 June 2020 at 9:39:45 GMT
fset
0.001931 sec
server 162.159.200.123, stratum 3, offset 0.001765, delay 0.02742
server 162.159.200.1, stratum 3, offset 0.002551, delay 0.02924
14 Jun 11:40:37 ntpdate[29618]: adjust time server 162.159.200.123 offset
0.001765 sec
Best Regards,
Strahil Nikolov
any more.
Best Regards,
Strahil Nikolov
On Sunday, 14 June 2020 at 11:41:36 GMT+3, Strahil Nikolov wrote:
Hello All,
I have a problem which started after the latest patching from 4.3.9 to 4.3.10.
Symptoms so far:
1. The engine reports that the hypervisors are drifting too much
2. ETL service
Hey Joop,
are you using fully allocated qcow2 images ?
Best Regards,
Strahil Nikolov
On 16 June 2020 at 20:23:17 GMT+03:00, Joop wrote:
>On 3-6-2020 14:58, Joop wrote:
>> Hi All,
>>
>> Just had a rather new experience in that starting a VM worked but the
>>
, while gluster's default
is only 64MB.
Best Regards,
Strahil Nikolov
On 16 June 2020 at 23:22:53 GMT+03:00, Nir Soffer wrote:
>On Tue, Jun 16, 2020 at 11:01 PM Joop wrote:
>>
>> On 16-6-2020 19:44, Strahil Nikolov wrote:
>> > Hey Joop,
>> >
>>
What do you want to change ?
On 17 June 2020 at 0:36:49 GMT+03:00, Philip Brown wrote:
>oVirt 4.3: Okay, I found documentation that I cant have more than one
>"ISO" type storage domain.
>I can kinda understand that.
>
>But, I cant even edit or delete the existing one?
>Even when logged in to th
Hello Glenn,
sadly I can't answer your questions, but I think you will find this one
interesting:
http://chrisj.cloud/?q=node/8
Best Regards,
Strahil Nikolov
On 17 June 2020 at 3:00:34 GMT+03:00, Glenn Marcy wrote:
>I am hoping to try out adding RDO to oVirt after things with Ce
Are you using a proxy?
Check that all hosts can discover and log in with the same parameters you set
in oVirt.
Best Regards,
Strahil Nikolov
On 17 June 2020 at 11:32:49 GMT+03:00, Ricardo Alonso wrote:
>Trying to connect to a an iSCSI target (no chap/secrets) is failing
>with the m
Log in to the oVirt cluster and provide the output of:
gluster pool list
gluster volume list
for i in $(gluster volume list); do echo $i; echo; gluster volume info $i; echo; echo; gluster volume status $i; echo; echo; echo; done
ls -l /rhev/data-center/mnt/glusterSD/
Best Regards,
Strahil Nikolov
I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1' mounted at
all.
What is the status of all storage domains ?
Best Regards,
Strahil Nikolov
On 18 June 2020 at 21:43:44 GMT+03:00, C Williams wrote:
> Resending to deal with possible email issues
>
Check on the Hosts tab which host is your current SPM (last column in the
Admin UI).
Then open /var/log/vdsm/vdsm.log and repeat the operation.
Then provide the log from that host and the engine's log (on the HostedEngine
VM or on your standalone engine).
Best Regards,
Strahil Nikolov
On 1
Thanks Eli for your reply.
Bug is opened: https://bugzilla.redhat.com/show_bug.cgi?id=1848353
Best Regards,
Strahil Nikolov
On 16 June 2020 at 0:20:45 GMT+03:00, Eli Mesika wrote:
>Hi
>
>Looking at the code I realized that the date/time retrieved from the
>host
>is cached an
Hey C Williams,
sorry for the delay, but I couldn't get some time to check your logs. I will
try a little bit later.
Best Regards,
Strahil Nikolov
On 20 June 2020 at 2:37:22 GMT+03:00, C Williams wrote:
>Hello,
>
>Was wanting to follow up on this issue. Users are impacted
workaround was to downgrade the gluster packages on all nodes
(and reboot each node one by one) if the major version is the same; but if you
upgraded to v7.X - then you can try v7.0.
Best Regards,
Strahil Nikolov
On Saturday, 20 June 2020 at 18:48:42 GMT+3, C Williams wrote:
Hello
d the old data), but I could afford the downtime.
Also, I can say that v7.0 (but not 7.1 or anything later) also worked
without the ACL issue, but it causes some trouble in oVirt - so avoid that
unless you have no other options.
Best Regards,
Strahil Nikolov
On 21 June 2020 at 4:
?
Best Regards,
Strahil Nikolov
a 'backup') -
pointing to the new volume name.
If you observe issues, I would recommend downgrading the gluster
packages one node at a time. Then you might be able to restore your
oVirt operations.
Best Regards,
Strahil Nikolov
On 21 June 2020 at 18:01:31 GMT+03:00,
>
>On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>wrote:
>
>> I created a fresh volume (which is not an ovirt storage domain),
>set
>> the original storage domain in maintenance and detached it.
>> Then I 'cp -a ' the data from the old to the ne
On 21 June 2020 at 23:26:32 GMT+03:00, David White via Users wrote:
>I'm reading through all of the documentation at
>https://ovirt.org/documentation/, and am a bit overwhelmed with all of
>the different options for installing oVirt.
>
>My particular use case is that I'm looking for a way to m
de your gluster packages!!!
Best Regards,
Strahil Nikolov
On 22 June 2020 at 0:43:46 GMT+03:00, C Williams wrote:
>Strahil,
>
>It sounds like you used a "System Managed Volume" for the new storage
>domain,is that correct?
>
>Thank You For Your Help !
>
>O
above (6.6+) was causing complete lockdown. Also v7.0 was working,
but it's only supported in oVirt 4.4.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 7:21:15 GMT+03:00, C Williams wrote:
>Another question
>
>What version could I downgrade to safely ? I am at 6.9 .
>
>Thank Y
On 22 June 2020 at 11:06:16 GMT+03:00, David White via Users wrote:
>Thank you and Strahil for your responses.
>They were both very helpful.
>
>> I think a hosted engine installation VM wants 16GB RAM configured
>though I've built older versions with 8GB RAM.
>> For modern VMs CentOS8 x86_64 re
It's the client's browser settings, but I think it's easier to either change
the certificate to something that will be trusted, or to just import it.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 11:29:20 GMT+03:00, Anton Louw via Users wrote:
>Hi All,
>
>
You should ensure that in the storage domain tab the old storage is not
visible.
I still wonder why you didn't try to downgrade first.
Best Regards,
Strahil Nikolov
On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
>Strahil,
>
>The GLCL3 storage domain was det
's a
pain in the @$$.
I think that optimal is to have several 10Gbit NICs (at least 1 for gluster
and 1 for oVirt live migration).
Also, NVMEs can be used as lvm cache for spinning disks.
Best Regards,
Strahil Nikolov
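A rough sketch of the NVMe-as-cache idea (VG, LV, and device names are placeholders; this assumes the NVMe has already been added as a PV to the same VG):

```
# Create a cache LV on the NVMe, then attach it to the slow 'data' LV
lvcreate -L 100G -n data_cache vg0 /dev/nvme0n1
lvconvert --type cache --cachevol vg0/data_cache vg0/data
```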
On 22 June 2020 at 18:50:01 GMT+03:00, David White wrote:
>> Fo
using ?
2. You now have your old gluster volume attached to oVirt and the new volume
unused, right ?
3. Did you copy the contents of the old volume to the new one ?
Best Regards,
Strahil Nikolov
On 23 June 2020 at 4:34:19 GMT+03:00, C Williams wrote:
>Strahil,
>
>Than
rth trying .
Best Regards,
Strahil Nikolov
On 23 June 2020 at 23:42:13 GMT+03:00, C Williams wrote:
>Strahil,
>
>Thanks for getting back with me !
>
>Sounds like it is best to evacuate VM disks to another storage domain
>--
>if possible from a Gluster storage domain -- pr
Did you reinstall the node via the web UI?
Best Regards,
Strahil Nikolov
On 25 June 2020 at 3:23:15 GMT+03:00, "Vinícius Ferrão via Users" wrote:
>Hello,
>
>For reasons unknown one of my hosts is trying to mount an old storage
>point that’s been removed some time
What is the status of the host?
Usually a VM is stale because the engine cannot reach the VDSM on the
hypervisor.
Best Regards,
Strahil Nikolov
On 26 June 2020 at 22:23:15 GMT+03:00, pas...@butterflyit.com wrote:
>Currently from the ovirt web interface it is not possible to suspen
What repos do you have enabled ?
It seems you have a repo conflict.
Best Regards,
Strahil Nikolov
On 26 June 2020 at 18:30:31 GMT+03:00, eev...@digitaldatatechs.com wrote:
>I do not have a self hosted engine and did yum update whech update
>these files:
>Updated:
> microcode_
Most probably the host's ICMP echo requests to the gateway get lost. This
leads to enough penalty that your engine is moved away from the host.
Which 'penalty' did you disable to stabilize your environment ?
Best Regards,
Strahil Nikolov
On 27 June 2020 at 18:19:58
As you will migrate from block-based storage to file-based storage, I think
that you should use the backup & restore procedure.
Best Regards,
Strahil Nikolov
On 25 June 2020 at 7:31:55 GMT+03:00, Erez Zarum wrote:
>I was looking for a “complete” best practice to migrate a self-hosted
Can you set one of the hypervisors into maintenance and use the "reinstall"
option from the UI?
Best Regards,
Strahil Nikolov
On 25 June 2020 at 13:24:26 GMT+03:00, Erez Zarum wrote:
>I have a Self-hosted Engine running on iSCSI as well as couple of
>Storage domains using iS
ns for "optimize for virt" are located at
/var/lib/glusterd/groups/virt on each gluster node.
Best Regards,
Strahil Nikolov
On Sunday, 28 June 2020 at 22:13:09 GMT+3, jury cat wrote:
Hello all,
I am using Ovirt 4.3.10 on Centos 7.8 with glusterfs 6.9 .
My Gluster setup
isable sharding once enabled -> stays enabled!
Best Regards,
Strahil Nikolov
On 29 June 2020 at 1:33:20 GMT+03:00, Jayme wrote:
>I’ve tried various methods to improve gluster performance on similar
>hardware and never had much luck. Small file workloads were
>particular
On 29 June 2020 at 4:14:33 GMT+03:00, jury cat wrote:
>If i destroy the brick, i might upgrade to ovirt 4.4 and Centos 8.2.
>Do you think upgrade to ovirt 4.4 with glusterfs improves performance
>or i am better with NFS ?
Actually only you can find out as we cannot know the workload of your V
Have you checked the qemu log on the host that was running the VMs ?
What was done recently ?
Any reason SELINUX is disabled ?
Best Regards,
Strahil Nikolov
On 30 June 2020 at 18:08:09 GMT+03:00, Antoine Nguyen wrote:
>Hello,
>
>Thanks for your interest and time.
>Here is th
I would recommend trying to logrotate it.
I had a similar issue with a corrupted logrotate state file, which led to
vdsm.log growing to 20GB.
You can also use 'truncate -s 0 your.log' to wipe a log without removing it.
Best Regards,
Strahil Nikolov
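As a sketch, a logrotate drop-in along these lines keeps vdsm.log bounded (the file name and limits are illustrative, not the stock vdsm policy):

```
# /etc/logrotate.d/vdsm-custom (hypothetical drop-in)
/var/log/vdsm/vdsm.log {
    size 100M        # rotate once the log exceeds 100MB
    rotate 5         # keep five rotated copies
    compress
    missingok
    copytruncate     # truncate in place so vdsm keeps its open file handle
}
```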
On 2 July 2020 at 8:00:27 GMT+03:00, Anton Louw
Usually the vdsm-network-init and vdsm-network services take care of the
network configuration.
The easiest way to resolve is to set the host in maintenance and then use
'reinstall' from the UI.
Best Regards,
Strahil Nikolov
On 2 July 2020 at 14:50:14 GMT+03:00, Erez Zarum wrote:
Yes, ovirt-ha-broker and ovirt-ha-agent take care of keeping the HostedEngine
up and running and, in case something goes bad, of migrating it away.
Best Regards,
Strahil Nikolov
On 3 July 2020 at 7:53:14 GMT+03:00, Anton Louw via Users wrote:
>Hi Everybody,
>
>Thanks for all the respon
On 3 July 2020 at 11:30:58 GMT+03:00, Andrei Verovski wrote:
>Hi !
>
>I have 2-node oVirt 4.3 installation, with engine running as KVM guest
>on SuSE file server (not hosted engine).
>Nodes are manually installed on CentOS 7.x (further referred as old
>node #1 and #2).
>
>I’m going to add 1 add
n
LVM so you can make a restore point via:
power off the HE, snapshot the volume, power on the HE, do any kind of
change on the HE (like an upgrade).
Best Regards,
Strahil Nikolov
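A sketch of that restore-point flow, assuming the HE disk lives on an LV (the VG/LV names and snapshot size are placeholders):

```
hosted-engine --vm-shutdown                   # power off the HostedEngine VM
lvcreate -s -n he_restore -L 20G vg0/he_lv    # snapshot the HE logical volume
hosted-engine --vm-start                      # power it back on and apply changes
# To roll back later: stop the VM, then merge the snapshot back:
#   lvconvert --merge vg0/he_restore
```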
uld be fine.
The following is for FILE-based storage (which is not your case):
You can check the relevant disk elements in the VM's XML. Once you
mount the storage and try to power up the VM, libvirt will tell you (in the
log) where exactly it expects a symlink to exist. It points to the path
of the storage for t
the same on
Windows and Linux :D )
Best Regards,
Strahil Nikolov
On 9 July 2020 at 18:32:39 GMT+03:00, Michael Watters wrote:
>After installing updates on our ovirt-engine running CentOS 7.8 the
>administration portal will no longer load. The engine.log shows an
>error as follows.
>
I'm left with the impression that we are talking about SSO from oVirt.
Yet, the author has to clarify.
Best Regards,
Strahil Nikolov
On 12 July 2020 at 17:39:31 GMT+03:00, Wesley Stewart wrote:
>Are you asking for troubleshooting on getting windows RDP working in a
>windo
What is the contents of your dnf.conf ?
Best Regards,
Strahil Nikolov
On 13 July 2020 at 17:26:42 GMT+03:00, Markus Schaufler wrote:
>Hi all,
>in our environment, CentOS7 and RHEL7 for the most part (Ovirt 4.3 and
>RHV 4.3), we are able to set a proxy directly in /etc/yum.conf (o
Yeah,
but the idea is for the RH documentation to catch up with oVirt's, not the
opposite.
Best Regards,
Strahil Nikolov
On 14 July 2020 at 0:26:11 GMT+03:00, Jayme wrote:
>Personally I find the rhev documentation much more complete:
>https://access.redhat.com/document
Hi,
some of the components in oVirt 4.3 rely on SELINUX being enabled.
Most probably it is the same in 4.4 , so please try with SELINUX in enforcing
mode.
Best Regards,
Strahil Nikolov
On 14 July 2020 at 3:43:35 GMT+03:00, Andy via Users wrote:
>I just tried another fresh install with oV
t usable 4.00 MiB'. Select 99G or
'100%PVS'
Best Regards,
Strahil Nikolov
Hm... Clearly it can't get the necessary info.
Have you tried a full cleanup via 'ovirt-hosted-engine-cleanup' (the name
is based on my vague memories from 2017) and then wiping all data on the
storage domain?
Best Regards,
Strahil Nikolov
On 14 July 2020 at 23:13:33
interpreter' must be indented to the
right with 2 spaces (no tabs allowed).
Best Regards,
Strahil Nikolov
On 15 July 2020 at 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank you Strahil. I think I edited the oVirt Node Cockpit
>Hyperconverged Wizard Gluster Deployment Ansible pla
Also, check the LV size on the system, as it seems that, based on your
previous outputs, the PV names do not match.
You might now have a very large HostedEngine LV, which will be a waste of
space.
Best Regards,
Strahil Nikolov
On 15 July 2020 at 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Th
f a single shard, but you are fully
"supported" from gluster perspective.
Also, all hosts can have an external storage like your NAS.
Best Regards,
Strahil Nikolov
On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
>arg. when I said "add 2 more nodes that arent pa
I guess your only option is to edit
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and
replace 'package' with 'dnf' (keep the indentation 2 "spaces" deeper than
'- name' -> just where "package" starts).
Best Regards,
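A minimal sketch of that edit, demonstrated on a throwaway copy (the task below is illustrative, not the actual gluster.infra file; point TASKFILE at the real /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml when applying):

```shell
# Demo on a temporary file; substitute the real main.yml path on your node
TASKFILE=$(mktemp)
cat > "$TASKFILE" <<'EOF'
- name: Install packages
  package:
    name: vdo
    state: present
EOF
# Swap the module key, keeping its indentation intact
sed -i 's/^\( *\)package:/\1dnf:/' "$TASKFILE"
grep 'dnf:' "$TASKFILE"
rm -f "$TASKFILE"
```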
ve only 2 replication set hosts, and multiple
>(arbitrariliy many) arbiter nodes?
>
You need 2 copies of the data, but 'replica 3' is optimal.
>- Original Message -
>From: "Strahil Nikolov"
>To: "users" , "Philip Brown"
>Sent: We
aintenance -> unmounted on the host.
Best Regards,
Strahil Nikolov
On 16 July 2020 at 1:21:33 GMT+03:00, Philip Brown wrote:
>Awesome thats good news.
>
>So... does that happen automatically?
>
>ie: install ovirt "node" image, then tell ovirt hosted engine "go add
>th
What do you see in the engine's logs ?
Best Regards,
Strahil Nikolov
On 16 July 2020 at 13:24:03 GMT+03:00, lu.alfo...@almaviva.it wrote:
>i attach the hosts info :
>
>Software
>OS Version:
>RHEL - 7 - 8.2003.0.el7.centos
>OS Description:
>CentOS Linux 7 (Core)
>K
Have you tried replacing 'package' with 'dnf' in
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (somewhere
around line 33)?
Best Regards,
Strahil Nikolov
On 16 July 2020 at 16:30:04 GMT+03:00, dominique.desche...@gcgenicom.com wrote:
>I also hav
Can you share your /etc/hosts?
As far as I remember there was an entry like:
127.0.1.2 hostname
So you have to comment it out.
Best Regards,
Strahil Nikolov
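A sketch of that change, shown on a throwaway copy ('myhost' is a placeholder; on the real VM edit /etc/hosts itself, keeping a backup):

```shell
# Work on a temporary copy; on the VM this would be /etc/hosts
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.2 myhost\n' > "$HOSTS"
# Comment out the loopback alias for the hostname
sed -i 's/^127\.0\.1\.2/#&/' "$HOSTS"
cat "$HOSTS"
rm -f "$HOSTS"
```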
On 16 July 2020 at 16:53:36 GMT+03:00, Florian Schmid via Users wrote:
>Hi,
>
>I have a problem with Ubuntu 20.04 VM repo
What version of CentOS 8 are you using -> Stream or regular, and which
version?
Best Regards,
Strahil Nikolov
On 16 July 2020 at 21:07:57 GMT+03:00, "Dominique Deschênes" wrote:
>
>
>HI,
>Thank you for your answers
>
>I tried to replace the "package" with "
What is the output of:
host sia-svr-ct02
nslookup sia-svr-ct02
Best Regards,
Strahil Nikolov
On 17 July 2020 at 10:46:08 GMT+03:00, lu.alfo...@almaviva.it wrote:
>2020-07-15 11:41:58,968+02 ERROR
>[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
>(EE-ManagedThre
Hm...
but then setting that variable to python3 should work - yet based on the list
reports, it doesn't.
Best Regards,
Strahil Nikolov
On 17 July 2020 at 12:35:52 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 11:25 AM Gianluca Cecchi
>
>wrote:
>
>>
It's definitely not a resolve issue.
Have you made changes to sshd_config on sia-svr-ct02?
Is root login allowed?
Best Regards,
Strahil Nikolov
On 17 July 2020 at 13:58:09 GMT+03:00, lu.alfo...@almaviva.it wrote:
>This is the output from the engine:
>
>[root@dacs-ovirt ~]# hos
Can you provide the target's facts in the bug report ?
Best Regards,
Strahil Nikolov
On 17 July 2020 at 14:48:39 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <
>dominique.desche...@gcgenicom.com> wrote:
>
>> Hi,
>>
>
VdsRecoveryTimeoutInMintues
VdsRefreshRate
vdsTimeout
Try them first in a test system before deploying in production.
Best Regards,
Strahil Nikolov
On Saturday, 18 July 2020 at 10:40:14 GMT+3, lu.alfo...@almaviva.it wrote:
Hello,
yes, root login is allowed.
No, I didn't make changes t
use only 3 servers for Gluster, while many more systems serve as oVirt
nodes (CPU & RAM) to host VMs.
In case of a 4-node setup - 3 hosts have the gluster data and the 4th is not
part of the gluster, just hosting VMs.
Best Regards,
Strahil Nikolov
On 19 July 2020 at 15:25:10 GMT+03:00, David
There is a bug already opened for that behaviour:
https://bugzilla.redhat.com/show_bug.cgi?id=1858234
Best Regards,
Strahil Nikolov
On 19 July 2020 at 13:26:01 GMT+03:00, erin.s...@bookit.com wrote:
>Hi Guys we attempted to deploy a new ovirt cluster two weeks ago. 4.4.1
>and 4.4.0 O
es (servers only in oVirt, but not part of
Gluster) - you can allow the engine to power hosts off and on on demand,
so you can conserve power and cooling while keeping the count of oVirt nodes
in the healthy zone.
Best Regards,
Strahil Nikolov
Just copy/paste it in a browser.
On 20 July 2020 at 17:00:01 GMT+03:00, lu.alfo...@almaviva.it wrote:
>Hello,
>
>the link is not available
Do you have NICs that support iSCSI offload? I guess you can use hardware
offloading.
What MTU size?
Latency is usually the killer of any performance - what is your round-trip
time?
Best Regards,
Strahil Nikolov
On 21 July 2020 at 2:37:10 GMT+03:00, Philip Brown wrote:
>AH! my apologies.
You need to provide more details: storage type, any errors indicated,
whether something changed recently, etc.
Best Regards,
Strahil Nikolov
On 22 July 2020 at 13:15:06 GMT+03:00, Tarun Kushwaha wrote:
>My ovirt hosted engine storage got in locked state.
>Now I am unable to perfo
There is a bug whose fix is pending release.
Best Regards,
Strahil Nikolov
On 22 July 2020 at 17:54:11 GMT+03:00, Vijay Sachdeva via Users wrote:
>Hello Everyone,
>
>
>
>Waiting for host to be up task is stuck for hours and when checked
>engine log found this below:
>
>
Hi Miguel,
Do all hosts support the CPU type of the VM ?
Best Regards,
Strahil Nikolov
On 23 July 2020 at 0:58:50 GMT+03:00, miguel.gar...@toshibagcs.com wrote:
>I had added a couple new hosts in my cluster (hyp16,hyp17) both
>followed the same procedure but at the moment to start vms
You need to keep the ssh root access from the engine, so you will need a
'Match' stanza for the engine.
Of course testing is very important, but in case you have no test setup - you
can set a node in maintenance and experiment a little bit.
Best Regards,
Strahil Nikolov
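A sketch of such a stanza in /etc/ssh/sshd_config (the engine's address is a placeholder; Match blocks must come after the global settings):

```
# Global default: key-only root login
PermitRootLogin prohibit-password

# Exception for the engine host (address is hypothetical)
Match Address 192.0.2.10
    PermitRootLogin yes
```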
On 2
It's not yet fixed. Either check the other threads for the bug ID and fix it
manually, or wait a while for the fix to be released.
Best Regards,
Strahil Nikolov
On 22 July 2020 at 22:43:22 GMT+03:00, Vijay Sachdeva via Users wrote:
>Which version to use for Self-hosted de
pushed against a typical DB
5. Measure performance during point 4 (for example time of execution)
6. Start over
Anything else is a waste of time.
Best Regards,
Strahil Nikolov
On 24 July 2020 at 13:26:18 GMT+03:00, Stefan Hajnoczi wrote:
>On Thu, Jul 23, 2020 at 07:25:14AM -0700, Philip Br
For the subscription there is a way around -> just subscribe at
developers.redhat.com
Best Regards,
Strahil Nikolov
On 24 July 2020 at 17:22:17 GMT+03:00, Dmitry Kharlamov wrote:
>If it does not make it difficult, please tell me at least the general
>direction in which you need to l
Hi Jiri,
you are the second person who mentions it. Can you open a bug at
bugzilla.redhat.com about that ?
Best Regards,
Strahil Nikolov
On 24 July 2020 at 16:30:02 GMT+03:00, "Jiří Sléžka" wrote:
>On 7/24/20 11:36 AM, Jiří Sléžka wrote:
>> On 7/24/20 10:56 A
You can suppress them by following
https://access.redhat.com/solutions/3556491 :
if $msg contains "message to suppress" then stop
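As a sketch, that filter would live in an rsyslog drop-in (the file name and match text are placeholders), followed by a restart of rsyslog:

```
# /etc/rsyslog.d/10-suppress.conf - drop matching messages early
if $msg contains "message to suppress" then stop
```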
On 24 July 2020 at 18:46:37 GMT+03:00, Dmitry Kharlamov wrote:
>Yes! Happened! Thank you so much!
>Didn't know about this possibility.
>
>Oops, the solution is ve
Have you tried ovn-trace to detect your issues ?
I think the following blog is quite good:
https://www.google.com/amp/s/blog.russellbryant.net/2016/11/11/ovn-logical-flows-and-ovn-trace/amp/
Best Regards,
Strahil Nikolov
On 27 July 2020 at 15:43:48 GMT+03:00, Konstantinos B wrote:
>Hi
I've run KVM VMs on top of an oVirt guest. Are you sure that nested
virtualization is your problem?
Best Regards,
Strahil Nikolov
On 29 July 2020 at 23:33:48 GMT+03:00, tho...@hoberg.net wrote:
>I tried using nested virtualization, too, a couple of weeks ago.
>
>I was usin
I have been using 7.6 (and recently migrated to 7.7) on my oVirt 4.3.10
without any issues so far.
Are you sure that it's not oVirt 4.4 specific?
Best Regards,
Strahil Nikolov
On 30 July 2020 at 15:03:17 GMT+03:00, shadow emy wrote:
>Good that is ok for you now.
>As Gianlu
Damn, those thick fingers...
On 30 July 2020 at 23:12:00 GMT+03:00, Strahil Nikolov wrote:
>I have been using 7.6 (and rewntly migrated to 7.7) on my oVirt 4.3.10
> withkut any issues so far.
>
>Are you sure that it's not oVirt 4.4 specific ?
>
>Best Regards,
>
andro,
can you assist with this one ?
Best Regards,
Strahil Nikolov
On 31 July 2020 at 10:01:17 GMT+03:00, Alex K wrote:
>Has anyone been able to import a storage domain and still have access
>to VM
>snapshots or this might be a missing feature/bug that needs to be
>reported?
>
hot potato'.
Yet, I agree that QA tests should have caught it in the first place, but
here comes the community part - to assist the devs with finding the test
cases we all need.
Best Regards,
Strahil Nikolov
On 1 August 2020 at 12:51:37 GMT+03:00, tho...@hoberg.net wrote:
>Unfort
disks remain, while the rest are merged into a single file.
Restoring a snapshot is simplest - everything after that snapshot is deleted,
and the VM will use the snapshot disk until you delete that snapshot (which
merges the base disk with the snapshot disk).
Best Regards,
Strahil Nikolov
On 2 August 2
Are you using oVirt Node?
If you use a custom setup, you need to have the same partitions/LVs that are
used by default.
Can you give a screenshot of the installer?
Best Regards,
Strahil Nikolov
On 3 August 2020 at 16:28:02 GMT+03:00, Gianluca Cecchi wrote:
>On Mon, Aug 3, 2
oVirt should merge the disks and release any disk space used.
The best way is to find the VM disks, identify the disk chain (via
qemu-img), and then find the size of the base disk plus all the snapshots.
Best Regards,
Strahil Nikolov
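A sketch of that inspection; the path is a placeholder for a real volume under /rhev/data-center:

```
# Walk the backing chain of a VM disk and report each layer's size
qemu-img info --backing-chain /rhev/data-center/mnt/<server>/<sd-uuid>/images/<img-uuid>/<vol-uuid>
```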
On 4 August 2020 at 16:48:23 GMT+03:00, jorgevisent