Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users wrote:
Hi Strahil.
I’m not using barrier options on mount. It’s the default
if it frees most of their cache. Sadly, without cache, performance will drop,
but you can't assign unlimited memory :D
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 10:57:00 GMT+2, Stefan Seifried wrote:
Hi,
I'm quite new to oVirt, so my apologies if I'm asking
As you use proto=TCP, it should not cause the behaviour you are observing.
I was wondering if the VM is rebooted for some reason (maybe HA) during
intensive I/O.
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users wrote:
Hi
of=/dev/null bs=4M status=progress
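For reference, a full read test along those lines might look like the following
(the source device path is only an example; point it at the VM's actual disk):
# dd if=/dev/vda of=/dev/null bs=4M status=progress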
Does it give errors?
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 20:06:42 GMT+2, supo...@logicworks.pt wrote:
No heals pending.
There are some VMs whose disk I can move, but for some other VMs I cannot move
the disk.
It's a simple
Damn...
You are using EFI boot. Does this happen only to EFI machines?
Did you notice whether only EL 8 is affected?
Best Regards,
Strahil Nikolov
On Sunday, November 29, 2020 at 19:36:09 GMT+2, Vinícius Ferrão wrote:
Yes!
I have a live VM right now that will be dead on a reboot
Could it be faulty RAM?
Do you use ECC RAM?
Best Regards,
Strahil Nikolov
On Tuesday, December 1, 2020 at 06:17:10 GMT+2, Vinícius Ferrão via Users wrote:
Hi again,
I had to shut down everything because of a power outage in the office. When
trying to get the infra up again
e and move
the disk to that new storage.
Once you move all the VMs' disks, you can get rid of the old Gluster volume and
reuse the space.
P.S.: Sadly I didn't have the time to look at your logs.
Best Regards,
Strahil Nikolov
On Monday, November 30, 2020 at 01:22:46 GMT+2, wrote:
smart and it doesn't expect any foreign data to reside there.
Of course, I could survive the downtime.
Best Regards,
Strahil Nikolov
On Tuesday, December 1, 2020 at 19:40:28 GMT+2, supo...@logicworks.pt wrote:
Thanks
Did you use the cp command to copy data between gluster volumes?
Hey Dominik,
it was mentioned several times before why teaming is "better" than bonding ;)
Best Regards,
Strahil Nikolov
On Wednesday, December 16, 2020 at 16:59:20 GMT+2, Dominik Holler wrote:
On Fri, Dec 11, 2020 at 1:19 AM Carlos C wrote:
> Hi folks,
>
>
Sadly no. I have used it on test clusters with KVM VMs.
If you manage to use oVirt as a nested setup, fencing works quite well with
oVirt.
Best Regards,
Strahil Nikolov
On Thursday, December 17, 2020 at 11:16:47 GMT+2, Alex K wrote:
Hi Strahil,
Do you have a working setup
/testfile
Best Regards,
Strahil Nikolov
On Thursday, December 17, 2020 at 11:45:45 GMT+2, Ritesh Chikatwar wrote:
Hello,
Which version of oVirt are you using?
Can you check whether the gluster service is running, because I see this error:
Could not connect to storageServer.
Also,
did you mistype in the e-mail, or did you really put "/"?
For Gluster, there should be a ":" character between the Gluster volume server
and the volume:
":" and ":/" are both valid ways to define the volume.
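For example, both of these forms should be accepted as the storage path (the
server and volume names here are placeholders):
node1.example.com:data
node1.example.com:/data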
Best Regards,
Strahil Nikolov
On Wednesday, December 16, 2020 at 02:37:45 GMT+2, Ariez Ahit
each node can reach the
other nodes in the pool
Best Regards,
Strahil Nikolov
At 22:38 on 21.12.2020 (Mon), Charles Lam wrote:
> Still not able to deploy Gluster on oVirt Node Hyperconverged - same
> error; upgraded to v4.4.4 and "kvdo not installed"
>
> T
Hi Latcho,
would you mind opening a bug on bugzilla.redhat.com next year?
Best Regards,
Strahil Nikolov
At 11:09 on 24.12.2020 (Thu), Latchezar Filtchev wrote:
>
> Hello ,
>
> I think I resolved this issue. It is the dig response when resolving the
> domain name!
>
stop complaining and then the agent will kick
in.
Best Regards,
Strahil Nikolov
Are you sure you have installed them with HE support?
Best Regards,
Strahil Nikolov
At 19:06 +0200 on 23.12.2020 (Wed), Gilboa Davara wrote:
> On Wed, Dec 23, 2020 at 6:28 PM Gilboa Davara
> wrote:
> > On Wed, Dec 23, 2020 at 6:20 PM Gilboa Davara
> > wrote:
> > > On Tue, Dec 22, 2020 at 11:45 AM
create the
> necessary /etc/ovirt-hosted-engine/hosted-engine.conf configuration
> on the new hosts, preventing ovirt-ha-* services from starting.
>
So, set the host into maintenance and
then select Installation -> Reinstall -> Hosted Engine -> Deploy.
Best Regards,
Strahil Nikolov
"###"
; done
Best Regards,
Strahil Nikolov
On Friday, December 18, 2020 at 22:44:48 GMT+2, Charles Lam wrote:
Dear friends,
Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by
disabling multipath on the n
Best Regards,
Strahil Nikolov
On Monday, December 21, 2020 at 17:54:42 GMT+2, Charles Lam wrote:
Thanks so very much Strahil for your continued assistance!
[root@fmov1n1 conf.d]# gluster pool list
UUID Hostname State
16e921fb-99d3-4a2e-81e6
Fence_xvm requires a key to be deployed on both the host and the VMs in order to
succeed. What happens when you use the CLI on any of the VMs?
Also, the VMs require an open TCP port to receive the necessary output of each
request.
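A quick check from inside a VM (assuming the fence-virt packages are installed
and the key sits at the default /etc/cluster/fence_xvm.key) would be:
# fence_xvm -o list
If the key and the multicast traffic are set up correctly, this prints the VMs
known to the host's fence_virtd.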
Best Regards,
Strahil Nikolov
On Monday, December 14
You can use OEL or any EL-based clone.
Best Regards,
Strahil Nikolov
On Tuesday, December 22, 2020 at 08:46:54 GMT+2, Jason Keltz wrote:
On 12/21/2020 8:22 AM, Sandro Bonazzola wrote:
>
oVirt 4.4.4 is now generally available
The oVirt project is excited to annou
I guess you can use a firewalld direct rule to allow any traffic to the
nested VM.
As far as I know, nested oVirt does not work nicely, and it's easier
to test with a single VM with KVM.
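An untested sketch of such a direct rule, which broadly allows bridged traffic
to be forwarded (tighten it to your needs):
# firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -m physdev --physdev-is-bridged -j ACCEPT
# firewall-cmd --reload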
Best Regards,
Strahil Nikolov
At 01:07 +0100 on 23.12.2020 (Wed), wodel youchi wrote:
> Hi,
>
> We have an HCI
side, the community of oVirt is quite active and willing to
assist (including Red Hat engineers), and I have not seen a single issue go
unsolved.
Best Regards,
Strahil Nikolov
On Thursday, December 10, 2020 at 22:03:45 GMT+2, tho...@hoberg.net wrote:
I came to oVirt thinking that it was
On Thursday, December 17, 2020 at 22:32:14 GMT+2, Alex K wrote:
On Thu, Dec 17, 2020, 14:43 Strahil Nikolov wrote:
> Sadly no. I have used it on test Clusters with KVM VMs.
You mean clusters managed with pacemaker?
Yes, with pacemaker.
>
> If you manage to
h
VMs to be in the same VLAN.
For example:
VM1 - 192.168.0.1/24
VM2 - 192.168.0.2/24
Floating IP - 192.168.0.200/24
Best Regards,
Strahil Nikolov
Erm... because it tries to do one thing: virtualization.
If you need a DHCP server, deploy a VM and set up the DHCP server there.
Best Regards,
Strahil Nikolov
On Tuesday, November 10, 2020 at 08:09:57 GMT+2, yam yam wrote:
Thanks for the reply!
Do you know why oVirt doesn't support
Usually this happens when the engine can no longer reach the host and, of course,
there is no fencing mechanism to confirm that the node was rebooted.
It's interesting that all VMs are in such a state... I would restart the engine
itself.
Best Regards,
Strahil Nikolov
On Sunday, November 8
t"
path: "/etc/profile.d/proxy.sh"
delegate_to: localhost
And the result is:
[root@engine ~]# echo $http_proxy
http://myproxy.localdomain:3399
[root@engine ~]# echo $https_proxy
https://myproxy.localdomain:3340
[root@engine ~]# echo $no_proxy
localhost
Best Regards,
Strahil Nikolov
hen installing software would
look like (I haven't tested it, though):
- name: Deploy_proxy
  lineinfile:
    line: "proxy=http://myproxy.localdomain:3399"
    state: present
    path: /etc/dnf/dnf.conf
  delegate_to: localhost
Best Regards,
Strahil Nikolov
/roles/ovirt.hosted_engine_setup
I guess you can run a grep in that dir (use -R for recursive) to find the task
name that failed (it's in your previous e-mail), and then I guess you can put it
somewhere before that. A sketch follows below.
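Something like this should do it (the quoted task name is a placeholder; use the
exact name from the failure message):
# grep -R "Name of the failed task" /usr/share/ansible/roles/ovirt.hosted_engine_setup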
Best Regards,
Strahil Nikolov
On Tuesday, November 10, 2020 at 17:45:48
ite common in Enterprise environments.
Best Regards,
Strahil Nikolov
On Tuesday, November 10, 2020 at 18:57:03 GMT+2, Strahil Nikolov wrote:
Simeon's proposal will be valid only for the deployment of the package - and it
should allow the deployment to pass.
The example from the prev
UI after the upload and note the UUID of the disk (a long string); then you
can find it in Gluster and use sha256sum or md5sum to verify the upload.
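For example, assuming the bricks live under /gluster_bricks (the UUID below is a
placeholder):
# find /gluster_bricks -name '<disk-uuid>*' -type f
# sha256sum <path printed by find>
If the checksum matches the source file, the upload itself is fine.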
P.S.: about that error, I would look first in the DB.
Best Regards,
Strahil Nikolov
On Tuesday, November 10, 2020 at 12:20:50 GMT+2,
As oVirt 4.4.x is using EL8, you just need to install it with CentOS 8 and
check if everything goes well. I know that some old hardware was deprecated,
but the elrepo repository helps a lot in such cases.
Best Regards,
Strahil Nikolov
On Thursday, November 12, 2020 at 12:14:56 GMT+2,
n
What happens when you change the default, but also add a "Match" directive
overriding that option for the engine?
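A minimal sketch in /etc/ssh/sshd_config (the address and the option are only
examples):
Match Address 192.168.1.100
    PasswordAuthentication yes
Settings under "Match" apply only to connections matching the criteria, so the
engine keeps the old behaviour while the stricter default applies to everyone
else.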
Best Regards,
Strahil Nikolov
On Thursday, November 12, 2020 at 10:01:12 GMT+2, Angus Clarke wrote:
Hello
Sharing for anyone who needs it, this w
Most probably a gluster bug.
Best Regards,
Strahil Nikolov
On Sunday, November 15, 2020 at 22:31:24 GMT+2, supo...@logicworks.pt wrote:
So, you think it's really a bug?
De: "Nir Soffer"
Para: supo...@logicworks.pt
Cc: "users&
-9f8f-e6ae68794051
/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b
Best Regards,
Strahil Nikolov
On Saturday, November 14, 2020 at 16:46:37 GMT+2, supo
Can you reach the Engine system via ssh (as root)?
Is it a HostedEngine or a separate Engine?
Best Regards,
Strahil Nikolov
On Monday, November 16, 2020 at 11:32:39 GMT+2, ilird...@live.com wrote:
Hello!
I'm new to oVirt, one of my customers has a 2 node (cluster) of oVirt
Can you try a live migration?
I had a similar case, and the live migration somehow triggered a fix.
Best Regards,
Strahil Nikolov
On Friday, November 20, 2020 at 21:04:13 GMT+2, wrote:
Hi,
I was trying to move a disk between gluster storage domains without success.
# gluster
I can recommend that you:
- enable the debug log level of gluster's bricks (see the sketch below)
- try to reproduce the issue
I had a similar issue with gluster v6.6 and above.
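For the brick log level, something like this should work; the volume name
'data' is an example, and DEBUG is very chatty, so lower it back afterwards:
# gluster volume set data diagnostics.brick-log-level DEBUG
# gluster volume set data diagnostics.brick-log-level INFO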
Best Regards,
Strahil Nikolov
On Friday, November 20, 2020 at 23:35:14 GMT+2, wrote:
I tried to move the VM disk with the VM up
I also
What are you trying to achieve?
Best Regards,
Strahil Nikolov
On Wednesday, November 18, 2020 at 13:40:18 GMT+2, ernestclydeac...@gmail.com wrote:
Hello Alex,
How do I prepare the gluster volume, as the gluster volume is also hosted on the
1st bare-metal host? Can you elaborate on this setup?
I would recommend you to check this one:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/chap-event_notifications
Best Regards,
Strahil Nikolov
On Tuesday, November 17, 2020 at 22:00:08 GMT+2, Chris Adams wrote:
I just noticed
Hi Bradley,
usually this is not supposed to happen.
I can propose a quick fix:
- Set the node into maintenance (via the UI), and then from the "Installation"
drop-down menu (upper right) click "Reinstall". There is a tab for the
HostedEngine, and you have to mark it as deployed/installed.
If it
Once the VM fails, you can find the whole XML file in the host's vdsm log.
Can you share that?
Best Regards,
Strahil Nikolov
On Wednesday, November 18, 2020 at 11:31:55 GMT+2, tiziano.paci...@par-tec.it wrote:
Hi,
I installed a new server, using the oVirt ISO, with the target
HostedEngine-RECOVERY]# cat vdsm-ovirtmgmt.xml
vdsm-ovirtmgmt
8ded486e-e681-4754-af4b-5737c2b05405
And define that network via 'virsh net-define'.
If you manage to power up the HostedEngine, you will be able to revert your
changes.
Good luck, and better start making backups :D
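The sequence would roughly be as follows (on oVirt hosts, virsh usually needs
the vdsm SASL credentials, so plain virsh may prompt for them):
# virsh net-define vdsm-ovirtmgmt.xml
# virsh net-start vdsm-ovirtmgmt
# virsh net-autostart vdsm-ovirtmgmt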
Best Regards,
Actually, you can import the NFS domain, but all VM disks should be on that NFS
and the VM must be stopped. Also, don't forget the template (if you used a
template) - it also must be on the NFS.
Then importing is a piece of cake.
Best Regards,
Strahil Nikolov
On Monday, November 16, 2020
the first change and
ovirt-engine-dwhd + ovirt-engine-reportsd for the second one.
Best Regards,
Strahil Nikolov
On Monday, November 16, 2020 at 15:10:22 GMT+2, Nicolás wrote:
Hi,
We're running oVirt 4.3.8, and even though this is a problem we've had for
a long time (I
Hi Rob,
I would check the vdsm logs on the host where the HostedEngine VM was already
running (the source).
Also, you can check the logs on the HE itself.
In the UI, check the cluster CPU settings and your hosts. Is it possible that
one node has a newer CPU than the other?
Best Regards,
Strahil
If this is oVirt 4.4, then open a bug on bugzilla.redhat.com.
Best Regards,
Strahil Nikolov
On Saturday, November 7, 2020 at 11:42:48 GMT+2, Rob Verduijn wrote:
Hi,
Found it,
The hardware is identical (3x HP MicroServer G10, identical disks, CPU and RAM
procedure for HE
10. Add the other nodes from the oVirt cluster
11. Set the EL7-based hosts to maintenance and power them off
12. Repeat steps 4-8 for the second host (step 11)
...
In the end, you can bring the Cluster Level up to 4.4 and enjoy...
Yet, this is just theory :)
Best Regards,
Strahil Nikolov
Keep
atus - they should be
running.
The engine's logs might also help.
Best Regards,
Strahil Nikolov
On Monday, November 9, 2020 at 11:53:54 GMT+2, hjadavall...@ukaachen.de wrote:
Dear Strahil Nikolov,
Thank you for the quick response! Though I restarted the engine twice from
hosted en
You can use vdsm hooks to do almost everything.
About the Floating IP, I keep it for VMs in the same VLAN.
Best Regards,
Strahil Nikolov
On Monday, November 9, 2020 at 10:35:47 GMT+2, yam yam wrote:
Hello everyone!
I'm wondering if there is any feature like applying routing
, you have the option to create a new setup and migrate the VMs one
by one (when downtime allows) from the 4.3 setup to the 4.4 setup.
Best Regards,
Strahil Nikolov
.
Also, consider providing oVirt version, Gluster version and some details about
your setup - otherwise helping you is almost impossible.
Best Regards,
Strahil Nikolov
On Saturday, November 21, 2020 at 18:16:13 GMT+2, supo...@logicworks.pt wrote:
With an older gluster version this does
Have you thought of using a vdsm hook that executes your logic once a VM is
removed? This way users won't be able to alter the DNS records
themselves, which is far more secure and reliable.
Best Regards,
Strahil Nikolov
On Saturday, November 21, 2020 at 10:26:45 GMT+2, Nathanaël
No, but keep an eye on your "/var/log", as debug provides a lot of info.
Usually, when you get a failure moving the disk, you can disable debug and
check the logs.
Best Regards,
Strahil Nikolov
On Sunday, November 22, 2020 at 21:12:26 GMT+2, wrote:
Do I need to resta
Do the files really exist?
Any heals pending?
Best Regards,
Strahil Nikolov
On Sunday, November 15, 2020 at 16:24:48 GMT+2, supo...@logicworks.pt wrote:
Here it is:
# sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw -O raw
/rhev/data-center/mnt/glusterSD/node1
It clearly indicates the problem - enable SELinux.
Best Regards,
Strahil Nikolov
On Friday, November 13, 2020 at 17:08:57 GMT+2, wrote:
Hello,
I tried to use the Gluster deployment. I got this error message:
failed: [llrovirttest02.in2p3.fr] (item={u'path': u'/gluster_bricks/engine
you can use 5 nodes like this (see the command sketch below):
nodeA - data, volume1
nodeB - data, volume1
nodeC - arbiter, volume1; arbiter, volume2
nodeD - data, volume2
nodeE - data, volume2
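From the command line, that layout would look roughly like this (the brick
paths are examples; the arbiter brick is always the last one listed):
# gluster volume create volume1 replica 3 arbiter 1 nodeA:/gluster_bricks/v1/brick nodeB:/gluster_bricks/v1/brick nodeC:/gluster_bricks/v1arb/brick
# gluster volume create volume2 replica 3 arbiter 1 nodeD:/gluster_bricks/v2/brick nodeE:/gluster_bricks/v2/brick nodeC:/gluster_bricks/v2arb/brick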
Best Regards,
Strahil Nikolov
On Sunday, November 1, 2020 at 23:12:21 GMT+2, Simon Scott wrote:
Apologies Strahil,
I
The ansible playbook expects "/dev/sdb" (which you have defined)
to be without any partitions or data.
Just wipe it and try again.
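Something along these lines should clear it (destructive - double-check the
device name first):
# wipefs -a /dev/sdb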
Best Regards,
Strahil Nikolov
On Monday, November 2, 2020 at 17:59:13 GMT+2, garcialiang.a...@gmail.com wrote:
Hello,
Erm... no one?
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020 at 02:51:00 GMT+2, Strahil Nikolov via Users wrote:
Hello All,
I would like to learn more about OVN and especially the maximum MTU that I can
use in my environment.
Current Setup 4.3.10
Network
Any hint on the location of "Automatic Synchronization" in the UI?
Best Regards,
Strahil Nikolov
On Friday, October 30, 2020 at 20:16:13 GMT+2, Dominik Holler wrote:
On Fri, Oct 30, 2020 at 7:03 PM Strahil Nikolov wrote:
> in 4.3.10's UI it shows 1500 :)
>
Th
If you mean "Administration" -> "Providers" -> "Ovirt-provider-ovn", it is
enabled.
Best Regards,
Strahil Nikolov
On Friday, October 30, 2020 at 21:10:02 GMT+2, Strahil Nikolov wrote:
Any hint for the location of "Automatic Syn
in 4.3.10's UI it shows 1500 :)
On Friday, October 30, 2020 at 13:25:05 GMT+2, Dominik Holler wrote:
On Thu, Oct 29, 2020 at 9:36 PM Alex K wrote:
>
>
> On Tue, Oct 27, 2020, 02:49 Strahil Nikolov via Users wrote:
>> Hello All,
>>
>> I would l
The only one I know is RH318, but it is a paid one.
Best Regards,
Strahil Nikolov
On Saturday, October 31, 2020 at 02:03:59 GMT+2, i...@worldhostess.com wrote:
Can someone recommend a training video or some kind of step-by-step document for
doing the installation and administration
Check if qemu-guest-agent(s) is available and use that instead.
Best Regards,
Strahil Nikolov
On Saturday, October 31, 2020 at 22:04:46 GMT+2, wrote:
What is the best way to install the oVirt guest agent on Ubuntu 16.04.6?
What I did:
# apt-get install ovirt-guest-agent
I changed value
Where is that option?
Best Regards,
Strahil Nikolov
On Sunday, November 1, 2020 at 08:56:44 GMT+2, Joris DEDIEU wrote:
Hi list,
I forgot to check "Discard after Delete" when creating a new volume. Is there a
way (other than emptying the volume) to reclaim free blocks?
Best Regards,
Strahil Nikolov
On Sunday, November 1, 2020 at 12:44:40 GMT+2, si...@justconnect.ie wrote:
I have a 3 node HCI setup with 2 Replica 3 volumes using nodes 1, 2 & 3 and
have added 2 additional Compute nodes to this Cluster.
What is stopping me from adding nodes 4 & 5 as Gl
is quite important and missed.
Best Regards,
Strahil Nikolov
On Wednesday, October 21, 2020 at 22:35:21 GMT+3, Alex McWhirter wrote:
In my experience, the oVirt optimized defaults are fairly sane. I may change a
few things like enabling read-ahead or increasing the shard size
I might be wrong, but I think that the SAN LUN is used as a PV and then each
disk is an LV, from the host's perspective.
Of course, I could be wrong, and someone can correct me. All my oVirt
experience is based on HCI (Gluster + oVirt).
Best Regards,
Strahil Nikolov
On Thursday, October 22
Hi Didi,
thanks for the info - I learned it the hard way (trial & error), and so far it
has been working.
Do we have an entry about that topic in the documentation?
Best Regards,
Strahil Nikolov
On Thursday, October 22, 2020 at 08:27:08 GMT+3, Yedidyah Bar David wrote:
On
that is separate
from test :)
Best Regards,
Strahil Nikolov
On Thursday, October 22, 2020 at 14:00:52 GMT+3, supo...@logicworks.pt wrote:
Hello,
For example, a Windows machine runs too slow; usually the disk is always at 100%.
Are the 'group virt' settings these?:
performance.quick-read
When you mount the gluster volume manually and run "df -h /gluster/mount/point",
how much space does it show?
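For example (the server and volume names are placeholders):
# mount -t glusterfs node1:/engine /mnt
# df -h /mnt
# umount /mnt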
Best Regards,
Strahil Nikolov
On Wednesday, November 4, 2020 at 17:48:56 GMT+2, hjadavall...@ukaachen.de wrote:
Hello,
Good Day!
I'm Hariharan and I'm working
I think the minimum is 60G, and it seems that your deployment has failed, so can
you clean up the share and extend it to 65G?
Best Regards,
Strahil Nikolov
On Thursday, November 5, 2020 at 11:17:48 GMT+2, hjadavall...@ukaachen.de wrote:
Dear Mr. Strahil Nikolov,
Thanks for your
://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/#hardware-requirements_SHE_cli_deploy
clearly states that 25G is enough.
Best Regards,
Strahil Nikolov
On Thursday, November 5, 2020 at 15:44:19 GMT+2, hjadavall...@ukaachen.de wrote
This is just a guess, but you might be able to install fence_xvm on all
virtualization hosts.
Best Regards,
Strahil Nikolov
On Thursday, November 5, 2020 at 16:00:40 GMT+2, jb wrote:
Hello,
I would like to build a hyperconverged gluster with hosted engine in a
virtual
Create a VDO device with 'emulate512' enabled and use that for your LVM PV.
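A rough sketch (the device and name are examples; --emulate512 makes the VDO
device present 512-byte logical sectors):
# vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled
# pvcreate /dev/mapper/vdo1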
Best Regards,
Strahil Nikolov
On Thursday, November 5, 2020 at 18:24:32 GMT+2, Rob Verduijn wrote:
Hello,
After a serious struggle I finally managed to get ovirt-hosted engine with the
hyperconverged setup
to rebuild it from scratch,
but it is complaining that it has too little space on the volume - right?
Best Regards,
Strahil Nikolov
On Thursday, November 5, 2020 at 20:44:48 GMT+2, Marcel wrote:
Moin,
That sounds interesting. How can you clean up a gluster volume of the
hosted engine?
What is the output of 'gluster volume info engine' and 'gluster volume status
engine' (where engine is the volume name)?
Best Regards,
Strahil Nikolov
On Thursday, November 5, 2020 at 23:37:35 GMT+2, marcel d'heureuse wrote:
No, the edge was running and we have cra
Cleaning up is OK, but I have no idea why it fails.
What was the exact error?
Best Regards,
Strahil Nikolov
On Friday, November 6, 2020 at 12:38:03 GMT+2, hjadavall...@ukaachen.de wrote:
Dear Mr. Strahil Nikolov,
Thank you once again!
I tried cleaning up the storage path
aintenance is cancelled. Next, it will set the host into maintenance, and most
probably (not sure about this one) the engine will assign a new host as SPM.
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020 at 05:04:44 GMT+2, lifuqi...@sunyainfo.com wrote:
Hi, Strahil,
=' of mkfs.xfs).
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020 at 00:33:22 GMT+2, marcel d'heureuse wrote:
Hi Strahil,
where can I find some documents about the conversion to replica? Does this also
work for the engine brick?
br
marcel
On October 27, 2020 at 16:40:59 CET, wrote
/iSCSI
Best Regards,
Strahil Nikolov
On Tuesday, October 27, 2020 at 21:53:05 GMT+2, supo...@logicworks.pt wrote:
No, it is not a replica gluster; it is just one brick, one volume, one single
server storage.
storage.
This is what I get:
# gluster volume set data group virt
volume set: failed
You can change it via the UI -> Hosts -> select the new SPM host -> Management ->
Select as SPM.
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020 at 19:46:14 GMT+2, wrote:
I think I have a problem in a NIC of one host. This host is the SPM.
That's probably why
You just need to get the bricks via:
gluster volume info engine
Then you need to go to each server and extend the mount point to at least 61GB.
Also, you need to mount it and delete everything inside.
Last issue:
/usr/sbin/ovirt-hosted-engine-cleanup
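If the bricks sit on LVM (typical for oVirt HCI; the VG/LV names here are
examples), extending could look like:
# lvextend -r -L 61G /dev/gluster_vg/gluster_lv_engine
The -r flag grows the filesystem together with the LV.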
Best Regards,
Strahil Nikolov
Yes,
the replica volume size is the size of the smallest brick. If you have 3 hosts
with 3 directories called /gluster_bricks/engine/engine, you need to extend
every block device that is mounted on /gluster_bricks/engine.
Best Regards,
Strahil Nikolov
On Friday, November 6, 2020
can attach it to 4.4 and migrate those VMs. And don't forget the templates the
VMs were created from.
Best Regards,
Strahil Nikolov
On Monday, December 28, 2020 at 13:10:01 GMT+2, Diggy Mc wrote:
Is it safe to attach my new 4.4 environment to an export domain at the same
time
Vinícius,
does your storage provide deduplication? If yes, then you can provision a new
thin-provisioned LUN and migrate the data from the old LUN to the new one.
Best Regards,
Strahil Nikolov
On Monday, December 28, 2020 at 18:27:38 GMT+2, Vinícius Ferrão via Users wrote:
Hi
Can you enable debug logs on the host hosting the Hosted Engine?
Details can be found on
https://www.ovirt.org/develop/developer-guide/vdsm/log-files.html
Merry Christmas to all!
Best Regards,
Strahil Nikolov
On Friday, December 25, 2020 at 07:24:32 GMT+2, ozme...@hotmail.com
.
It should be, as Oracle has their own OLVM:
https://blogs.oracle.com/virtualization/announcing-oracle-linux-virtualization-manager-43
Merry Christmas!
Best Regards,
Strahil Nikolov
Any hints in the vdsm logs on the affected host, or in
broker.log/agent.log?
Happy Holidays to everyone!
Best Regards,
Strahil Nikolov
At 14:33 +0200 on 25.12.2020 (Fri), Gilboa Davara wrote:
> Hello,
>
> Reinstall w/ redeploy produced the same results.
>
> - Gilboa
>
>
There is some issue with the DNS. Check that the A and PTR records are correct
for the Hosted Engine.
Best Regards,
Strahil Nikolov
On Monday, December 28, 2020 at 22:15:14 GMT+2, lejeczek via Users wrote:
hi chaps,
a newcomer here. I use cockpit to deploy the hosted engine
and then restore that backup using the new
storage domain and the node that was in maintenance...
As I have never restored my oVirt Manager, I can't provide more help.
Best Regards,
Strahil Nikolov
On Monday, December 28, 2020 at 19:40:15 GMT+2, Nur Imam Febrianto wrote
> on one storage domain, but the storage domain
> is not exclusively used by the Hosted Engine; we use it for other VMs too. Is
> this OK, or does it have a side impact?
Avoid using the HostedEngine's storage domain for other VMs. You might get into
a situation that you want to avoi
I'm not sure if the templates are automatically transferred, but it's worth
checking before detaching the storage.
Best Regards,
Strahil Nikolov
В понеделник, 28 декември 2020 г., 18:53:27 Гринуич+2, Diggy Mc
написа:
Templates? Aren't the VMs' templates automatically copied
Imam Febrianto
wrote:
What kind of situation is that? If so, how can I migrate my hosted engine to
another storage domain?
Regards,
Nur Imam Febrianto
From: Strahil Nikolov
Sent: 29 December 2020 0:31
To: oVirt Users; Nur Imam Febrianto
Subject: Re: [ovirt-users] New
Maybe there is a missing package that is preventing that.
Let's see what the devs will find out next year (thankfully you won't have to
wait much).
Best Regards,
Strahil Nikolov
On Wednesday, December 30, 2020 at 16:30:37 GMT+2, Gilboa Davara wrote:
Short update.
1. Ran ovirt
Are you uploading to 4.4 or to the old 4.3?
I'm asking because there should be an enhancement that checksums the
uploads in order to verify that the upload was successful.
Best Regards,
Strahil Nikolov
On Wednesday, December 30, 2020 at 18:37:52 GMT+2, Jorge Visentini wrote
essible by that new cluster.
Then to migrate, you just need to power off the VM, Edit -> change the
cluster, network, etc., and power it up.
It will start on the hosts in the new cluster and then you just need to
verify that the application is working properly.
ighly Available?
High Availability and host fencing are the 2 parts that you need to
ensure that the VM will be restarted after a failure.
If your storage domain goes bad, the VMs will be paused, and
theoretically they will be resumed automatically when the storage
domain is back.
Best Regards,
Are you using E1000 on the VMs or on the host?
If it's the latter, you should change the hardware.
I have never used e1000 for VMs, as it is old tech. It's better to install the
virtio drivers and then use the virtio type of NIC.
Strahil Nikolov
On Thursday, December 31, 2020