Hello,
The ids file for sanlock is broken on one setup. The first host id in
the file is wrong.
From the log file I have:
verify_leader 1 wrong space name 0924ff77-ef51-435b-b90d-50bfbf2e�ke7
0924ff77-ef51-435b-b90d-50bfbf2e8de7 /rhev/data-center/mnt/glusterSD/
Note the broken character in the space name.
by running 'sanlock client status',
> 'sanlock client log_dump'.
>
> Regards,
> Maor
>
> On Thu, Jul 27, 2017 at 6:18 PM, Johan Bernhardsson
> wrote:
> >
> > Hello,
> >
> > The ids file for sanlock is broken on one setup. The first host id in
> > the file is wrong.
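For anyone hitting the same thing, a minimal sketch of the diagnostics
mentioned above (the storage domain path is only an example; adjust it to
your setup):

  # Show the lockspaces and resources sanlock currently holds
  sanlock client status

  # Dump sanlock's internal debug log, including the verify_leader errors
  sanlock client log_dump

  # Inspect the on-disk ids file directly (example path for a gluster storage domain)
  hexdump -C /rhev/data-center/mnt/glusterSD/<server:_volume>/<domain-uuid>/dom_md/ids | less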
Hello,
We get this error message while moving or copying some of the disks on
our main cluster running 4.1.2 on CentOS 7
This is shown in the engine:
VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level Image
copy failed
I can copy it inside the host. And I can use dd to copy. Have
> What versions of vdsm, qemu, libvirt?
>
> On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson
> wrote:
> > Hello,
> >
> > We get this error message while moving or copying some of the disks
> > on
> > our main cluster running 4.1.2 on centos7
> >
>
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1458846
>
> On Sun, Jul 30, 2017 at 3:02 PM, Johan Bernhardsson
> wrote:
> > OS Version:
> > RHEL - 7 - 3.1611.el7.centos
> > OS Description:
> > CentOS Linux 7 (Core)
> > Kernel Version:
> > 3.10.0 - 514.16.1.el7.x86
Is there some way to fix that while there are running VMs?
>
> Regards,
> Maor
>
> On Sun, Jul 30, 2017 at 11:58 AM, Johan Bernhardsson
> wrote:
> >
> > (First reply did not get to the list)
> >
> > From sanlock.log:
> >
> > 2017-07-30 10:49:31+
There is no point in doing that, as Azure is a cloud in itself and oVirt
is for building your own virtual environment to deploy on local hardware.
/Johan
On Mon, 2017-08-07 at 12:32 +0200, Grzegorz Szypa wrote:
> Hi.
>
> Did anyone try to install ovirt on Azure Environment?
>
> --
> G.Sz.
You attach the SSD as a hot tier with a gluster command. I don't think that
gdeploy or the oVirt GUI can do it.
The Gluster docs and Red Hat docs explain tiering quite well.
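A minimal sketch of what that looks like (volume name, hosts and brick
paths are made-up examples; check the tiering docs for your gluster
version):

  # Attach two SSD bricks as a replicated hot tier to an existing volume
  gluster volume tier myvol attach replica 2 \
      host1:/bricks/ssd/myvol host2:/bricks/ssd/myvol

  # Verify the tier is running and watch promotion/demotion activity
  gluster volume tier myvol status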
/Johan
On August 8, 2017 07:06:42 Moacir Ferreira wrote:
Hi Devin,
Please consider that for the OS I have a RAID 1. No
differences in
between GlusterFS and Ceph. Can you comment?
Moacir
________
From: Johan Bernhardsson
Sent: Tuesday, August 8, 2017 7:03 AM
To: Moacir Ferreira; Devin Acosta; users@ovirt.org
Subject: Re: [ovirt-users] Good practices
You attach the ssd as a hot tier with
It is a bug that is also present in 16.04. The log directory in
/var/log/ovirt-guest-agent has the wrong user (or permission). It
should have ovirtagent as user and group.
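A minimal sketch of the fix (assuming the agent runs as the ovirtagent
user, as in the Ubuntu packages discussed here):

  # Give the log directory back to the agent's user and group
  chown -R ovirtagent:ovirtagent /var/log/ovirt-guest-agent

  # Restart the agent so it can reopen its log
  systemctl restart ovirt-guest-agent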
/Johan
On Tue, 2017-08-08 at 15:59 -0400, Wesley Stewart wrote:
> I am having trouble getting the ovirt agent working on Ubuntu
And it would have been good if I had read the whole email :)
On Tue, 2017-08-08 at 22:04 +0200, Johan Bernhardsson wrote:
> It is a bug that is also present in 16.04. The log directory in
> /var/log/ovirt-guest-agent has the wrong user (or permission) It
> should have ovirtagent as user
If gluster drops in quorum so that it has fewer votes than it should, it
will stop file operations until quorum is back to normal. If I remember it
right you need two writable bricks for quorum to be met, and the
arbiter is only a vote to avoid split brain.
Basically what you have is a RAID 5 solution.
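For reference, a quick sketch of where those quorum knobs live (the volume
name is an example, and defaults differ between gluster versions):

  # Client-side quorum: "auto" requires a majority of bricks (2 of 3) to allow writes
  gluster volume get myvol cluster.quorum-type

  # Server-side quorum: bricks are stopped when too few servers in the pool are up
  gluster volume get myvol cluster.server-quorum-type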
On September 7, 2017 19:01:58 Christopher Cox wrote:
Any links or ideas appreciated,
oVirt is NOT VMware. But if you do things "well" oVirt works quite
well. Follow the list to see folks that didn't necessarily do things
"well" (sad, but true).
I inherited this oVirt... not ideal fo
Why are you stopping firewalld? A better solution is to actually add
firewall rules and open up what's needed.
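Something along these lines, as a sketch (the ports below are examples of
what oVirt hosts commonly need; check the install guide for the complete
list for your version):

  firewall-cmd --permanent --add-port=54321/tcp    # vdsm
  firewall-cmd --permanent --add-port=16514/tcp    # libvirt TLS, used for migration
  firewall-cmd --permanent --add-service=ssh
  firewall-cmd --reload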
/Johan
On September 19, 2017 22:17:51 Mat Gomes wrote:
Hi Guys,
I'm attempting to rebuild my environment for production testing. We have
multiple locations, NY and CH, with a 2+1 arbiter
Follow this guide if it is between minor releases
https://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/
Don't forget to put the hosted engine into global maintenance first.
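A minimal sketch of that step (run it on one of the hosted-engine hosts):

  # Before upgrading the engine VM
  hosted-engine --set-maintenance --mode=global

  # ... perform the upgrade ...

  # When the engine is back and healthy
  hosted-engine --set-maintenance --mode=none
  hosted-engine --vm-status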
/Johan
On September 20, 2017 13:11:41 gabriel_skup...@o2.pl wrote:
In the oVirt Engine Web Admin
interface, and after that you
can upgrade with yum on the nodes. Don't forget to have the node in
maintenance mode when you run yum update.
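Roughly like this on each node, as a sketch (assuming the host has already
been put into maintenance from the web admin):

  yum update
  reboot    # only if a new kernel, vdsm or libvirt came in
  # then activate the host again from the web admin and move on to the next one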
/Johan
On 2017-09-20 14:39, gabriel_skup...@o2.pl wrote:
Thanks. What about the system itself?
Is yum update enough?
On 20 September 2017 13:25
On 2017-09-21 13:08, gabriel_skup...@o2.pl wrote:
On 20 September 2017 15:54, Kasturi Narra wrote:
Hi,
You can upgrade HE (Hosted Engine) by doing the steps below.
1) Move HE to global maintenance by running the command
'hosted-engine --set-maintenance --mode=global'
2) Add the required repos whi
Check on DRBD. I have used that to build a cluster for two servers. It needs
some more work than a three-node gluster setup but works well.
I even think they have a white paper on how to do it for virtualization.
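As a rough sketch, a two-node DRBD resource looks something like this
(hostnames, devices and addresses are made up; see the DRBD user guide for
the real details):

  resource r0 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;
      on node1 {
          address 10.0.0.1:7789;
      }
      on node2 {
          address 10.0.0.2:7789;
      }
  }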
/Johan
On November 3, 2017 08:11:04 Artem Tambovskiy
wrote:
Looking for a des
On 03/11/17 08:40, Johan Bernhardsson wrote:
Check on drbd. I have used that to build a cluster for two servers. It
need some more work than a three node gluster conf but works well.
I even think they have a white paper on how to do it for virtualization.
/Johan
On November 3
For it to work you need to have the bricks in replicate: one brick on
each server.
If you only have two nodes the quorum will be too low, so it will put the
gluster volume into failsafe mode until the other brick comes online.
For it to work properly you need three nodes with one brick each, or two
nodes and a third node acting as arbiter.
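For example, a minimal sketch of creating such a volume (hostnames and
brick paths are only illustrative):

  # Two data bricks plus one arbiter brick that stores metadata only
  gluster volume create myvol replica 3 arbiter 1 \
      node1:/bricks/myvol node2:/bricks/myvol node3:/bricks/arbiter/myvol
  gluster volume start myvol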
y are reachable over the network.
>
> Jonathan
>
> 2017-11-09 11:39 GMT+01:00 Johan Bernhardsson :
> > For it to work you need to have the bricks in replicate. Of brick
> > on each server.
> >
> > If you only have two nodes. The quoum will be to low so it will set
We have had a similar issue that has been resolved by restarting the
engine VM.
Not ideal, but it solves the problem for about a month.
/Johan
On Fri, 2017-12-01 at 10:50 +0100, Luca 'remix_tj' Lorenzetto wrote:
> Hi all,
>
> since some days my hosted-engine environments (one RHEV 4.0.7, one
No, it is not safe to only use two nodes as you can end up with split brain.
So two nodes and an arbiter node are needed. The arbiter doesn't need to be
that fancy.
Also, the installer, if installed with gluster as hosted storage (storage for
the engine), will complain if the replica count is less than 3.
oVirt Node is a small minimal OS and will wipe your manually installed
packages on an upgrade.
If you want local packages that are critical for you, you should install a
full CentOS/RHEL server and use that as a virtualization node. (This is
what I did since I wanted more control of the virtualization
The differences are here:
https://www.ovirt.org/documentation/install-guide/chap-Introduction_to_Hypervisor_Hosts/
And also see the different guides on how to install them.
/Johan
On Tue, 2017-12-19 at 01:06 +0100, Johan Bernhardsson wrote:
> ovirt node is a small minimal os and will wipe y
You can't start the hosted engine storage on anything less than replica 3
without changing the installer scripts manually.
For the third it can be pretty much anything capable of running as an arbiter.
/Johan
On January 8, 2018 21:33:25 carl langlois wrote:
I should have said replica 3 + arbiter.
Hi,
It's not entirely clear what you want to do.
oVirt is an interface that controls hardware nodes that run virtual
servers. It's similar to VMware's vSphere.
The engine needs to be replicated so that if one goes down the other has
the exact same information.
/Johan
On April 3, 2018
The norm is to have a cluster with shared storage. So you have 3 to 5
hardware nodes that share storage for the hosted engine. That shared
storage is in sync, so you don't have one engine per physical node.
If one hardware node goes down the engine is restarted on another node with
the help of the hosted-engine HA agents.
Is storage working as it should? Does the gluster mount point respond as
it should? Can you write files to it? Do the physical drives say that
they are OK? Can you write to the physical drives (you shouldn't bypass the
gluster mount point normally, but you need to test the drives)?
For me this sounds like a storage problem.
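A quick sketch of a write test against the mount point (the path is
illustrative; write a scratch file, never an existing VM image):

  dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/<server:_volume>/ddtest bs=1M count=100 oflag=direct
  rm /rhev/data-center/mnt/glusterSD/<server:_volume>/ddtest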
Load like that is mostly I/O based: either the machine is swapping or the
network is too slow. Check I/O wait in top.
And the problem where you get the OOM killer to kill off gluster: that means
that you don't monitor RAM usage on the servers? Either it's eating all
your RAM and swap gets really I/O intensive.
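Standard tools for checking that, as a quick sketch (nothing oVirt
specific about them):

  top           # "wa" in the %Cpu(s) line is I/O wait
  vmstat 1 5    # the "wa" column, plus si/so for swap-in/swap-out activity
  free -m       # overall RAM and swap usage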
for the OS, 32GB RAM, 2.67GHz CPUs for about $720 delivered. I've
got to do something to improve my reliability; I can't keep going the way I
have been.
--Jim
On Fri, Jul 6, 2018 at 9:13 PM, Johan Bernhardsson wrote:
Load like that is mostly io based either the machine is swapp
am, 2.67Ghz CPUs for about $720 delivered. I've
got to do something to improve my reliability; I can't keep going the way I
have been
Agreed. Thanks for continuing to look into this, we'll probably need some
Gluster logs to understand what's going on.
Y.
--Jim
On
You need replicated gluster storage for that, so that one part of the
storage can go down but the two others still keep on running. And you need to
set the threshold so that a quorum of 2 is sufficient (if you have replica 3).
If the storage volume is offline, it would be the same thing as
Several mails today that are pure spam.
/Johan
Those alerts are also coming from hosted-engine, which keeps the oVirt
manager running.
I would rather have a filter in my email client for them than disable all
of the alerting stuff.
/Johan
On August 28, 2018 22:36:34 Douglas Duckworth wrote:
Hi
Can someone please help? I keep getting ovirt