Hi,
Can you please check the following; one of these could be the reason
why the HE VM restarts every minute.
Check the error or engine health state. If it is related to the liveliness
check, then this is most likely an issue connecting to the engine.
- Check if the engine FQDN is reachable from all hosts
-
Hi,
Regarding the UI showing incorrect information about the engine and data
volumes, can you please refresh the UI and see if the issue persists, and
check for any errors in the engine.log file?
Thanks
kasturi
On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N
wrote:
>
> On
ltQuartzScheduler2)
> [b7590c4] FINISH, GlusterVolumesListVDSCommand, return: {d19c19e3-910d-437
> b-8ba7-4f2a23d17515=org.ovirt.engine.core.common.businessentities.gluste
> r.GlusterVolumeEntity@fdc91062, c7a5dfc9-3e72-4ea1-843e-c8275d
> 4a7c2d=org.ovirt.engine.core.common.businessentities.gluste
>
Hi,
This option appears in the host tab only when the HostedEngine VM and
hosted_storage are present in the UI. Before adding another host, make sure
that you add your first data domain to the UI, which will automatically
import the HostedEngine VM and hosted_storage. Once these two are imported you
Hi,
You can follow the steps below to do that.
1) Stop all the virtual machines.
2) Move all the storage domains other than hosted_storage to maintenance
which will unmount them from all the nodes.
3) Move HE to global maintenance: 'hosted-engine --set-maintenance
--mode=global'
4)
shutdown a HA protected machine
> oVirt would then launch it back again. So in my thoughts I would do the
> step 6 before step 4. This said, am I missing something?
>
> Moacir
>
> ------
> *From:* Kasturi Narra <kna...@redhat.com>
> *Sent:* Wed
Hi,
Can you please check if you have the vdsm-gluster package installed on the
system?
Thanks
kasturi
On Wed, Aug 16, 2017 at 6:12 PM, Vadim wrote:
> Hi, All
>
> ovirt 4.1.4 fresh install
> Constantly seeing this message in the logs, how to fix this:
>
>
> VDSM kvm04 command
Hi ,
Yes, you are right. Since the arbiter brick has only metadata, and the data
for the VM has to be served from one of the other two replicas, reads are slow.
Arbiter is a special subset of replica 3 volumes and is aimed at
preventing split-brains while providing the same consistency as a normal
Hi,
Can you check what 'hosted-engine --vm-status' reports, and can you
check if you are able to ping the hostname of the HE VM from your hosts and
from the machine where you are trying to access the browser?
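The two checks above can be run from any host; a minimal sketch, where "engine.example.com" is a placeholder for your actual HE VM hostname:

```shell
#!/bin/sh
# Checks: HE VM status, and reachability of the HE VM hostname.
# "engine.example.com" is a placeholder - substitute your HE FQDN.
HE_FQDN="engine.example.com"

if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --vm-status    # look at "Engine status" for each host
else
    echo "hosted-engine CLI not found; run this on an oVirt host"
fi

# The HE VM hostname should resolve and respond from every host
# (and from the machine running the browser).
ping -c 2 -W 2 "$HE_FQDN" || echo "cannot reach $HE_FQDN"
```

Run the same ping from the machine where the browser is open as well.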
Thanks
kasturi
On Tue, Sep 19, 2017 at 7:47 PM, Mat Gomes
The recommended approach would be to create a new storage domain with a
shard size of 64 MB and migrate all the disks from the 4 MB storage domain.
On Mon, Sep 18, 2017 at 12:01 PM, Ravishankar N
wrote:
> Possibly. I don't think changing shard size on the fly is supported,
> especially when
Hi,
In the agent.log file I see that it fails to connect to the storage server.
Can you please check the following.
1) can you please run 'gluster peer status' and make sure that all the
nodes in the cluster are connected?
2) can you run 'gluster volume status' and make sure that at
least two of
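The checks above can be run on any node of the cluster; a minimal sketch (skips gracefully where the gluster CLI is absent):

```shell
#!/bin/sh
# Checks 1) and 2) above, run on any gluster node.
if command -v gluster >/dev/null 2>&1; then
    # 1) every peer should show "State: Peer in Cluster (Connected)"
    gluster peer status
    # 2) bricks and the self-heal daemon should show "Y" under Online
    gluster volume status
else
    echo "gluster CLI not found; run this on a gluster node"
fi
```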
Hi,
You can upgrade HE (Hosted Engine) by doing the steps below.
1) Move HE to global maintenance by running the command 'hosted-engine
--set-maintenance --mode=global'
2) Add the required repos which have the higher package versions.
3) Run 'yum update ovirt\*setup\*'
4) Run 'engine-setup'
5) Once the setup
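A condensed sketch of steps 1-4 above, guarded so it is a no-op on a machine without the oVirt tooling (step 1 runs on a host, steps 3-4 on the HE VM itself):

```shell
#!/bin/sh
# Sketch of the HE upgrade steps; not a definitive procedure.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --set-maintenance --mode=global   # step 1, on a host
else
    echo "hosted-engine CLI not found"
fi
# step 2: add/enable the repos carrying the newer packages before updating
if command -v engine-setup >/dev/null 2>&1; then
    yum update ovirt\*setup\*                       # step 3, on the HE VM
    engine-setup                                    # step 4, on the HE VM
else
    echo "engine-setup not found; run steps 3-4 on the HE VM"
fi
```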
Hi Sean,
This error is expected, and there is a bug open to change this script so
that it can properly disable multipath devices. To continue, you can simply
add 'ignore_script_errors=yes' under script3, which continues past this
failure. Please note that this script is used to disable
Hi,
If I understand correctly, the gdeploy script is failing at [1]. There could
be two possible reasons why that would fail.
1) can you please check that the disks which would be used for brick
creation do not have labels or any partitions on them?
2) can you please check if the path [1] exists. If
Hi,
This is a test email; please ignore it.
Thanks
kasturi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
> But you did somewhat answer my question, the answer seems to be no (as
> default) and I will have to use hosted-engine.conf and change the parameter
> as you list
>
> So I need to do something manual to create HA for engine on gluster? Yes?
>
> Thanks so much!
>
> On Thu, Aug 31, 2017
6
>>>
>>> On Thu, Aug 31, 2017 at 3:30 PM, Charles Kozler <ckozler...@gmail.com>
>>> wrote:
>>>
>>> So I've tested this today and I failed a node. Specifically, I setup a
>>> glusterfs domain and selected "host to use: node1". Set i
explain why it wasnt working :-). I guess I had a
> silly assumption that oVirt would have detected it and automatically taken
> up the redundancy that was configured inside the replica set / brick
> detection.
>
> I will test and let you know
>
> Thanks!
>
> On Fri, S
to specify backup-volfile-servers=: where
>>>>>>>>>> server2 and server3 also have bricks running. When server1 is down,
>>>>>>>>>> and the
>>>>>>>>>> volume is mounted again - server2 or server3 are qu
>>>>>>>>>>>
>>>>>>>>>>>> @ Jim - you have only two data volumes and lost quorum.
>>>>>>>>>>>> Arbitrator only stores metadata, no actual files. So yes, you were
>>>>>>>>>
Hi Mauro,
Creating distributed dispersed volumes is not supported from the oVirt
UI yet, but you should be able to sync them if the cluster is imported into
the UI. The same holds true for add / remove bricks on disperse and
distributed disperse volumes.
You won't be able to see bricks created
nks a lot,
> Mauro
>
>
> Il giorno 04 set 2017, alle ore 08:51, Kasturi Narra <kna...@redhat.com>
> ha scritto:
>
> Hi Mauro,
>
>Creating distributed dispersed volumes are not supported from ovirt
> UI yet but you should be able to sync them if cluster is
Hi,
During Hosted Engine setup, the question about the glusterfs volume is
asked because you have set up the volumes yourself. If the cockpit+gdeploy
plugin had been used, it would have automatically detected the
glusterfs replica 3 volume created during Hosted Engine deployment and this
On Sat, Sep 30, 2017 at 7:50 PM M R wrote:
> Hello!
>
> I have been using Ovirt for last four weeks, testing and trying to get
> things working.
>
> I have collected here the problems I have found and this might be a bit
> long but help to any of these or maybe to all of
Hi Dmitri,
If the VMs are created on a hyperconverged setup, then the max disk
size recommended is 2 TB.
Thanks
kasturi.
On Sat, Oct 7, 2017 at 12:54 AM, Dmitri Chebotarov wrote:
> Hello
>
> I'm trying to find any info on how much storage I can attach to a VM.
>
> Is
Hi,
You can run the command below which will remove these hosts from the
hosted-engine --vm-status output
'hosted-engine --clean-metadata --host-id= --force-clean'
Thanks
kasturi
On Sun, Oct 8, 2017 at 12:01 AM, Maton, Brett
wrote:
> Hi,
>
> I've replaced
9 PM, TranceWorldLogic . <
> tranceworldlo...@gmail.com> wrote:
>
>> "That will take care of syncing the bricks in the UI of hosted-storage
>> gluster volume on gluster network."
>> Sorry, not understood, what do you mean UI here. Would you please explain
>&
Hi,
You can assign the glusternw role to the newly created gluster network, and
associate the interface with which you have configured the gluster pool with
the gluster network. That will take care of syncing the bricks of the
hosted-storage gluster volume in the UI over the gluster network.
Thanks
kasturi
On
Hi Alex,
Can you check if you have the following on your setup?
1) the gluster volume which will be used as the ISO storage domain should
have its bricks connected using glusternw.
2) The NFS timeout is caused by nfs.disable not being set to off on the
volume. Can you set this and try again?
3) self
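For check 2), a minimal sketch; the volume name "isovol" is a placeholder for your ISO domain volume (gluster's built-in NFS server only exports the volume when nfs.disable is off):

```shell
#!/bin/sh
# Enable gluster NFS on the ISO volume and confirm the option took.
# "isovol" is a placeholder volume name.
if command -v gluster >/dev/null 2>&1; then
    gluster volume set isovol nfs.disable off
    gluster volume info isovol | grep nfs.disable
else
    echo "gluster CLI not found; run this on a gluster node"
fi
```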
Hello Artem,
May I know how you deployed the Hosted Engine and glusterfs
volumes? There is an easy way to do this using the cockpit UI. You could log
into the cockpit UI, click on the Hosted Engine tab, and there are two radio
buttons, one for gluster deployment and another for HostedEngine
Hello Logan,
One reason the liveliness check fails is that the host cannot ping your
hosted engine VM. You can try connecting to the HE VM using 'remote-viewer
vnc://hypervisor-ip:5900'; from the hosted-engine --vm-status output it
looks like the HE VM is up and running fine.
- Please check internal dns
Hello Rudi,
Removing a brick from a replica 3 volume means that you are
reducing the replica count from 3 to 2. You are seeing the first error
because, when removing a brick from a replica 3 volume, you do
not need to migrate data, as the same data is present in the other two
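Reducing the replica count has to be stated explicitly on the command line; an illustrative sketch, where "myvol" and "host3:/gluster/brick/myvol" are placeholder names:

```shell
#!/bin/sh
# Reduce a replica 3 volume to replica 2 by removing one brick.
# Volume and brick names below are placeholders.
if command -v gluster >/dev/null 2>&1; then
    gluster volume remove-brick myvol replica 2 \
        host3:/gluster/brick/myvol force
else
    echo "gluster CLI not found; run this on a gluster node"
fi
```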
> result is what is shown in my output below, so it’s set to true for
> v4.2. Is
> > it enough?
> > I’ll try restarting the engine. Is it really needed to stop all the VMs
> and
> > restart them all? Of course this is a test setup and I can do it, but for
> > production clusters in the
Hi Logan,
When I look at the hosted-engine --vm-status output, I see that the VM is
up but its health is bad. Can you try connecting to the VM with
remote-viewer using the command below?
remote-viewer vnc://ovirttest1.wolfram.com:5900
Thanks
kasturi
On Fri, Nov 10, 2017 at 12:52 PM, Logan Kuhn
Hello,
Can you please let me know which script it is failing at, and your
ansible and gdeploy versions?
Thanks
kasturi
On Mon, Nov 13, 2017 at 2:54 PM, Open tech wrote:
> Hi All,
>I am new to Ovirt. I am hitting the exact same error while trying a new
> install
r01.idc.hinet.net
>
> Installed Packages
>
> Name: gdeploy
>
> Arch: noarch
>
> Version : 2.0.2
>
> Release : 7
>
> Size: 2.7 M
>
> Repo: installed
>
> Summary : Tool to deploy and manage GlusterFS cluster
: 4.1
>
> restarted the engine, shutdown the vm completely and started it back up a
> short time later.
>
> I am using this command to check:
> ps ax | grep qemu | grep 'file=gluster\|file=/rhev'
>
> Output is
> file=gluster://10.20.102.181/gl-vm12/....
>
> Thanks
> B
Hello,
I have an environment with 3 hosts and gluster HCI on 4.1.3.
I'm following this link to take it to 4.1.7
https://www.ovirt.org/documentation/how-to/hosted-
engine/#upgrade-hosted-engine
The hosts and engine were at 7.3 prior to beginning the update.
All went OK for the engine, which now is
Hi Florian,
Are you seeing these issues with gfapi or fuse access as well ?
Thanks
kasturi
On Fri, Nov 24, 2017 at 3:06 AM, Florian Nolden wrote:
> I have the same issue when I run backup tasks during the night.
>
> I have a Gluster setup with a 1TB SSD on each of
Hello Gabriel,
Can you copy-paste the contents of the centos-base and ovirt repos? This
is simply a result of mixed repo packages. Remove the -ev package and
install the -rhev version; that should take care of it.
Thanks
kasturi
On Thu, Nov 23, 2017 at 1:25 PM, Gabriel Stein
Hello Gabriel,
Is there any specific reason you are looking to expand storage for the
engine volume?
If you are running a HC setup, the recommended way of expanding the
setup is to add three more hosts to the existing cluster and create volumes
out of the bricks carved from those additional
Hello Matt,
All the partitions will be persisted when gluster is installed on
the oVirt node, since gluster recommends that users not create bricks in the
root directory. If the gluster bricks are created in the root partition,
then once the node update is done, you will not be able to see any
st setup and I can do it, but
> for production clusters in the future it may be a problem.
> Thanks,
>
>Alessandro
>
> Il giorno 09 nov 2017, alle ore 07:23, Kasturi Narra <kna...@redhat.com>
> ha scritto:
>
>
> Hi ,
>
> The procedure to enable gfapi is below.
Hello,
Looks like there is a problem with the repo which is present in
your system. Can you please disable the repo and try installing the host
again? That should solve the problem.
Thanks
kasturi
On Thu, Dec 7, 2017 at 1:53 PM, M.I.S <1312121...@qq.com> wrote:
> hi,
>I
Hello Jarek,
As of today, gdeploy cannot work with different devices
on different nodes when deploying HC. Currently the device name has to be
the same on the data and arbiter nodes.
Hope this helps !!
Thanks
kasturi
On Wed, Dec 20, 2017 at 2:29 PM, Jarek wrote:
> One
Hello Bill,
Can you attach vdsm logs for the time period when the migration
failed? That would help us see why the migration failed.
Thanks
kasturi
On Wed, Jan 17, 2018 at 5:24 AM, Bill James wrote:
> I have one node in our cluster that has problems when migrating
>> host-id=2
>>>> >> score=0
>>>> >> vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
>>>> >> conf_on_shared_storage=True
>>>> >> maintenance=False
>>>> >>
Hello Jayme,
Please find the responses inline.
On Fri, Jan 19, 2018 at 7:44 PM, Jayme wrote:
> I am attempting to narrow down choices for storage in a new oVirt build
> that will eventually be used for a mix of dev and production servers.
>
> My current space usage
Hi Carl,
Below are the steps to configure the back-end network, which is glusternw.
1) create a new network called 'glusternw'
2) Now go to clusters tab and select Logical networks tab.
3) You should see the newly created network there.
4) select the network and click on 'Manage Networks'
5) In
Hello Artem,
Can you check if the glusterd service is running on host1 and all the
peers are in the connected state? If yes, can you restart the ovirt-ha-agent
and ovirt-ha-broker services and check if things are working fine?
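The checks and restart above can be sketched as follows, to be run on host1 (guarded so it does nothing useful off-cluster):

```shell
#!/bin/sh
# Check glusterd and peers, then restart the HE HA services.
if command -v systemctl >/dev/null 2>&1 && command -v gluster >/dev/null 2>&1; then
    systemctl status glusterd     # should be active (running)
    gluster peer status           # all peers should be Connected
    systemctl restart ovirt-ha-agent ovirt-ha-broker \
        || echo "restart failed (are the HA services installed?)"
else
    echo "run this on an oVirt/gluster host"
fi
```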
Thanks
kasturi
On Sat, Jan 13, 2018 at 12:33 AM, Artem Tambovskiy <
Hello,
Can you attach ovirt-ha-agent and ovirt-ha-broker logs ?
Thanks
kasturi
On Fri, Jan 12, 2018 at 9:38 PM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> Trying to fix one thing I broke another :(
>
> I fixed mnt_options for hosted engine storage domain and installed latest
>
Hi Carl,
During deployment via the cockpit+gdeploy plugin, when you input the hosts,
the host in the third text box will be considered the arbiter host.
Thanks
kasturi
On Tue, Jan 9, 2018 at 11:26 PM, carl langlois
wrote:
> Some question about the arbiter box.
>
> 1- Lets
Hello Sakhi,
Can you please let us know which script it is failing
at?
Thanks
kasturi
On Tue, Feb 20, 2018 at 1:05 PM, Sakhi Hadebe wrote:
> I have 3 Dell R515 servers all installed with centOS 7, and trying to
> setup an oVirt Cluster.
>
> Disks
etwork but when i try to
> sync the network it always stay out of sync and if i ssh to that host i do
> not see the bridge but the network card is up and a ip is assign to it.
>
> Thanks
>
> Carl
>
> On Tue, Jan 23, 2018 at 2:38 AM, Kasturi Narra <kna...@redhat.