On 16/04/19 2:20 PM, Sahina Bose wrote:
On Tue, Apr 16, 2019 at 1:39 PM Leo David wrote:
Hi Everyone,
I have wrongly configured the main gluster volume ( 12 identical 1 TB SSD disks,
replica 3 distributed-replicated, across 6 nodes - 2 per node ) as an arbiter
one.
Obviously I am wasting
Hi,
Can you restart the self-heal daemon by doing a `gluster volume start
bgl-vms-gfs force` and then launch the heal again? If you are seeing
different entries and counts each time you run heal info, there is
likely a network issue (disconnect) between the (gluster fuse?) mount
and the
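A minimal sketch of that sequence (the volume name is taken from this thread; adjust to yours):

    # force-start restarts the self-heal daemon even though the volume is running
    gluster volume start bgl-vms-gfs force
    # re-trigger the heal, then watch whether the pending-entry counts converge
    gluster volume heal bgl-vms-gfs
    gluster volume heal bgl-vms-gfs info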
Hi,
"[2018-06-16 04:00:10.264690] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on
On 07/02/2018 02:15 PM, Krutika Dhananjay wrote:
Hi,
So it seems some of the files in the volume have mismatching gfids. I
see the following logs from 15th June, ~8pm EDT:
...
...
[2018-06-16 04:00:10.264690] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
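For gfid mismatches like the one above, newer gluster releases can list and, depending on the version, resolve them from the CLI. A hedged sketch, assuming the volume is named "engine" (as 0-engine-replicate-0 suggests); the file path is illustrative:

    # list files currently in split-brain
    gluster volume heal engine info split-brain
    # resolve a file with a chosen policy, e.g. keep the copy with the newest mtime
    gluster volume heal engine split-brain latest-mtime /path/inside/volume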
, 2017 at 8:14 AM, Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>> wrote:
On 09/18/2017 10:08 AM, Alex K wrote:
Hi Ravishankar,
I am not referring to the arbiter volume (which is showing 0%
usage). I am referring to the other 2 volumes which
2017 07:00, "Ravishankar N" <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>> wrote:
On 09/17/2017 08:41 PM, Alex K wrote:
Hi all,
I have replica 3 with 1 arbiter.
When checking the gluster volume bricks they are reported as
using different space, as
On 09/17/2017 08:41 PM, Alex K wrote:
Hi all,
I have replica 3 with 1 arbiter.
When checking the gluster volume bricks, they are reported as using
different space, as per the attached. How come they use different space?
One would expect them to use exactly the same space since they are replicas.
The
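A quick way to compare per-brick usage is the detailed status output; a sketch with a placeholder volume name:

    # per-brick disk space, free space and inode counts as gluster reports them
    gluster volume status <volname> detail
    # or compare the brick filesystems directly on each node
    df -h /path/to/brick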
granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
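Those options are set per volume; a sketch with a placeholder volume name (on some releases granular entry heal is toggled with `gluster volume heal <volname> granular-entry-heal enable` instead):

    gluster volume set <volname> cluster.granular-entry-heal on
    gluster volume set <volname> auth.allow '*'
    gluster volume set <volname> server.allow-insecure on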
2017-07-21 19:13 GMT+02:00 yayo (j) <jag...@gmail.com
<mailto:jag...@gmail.com>>:
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>>:
On 07/21/2017 02:55 PM, yayo (j) wrote:
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>>:
But it does say something. All these gfids of completed heals in
the log below are for the ones that you have given the
get
On 07/20/2017 03:42 PM, yayo (j) wrote:
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>>:
Could you check if the self-heal daemon on all nodes is connected
to the 3 bricks? You will need to check the gl
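A sketch of that check, assuming a placeholder volume name: the status output should list the Self-heal Daemon as online on every node, and its log should show a connection to each brick's client translator:

    gluster volume status <volname>    # Self-heal Daemon rows should show Online "Y"
    grep -i 'connected to' /var/log/glusterfs/glustershd.log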
On 07/20/2017 02:20 PM, yayo (j) wrote:
Hi,
Thank you for the answer, and sorry for the delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>>:
1. What does the glustershd.log say on all 3 nodes when you run
the command? Does
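That is, something along these lines on each of the 3 nodes while the heal is triggered (a sketch; the volume name is a placeholder):

    tail -f /var/log/glusterfs/glustershd.log &
    gluster volume heal <volname>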
On 07/19/2017 08:02 PM, Sahina Bose wrote:
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) wrote:
Hi all,
We have a hyperconverged oVirt cluster with hosted engine on 3
fully replicated nodes. This cluster has 2
-- Forwarded message --
From: Khalid Jamal
Date: Sat, May 13, 2017 at 10:51 PM
Subject: [ovirt-users] can i Extend volume from replica 2 to replica 3
with arbiter
To: users@ovirt.org
-Ravi
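On releases that support arbiters, the conversion is a single add-brick that raises the replica count; a hedged sketch with placeholder host and brick paths (a distributed-replicate volume needs one arbiter brick per replica pair):

    gluster volume add-brick <volname> replica 3 arbiter 1 <newhost>:/bricks/<volname>/arb
    # then let the new arbiter bricks populate
    gluster volume heal <volname>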
2016-09-29 14:16 GMT+02:00 Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>>:
On 09/29/2016 05:18 PM, Sahina Bose wrote:
Yes, this is a GlusterFS problem. Adding gluster users ML
On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari
<dav.
On 09/29/2016 05:18 PM, Sahina Bose wrote:
Yes, this is a GlusterFS problem. Adding gluster users ML
On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari wrote:
Hello
maybe this is more glusterfs than oVirt related, but since oVirt
@gluster infra - FYI.
On 04/27/2016 02:20 PM, Nadav Goldin wrote:
Hi,
The GlusterFS repository became unavailable this morning; as a result,
all Jenkins jobs that use the repository will fail. The common error
would be:
erfs-3.7.11.
-Ravi
Thanks for the clarification.
Regards,
Roderick
On 06 Apr 2016, at 10:56 AM, Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>> wrote:
On 04/06/2016 02:08 PM, Roderick Mooi wrote:
Hi Ravi and colleagues
(apologies for hijacking this
By the way, `gluster volume set help` should give you the list of
all options.
-Ravi
Cheers,
Roderick
On 12 Feb 2016, at 6:18 AM, Ravishankar N <ravishan...@redhat.com
<mailto:ravishan...@redhat.com>> wrote:
Hi Bill,
Can you enable the virt-profile setting for your volume and see if that
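For reference, a sketch of enabling the virt profile on a placeholder volume; the group applies the option set shipped in /var/lib/glusterd/groups/virt:

    gluster volume set <volname> group virt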
oVirt storage domains go to
a "disabled" state ??? = disconnecting the VMs' storage?
No idea on this one...
-Ravi
Regards, Pa.
On 3.3.2016 02:02, Ravishankar N wrote:
On 03/03/2016 12:43 AM, Nir Soffer wrote:
PS: # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70
On 03/03/2016 12:43 AM, Nir Soffer wrote:
PS: # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= missing "shadowfile" in " .gluster " dir.
How can
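A sketch of checking that hard link by hand, with an illustrative gfid: every file on a brick carries a trusted.gfid xattr, and .glusterfs should hold a hard link at <first 2 hex chars>/<next 2>/<full gfid>:

    # read the gfid from the brick copy (run as root on the brick)
    getfattr -n trusted.gfid -e hex /STORAGES/g1r5p3/GFS/<domain>/dom_md/ids
    # e.g. for gfid abcd1234-... the link would be:
    ls -l /STORAGES/g1r5p3/GFS/.glusterfs/ab/cd/abcd1234-...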
On 03/02/2016 12:02 PM, Sahina Bose wrote:
On 03/02/2016 03:45 AM, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz wrote:
>
> Hi,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
>
-02-24
11:18:20,867::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-35634::DEBUG::2016-02-24
11:18:20,867::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-35634::DEBUG::2016-
Hi,
On 02/24/2016 06:43 AM, p...@email.cz wrote:
Hi,
I found the main (maybe) problem: an IO error (-5) on "ids" file
access.
This file is not accessible via NFS; locally it is.
How is NFS coming into the picture? Are you not using gluster fuse mount?
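A quick, hedged way to confirm what kind of mount is actually in use on the host:

    grep -E 'fuse.glusterfs|nfs' /proc/mounts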
How can I fix it?
Can you run
On 02/12/2016 09:11 PM, Bill James wrote:
wow, that made a whole lot of difference!
Thank you!
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile1 bs=1M
count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 20.2778 s, 51.7 MB/s
That's great. It was Vijay Bellur who noticed that it
ance.write-behind: off
performance.write-behind: off didn't help.
Neither did any other changes I've tried.
There is no VM traffic on this VM right now except my test.
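For completeness, a sketch of how such an option is toggled and inspected (placeholder volume name; `volume get` needs a reasonably recent gluster):

    gluster volume set <volname> performance.write-behind off
    gluster volume get <volname> performance.write-behind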
On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N
<ravishan...@redhat.com>
wrote:
On 12/23/2015 11:00 PM, Steve Dainard wrote:
I've attached the client gluster log starting at the first log entry of
the same day as the failure.
Nothing significant in the client log after the crash and subsequent
remount. The ENODATA warnings can be ignored. There was a patch
On 12/23/2015 11:44 AM, Sahina Bose wrote:
signal received: 6
time of crash:
2015-12-22 23:04:00
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.6.7
On 10/18/2015 07:27 PM, Nicolas LIENARD wrote:
Hey Nir
What about
https://gluster.readthedocs.org/en/release-3.7.0/Features/afr-arbiter-volumes/
?
Regards
Nico
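The linked feature page boils down to one create command; a hedged sketch with placeholder hosts and brick paths:

    # two data bricks plus one arbiter brick per replica set
    gluster volume create <volname> replica 3 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1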
On 18 October 2015 at 15:12:23 GMT+02:00, Nir Soffer
wrote:
On Sat, Oct 17, 2015 at 12:45 PM,
in another repository, do you think I could try to
install this version on the current oVirt 3.4 installation?
- Jean-Michel
On 27/09/2015 18:26, Jean-Michel FRANCOIS wrote:
On 27/09/2015 16:26, Atin Mukherjee wrote:
On 09/25/2015 01:25 PM, Ravishankar N wrote:
On 09/25/2015 12:32 PM, Jean
On 09/25/2015 12:32 PM, Jean-Michel FRANCOIS wrote:
Hi Ovirt users,
I'm running oVirt hosted 3.4 with gluster data storage.
When I add a new host (CentOS 6.6), the data storage (as a glusterfs)
cannot be mounted.
I have the following errors in the gluster client log file:
[2015-09-24
Hi Chris,
Replies inline..
On 09/22/2015 09:31 AM, Sahina Bose wrote:
Forwarded Message
Subject:Re: [ovirt-users] urgent issue
Date: Wed, 9 Sep 2015 08:31:07 -0700
From: Chris Liebman
To: users
Ok - I think I'm
On 08/21/2015 04:32 PM, Sander Hoentjen wrote:
On 08/21/2015 11:30 AM, Ravishankar N wrote:
On 08/21/2015 01:21 PM, Sander Hoentjen wrote:
On 08/21/2015 09:28 AM, Ravishankar N wrote:
On 08/20/2015 02:14 PM, Sander Hoentjen wrote:
On 08/19/2015 09:04 AM, Ravishankar N wrote
On 08/21/2015 07:57 PM, Sander Hoentjen wrote:
Maybe I should formulate some clear questions:
1) Am I correct in assuming that an issue on one of 3 gluster nodes
should not cause downtime for VMs on other nodes?
From what I understand, yes. Maybe the oVirt folks can confirm. I can
tell you
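The knobs that usually decide this are the quorum settings; a sketch with a placeholder volume name (defaults vary by version, so treat it as illustrative):

    gluster volume set <volname> cluster.quorum-type auto          # client-side quorum
    gluster volume set <volname> cluster.server-quorum-type server # server-side quorum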
On 08/21/2015 01:21 PM, Sander Hoentjen wrote:
On 08/21/2015 09:28 AM, Ravishankar N wrote:
On 08/20/2015 02:14 PM, Sander Hoentjen wrote:
On 08/19/2015 09:04 AM, Ravishankar N wrote:
On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote:
+ Ravi from gluster.
Regards,
Ramesh
On 08/20/2015 02:14 PM, Sander Hoentjen wrote:
On 08/19/2015 09:04 AM, Ravishankar N wrote:
On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote:
+ Ravi from gluster.
Regards,
Ramesh
- Original Message -
From: Sander Hoentjen san...@hoentjen.eu
To: users@ovirt.org
Sent: Tuesday
On 08/18/2015 04:22 PM, Ramesh Nachimuthu wrote:
+ Ravi from gluster.
Regards,
Ramesh
- Original Message -
From: Sander Hoentjen san...@hoentjen.eu
To: users@ovirt.org
Sent: Tuesday, August 18, 2015 3:30:35 PM
Subject: [ovirt-users] Ovirt/Gluster
Hi,
We are looking for some easy to
On 06/08/2015 02:38 AM, Юрий Полторацкий wrote:
Hi,
I have set up a lab with the config listed below and got an unexpected
result. Someone, tell me, please: where did I go wrong?
I am testing oVirt. The Data Center has two clusters: the first as a
compute cluster with three nodes (node1, node2,