Hi,
When creating a geo-replication session, is gverify.sh used or run as
part of the process? Or is gverify.sh just an ad-hoc command for testing
manually whether creating a geo-replication session would succeed?
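For reference, the command I'm using to create the session looks roughly
like this (the volume and host names below are made-up placeholders, not
my real ones):

    # placeholder names: mastervol, slavehost, slavevol
    gluster volume geo-replication mastervol slavehost::slavevol \
        create push-pem

My (possibly wrong) understanding is that some pre-checks run at that
point, and I'm trying to find out whether those checks are gverify.sh.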
Best,
M.
Hello,
I created a small cluster with 5 servers on Ubuntu 16.04 and everything is
working OK so far, but I'm experiencing a weird problem with glustershd. On all
servers I get the following error:
Aug 18 22:13:55 ws1 glustershd[16078]: [2017-08-18 20:13:55.604910] E
[socket.c:3287:socket_connect]
Hi,
I have a gluster cluster running with geo-replication. The volume that is being
geo-replicated runs on an LVM thin pool. The thin pool overflowed, causing
a crash. I extended the thin pool LV and remounted, which brought the volume
back online and healthy (rough commands at the bottom of this mail). When I try to restart the
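For reference, the thin pool extension was done roughly like this (the
VG/LV names are placeholders, not my actual ones):

    # placeholder names: vg_bricks/brickpool is the thin pool,
    # vg_bricks/brick1 is the thin LV holding the brick filesystem
    lvextend -L +100G vg_bricks/brickpool      # grow the thin pool
    lvs vg_bricks                              # check Data%/Meta% usage
    mount /dev/vg_bricks/brick1 /data/brick1   # remount the brick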
You're hitting a race here. By the time glusterd tries to resolve the
address of one of the remote bricks of a particular volume, the n/w
interface is not yet up. We have fixed this issue in mainline and the
3.12 branch through the following commit:
commit
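Until you are running a build with that fix, one possible workaround
(just a sketch, assuming a systemd-based distro; on Debian/Ubuntu the
unit may be named glusterfs-server.service rather than glusterd.service)
is to order glusterd after network-online.target via a drop-in:

    # /etc/systemd/system/glusterd.service.d/wait-online.conf
    # Only effective if a *-wait-online service is enabled, so that
    # network-online.target really means "interfaces are configured".
    [Unit]
    Wants=network-online.target
    After=network-online.target

Then run systemctl daemon-reload and restart the service.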
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao wrote:
>> Note that "stripe" is not tested much and practically unmaintained.
>
> Ah, this was what I suspected. Understood. I'll be happy with "shard".
>
> Having said that, "stripe" works fine with transport=tcp. The
On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee wrote:
>
>
> On Fri, Aug 18, 2017 at 12:22 PM, Atin Mukherjee
> wrote:
>>
>> You're hitting a race here. By the time glusterd tries to resolve the
>> address of one of the remote bricks of a particular
On Fri, Aug 18, 2017 at 1:59 PM, Dmitry Melekhov wrote:
> 18.08.2017 12:21, Atin Mukherjee wrote:
>
>
> On Fri, 18 Aug 2017 at 13:45, Raghavendra Talur wrote:
>
>> On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee
>> wrote:
>> >
>> >
>> >
On Fri, 18 Aug 2017 at 13:45, Raghavendra Talur wrote:
> On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee
> wrote:
> >
> >
> > On Fri, Aug 18, 2017 at 12:22 PM, Atin Mukherjee
> > wrote:
> >>
> >> You're hitting a race here. By the
On Fri, Aug 18, 2017 at 2:01 PM, Niels de Vos wrote:
> On Fri, Aug 18, 2017 at 12:22:33PM +0530, Atin Mukherjee wrote:
> > You're hitting a race here. By the time glusterd tries to resolve the
> > address of one of the remote bricks of a particular volume, the n/w
> >
On Fri, Aug 18, 2017 at 12:22 PM, Atin Mukherjee
wrote:
> You're hitting a race here. By the time glusterd tries to resolve the
> address of one of the remote bricks of a particular volume, the n/w
> interface is not yet up. We have fixed this issue in mainline and
On Fri, Aug 18, 2017 at 12:22:33PM +0530, Atin Mukherjee wrote:
> You're hitting a race here. By the time glusterd tries to resolve the
> address of one of the remote bricks of a particular volume, the n/w
> interface is not yet up. We have fixed this issue in mainline and the
> 3.12 branch
18.08.2017 12:21, Atin Mukherjee wrote:
On Fri, 18 Aug 2017 at 13:45, Raghavendra Talur wrote:
On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee
> wrote:
>
>
> On Fri, Aug 18,
On Fri, Aug 11, 2017 at 5:50 PM, Ravishankar N
wrote:
>
>
> On 08/11/2017 04:51 PM, Niels de Vos wrote:
>
>> On Fri, Aug 11, 2017 at 12:47:47AM -0400, Raghavendra Gowdappa wrote:
>>
>>> Hi all,
>>>
>>> In a conversation between me, Milind and Csaba, Milind pointed out
>>>
hi fellas,
I wonder if Gluster with a peer connected via a VPN tunnel is
something you would use in production?
@devel - is such a scenario even a valid (approved) one?
many thanks, L.
Hi,
I have the following in /etc/hosts on all hosts:
172.31.22.2 gluster-s1 #eno1 on the server #1 (hostname)
172.31.150.2 gluster-s1-fdr #ib0 on the server #1
172.31.22.3 gluster-s2 #eno1 on the server #2 (hostname)
172.31.150.3 gluster-s2-fdr #ib0 on the server #2
Peer probe was done for