Interesting. I run F29 for all development and didn't see anything like
this.
Please share the output of 'gluster volume info', and also the logs from the mount process.
-Amar
On Wed, Jan 30, 2019 at 8:33 PM Dr. Michael J. Chudobiak <
m...@avtechpulse.com> wrote:
> I run Fedora 29 clients and servers, with user home folders mounted on gluster.
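For reference, collecting that output is straightforward (assuming the
default log locations; the client mount log is named after the mount point
with '/' replaced by '-', e.g. mnt-SITE_data1.log for /mnt/SITE_data1):

    gluster volume info
    # client-side mount log for the affected mount:
    less /var/log/glusterfs/mnt-SITE_data1.log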
Hi Atin,
These are exactly the steps I did that caused the failure. In addition, the
node3 OS drive was running out of space when the service failed, so I
cleared some space on the OS drive, but the service still failed to start.
I was trying to simulate a situation where the volume stopped abnormally and
entir
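In case it helps others debugging a glusterd that refuses to start, the
first things worth checking are free space in the glusterd working
directory and the startup log (default paths assumed):

    df -h /var/lib/glusterd
    journalctl -u glusterd --no-pager | tail -n 50
    less /var/log/glusterfs/glusterd.log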
On Fri, Jan 25, 2019 at 1:41 PM Jeevan Patnaik wrote:
> Hi,
>
> I'm just going through the concepts of quorum and split-brain with a
> cluster in general, and trying to understand GlusterFS quorums again, which
> I previously found difficult to understand accurately.
>
> When we talk about server
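For readers following the thread: GlusterFS has two separate quorum
mechanisms, and both can be inspected per volume ('myvol' below is a
placeholder name):

    # server quorum, enforced by glusterd across the trusted storage pool:
    gluster volume get myvol cluster.server-quorum-type
    # client quorum, enforced by the replication layer per replica set:
    gluster volume get myvol cluster.quorum-type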
I'm not quite sure how you ended up in a state where one of the nodes
lost the information of one peer from the cluster. I suspect that during a
replace-node operation you somehow landed in this situation via an incorrect step.
Unless you can elaborate more on all the steps you have
perfor
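A quick way to spot such a mismatch is to compare the peer store on every
node; each node keeps one file per known peer, named by that peer's UUID
(default glusterd paths assumed):

    gluster peer status
    cat /var/lib/glusterd/peers/*
    # this node's own UUID, for cross-checking:
    cat /var/lib/glusterd/glusterd.info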
On Tue, Jan 29, 2019 at 8:52 PM David Spisla wrote:
> Hello Gluster Community,
>
> In glusterd.vol there are parameters that define the port range for the
> bricks. They are commented out by default:
>
> # option base-port 49152
> # option max-port 65535
> I assume that glusterd is not using this range
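If you do want glusterd to hand out brick ports from a specific range, the
options go into the same volume management block of glusterd.vol,
uncommented, followed by a glusterd restart (the values below are only an
illustration):

    volume management
        ...
        option base-port 49152
        option max-port 60999
    end-volume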
On Thu, Jan 31, 2019 at 2:14 AM Artem Russakovskii
wrote:
> Also, not sure if related or not, but I got a ton of these "Failed to
> dispatch handler" in my logs as well. Many people have been commenting
> about this issue here https://bugzilla.redhat.com/show_bug.cgi?id=1651246.
>
https://review
Also, not sure if related or not, but I got a ton of these "Failed to
dispatch handler" in my logs as well. Many people have been commenting
about this issue here https://bugzilla.redhat.com/show_bug.cgi?id=1651246.
==> mnt-SITE_data1.log <==
> [2019-01-30 20:38:20.783713] W [dict.c:761:dict_ref]
I found a similar issue here:
https://bugzilla.redhat.com/show_bug.cgi?id=1313567. There's a comment from
3 days ago from someone else with 5.3 who started seeing the spam.
Here's the log entry that repeats over and over:
[2019-01-30 20:23:24.481581] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/
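To get a sense of how fast the spam grows, counting the repeated warning
in the affected client log is a quick check (log name as in the tail
output above):

    grep -c 'dict.c:761:dict_ref' /var/log/glusterfs/mnt-SITE_data1.log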
I run Fedora 29 clients and servers, with user home folders mounted on
gluster. This worked fine with Fedora 27 clients, but on F29 clients the
chrome and chromium browsers crash. The backtrace info (see below)
suggests problems with sqlite. Anyone else run into this? gluster and
sqlite have ha
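One way to narrow down whether sqlite itself misbehaves on the
gluster-backed home directories is to exercise a scratch database directly
on the mount (path and file name are placeholders):

    sqlite3 /home/testuser/scratch.db 'CREATE TABLE t(x); INSERT INTO t VALUES(1);'
    sqlite3 /home/testuser/scratch.db 'PRAGMA integrity_check;'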
Hi Atin,
yes, that worked, thank you.
What would be the cause of this issue?
On Fri, Jan 25, 2019 at 1:56 PM Atin Mukherjee wrote:
> Amudhan,
>
> So here's the issue:
>
> In node3, 'cat /var/lib/glusterd/peers/*' doesn't show node2's details,
> and that's why glusterd wasn't able to reso
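For the archives, the usual recovery for a missing peer file (and
presumably what worked here) is to copy it over from a node that still has
it, with glusterd stopped on the broken node:

    # on node3:
    systemctl stop glusterd
    # on a healthy node; <uuid> is node2's UUID as listed in its peers dir:
    scp /var/lib/glusterd/peers/<uuid> node3:/var/lib/glusterd/peers/
    # on node3:
    systemctl start glusterd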
Hi,
a bit of additional info inline.
On Monday, 28 Jan 2019 at 10:23 +0100, Frank Ruehlemann wrote:
> On Monday, 28 Jan 2019 at 09:50 +0530, Nithya Balachandran wrote:
> >
> > On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick <
> > g.amed...@uni-luebeck.de> wrote:
> >
> > >
> > > Hi all,
> >
Hello Gluster Community,
today I got the same error messages in glusterd.log when setting volume
options of a freshly created volume. See the log entry:
[2019-01-30 10:15:55.597268] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xdad2a)
[0x7f08ce71ed2a]
-->/usr/li
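Those run.c/runner_log lines are logged at I (info) level and appear to
come from glusterd running the hook scripts that fire after a 'volume set';
the scripts can be listed to see what actually ran (default path assumed):

    ls /var/lib/glusterd/hooks/1/set/post/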