On Wed, 27 Mar 2019 at 21:47, wrote:
> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to fix the
> “Transport endpoint is not connected” failures for us.
>
> We did not have any of these failures in this past weekend's backup cycle.
>
> Thank you
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote:
On Wed, Mar 27, 2019 at 9:46 PM wrote:
> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to fix the
> “Transport endpoint is not connected” failures for us.
>
What was the version you saw failures in? Were there any logs matching the
pattern
I have had this issue for a few days with my new setup. I will have to get
back to you on versions, but it was CentOS 7.6 patched yesterday (27/3/2019).
On Thu, 28 Mar 2019 at 12:58, Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:
> On Wed, Mar 27, 2019 at 9:46 PM wrote:
> >
[This email was originally posted to the gluster-infra list. Since
that list is used for coordination between members who work on the
infrastructure of the project, I am redirecting it to gluster-users
for better visibility and responses.]
On Thu, Mar 28, 2019 at 1:13 AM Guy Boisvert wrote:
On Wed, Mar 27, 2019 at 9:46 PM wrote:
> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to fix the
> “Transport endpoint is not connected” failures for us.
>
> We did not have any of these failures in this past weekend's backup cycle.
>
> Thank
On Wed, 27 Mar 2019, 18:26 Pranith Kumar Karampuri wrote:
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote:
Thanks all for the help! The cluster has been up for a few hours now
with no reported errors, so I guess the replacement of the server ultimately
went fine ;-)
Ciao,
R
On Wed, 27 Mar 2019 07:53:55 -0700
Joe Julian wrote:
Ok Joe, this is the situation: I have a glusterfs cluster using Dell R630
servers with 256GB of memory, a bunch of 3.4TB SSDs, and Intel Xeon
E5-2667 beasts. Using such power and seeing glusterfs take 5
seconds for a simple "ls -alR" on a
Hello Amar and list,
I wanted to follow up to confirm that upgrading to 5.5 seems to fix the
"Transport endpoint is not connected" failures for us.
We did not have any of these failures in this past weekend's backup cycle.
Thank you very much for fixing whatever the problem was.
I
On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri wrote:
First, your statement and subject are hyperbolic and combative. In general, it's
best not to begin a request for help with an uneducated attack on a
community.
GFS (Global File System) is an entirely different project but I'm going to
assume you're in the right place and actually asking about
This feature is not under active development, as it was not widely used.
AFAIK it is not a supported feature.
+Nithya +Raghavendra for further clarifications.
Regards,
Poornima
On Wed, Mar 27, 2019 at 12:33 PM Lucian wrote:
> Oh, that's just what the doctor ordered!
> Hope it works, thanks
>
> On
On Wed, 27 Mar 2019 14:37:14 +
Marcelo Terres wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1673058
Ok, thnx, I missed that one (I didn't use the proper search arguments, I
guess). Hope this will resolve the problem. There is a 5.5-1 in Debian
experimental from the 25th of March, don't
https://bugzilla.redhat.com/show_bug.cgi?id=1673058
Regards,
Marcelo H. Terres
https://www.mundoopensource.com.br
https://twitter.com/mhterres
https://linkedin.com/in/marceloterres
On Wed, 27 Mar 2019 at 14:32, richard lucassen
wrote:
> Hello list,
>
> glusterfs 5.4-1 on Debian Buster (both
Hello list,
glusterfs 5.4-1 on Debian Buster (both servers and clients)
I'm quite new to GFS, and I know it's an old problem. When running a
simple "ls -alR" on a local directory containing 50MB and 3468 files it
takes:
real    0m0.567s
user    0m0.084s
sys     0m0.168s
Same thing for a copy of
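For comparison, here is a rough sketch of the kind of measurement being described, assuming a hypothetical GlusterFS mount at /mnt/gluster that holds a copy of the same directory tree:
```
# Baseline on the local filesystem
time ls -alR /path/to/local/dir > /dev/null

# Same tree on the GlusterFS mount (hypothetical mount point)
time ls -alR /mnt/gluster/dir > /dev/null
```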
I have two clusters with dispersed volumes (2+1) with geo-replication.
It works fine as long as I use glusterfs-fuse, but as soon as even one file is
written over nfs-ganesha, replication goes to Faulty and recovers after I
remove this file (sometimes after a stop/start).
I think nfs-ganesha writes the file in some way
The system is a 5-server, 20-brick distributed system with hardware-configured
RAID 6 underneath and XFS as the filesystem. This client is a data
collection node which transfers data to specific directories within one of the
gluster volumes.
I have a client with submounted directories
On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez wrote:
On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri wrote:
- Original Message -
From: "Atin Mukherjee"
To: "Rafi Kavungal Chundattu Parambil" , "Riccardo Murri"
Cc: gluster-users@gluster.org
Sent: Wednesday, March 27, 2019 4:07:42 PM
Subject: Re: [Gluster-users] cannot add server back to cluster after
reinstallation
On Wed, 27 Mar 2019 at
On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez wrote:
On Wed, Mar 27, 2019 at 11:54 AM Raghavendra Gowdappa wrote:
On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa wrote:
> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez wrote:
>> Hi Raghavendra,
>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa wrote:
>>> All,
>>> Glusterfs cleans up POSIX locks held on an fd when the
All,
We're seeing some issues with the default-provided logrotate configuration with
regard to the bitd.log files.
Logrotate has a postrotate script that runs "killall -HUP glusterfs" to make the
processes release the file handles and create a new logfile, and it uses
"delaycompress".
Recently we
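For context, a sketch of the kind of logrotate stanza being described, with a hypothetical log path; the actual file shipped by the packages may differ:
```
/var/log/glusterfs/*.log {
    weekly
    rotate 52
    missingok
    compress
    # keep the most recent rotated log uncompressed, as mentioned above
    delaycompress
    sharedscripts
    postrotate
        # ask the gluster processes to reopen their log files
        killall -HUP glusterfs
    endscript
}
```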
On Wed, Mar 27, 2019 at 4:22 PM Raghavendra Gowdappa wrote:
> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez wrote:
>> Hi Raghavendra,
>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa wrote:
>>> All,
>>> Glusterfs cleans up POSIX locks held on an fd when the
On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez wrote:
> Hi Raghavendra,
>
> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa
> wrote:
>
>> All,
>>
>> Glusterfs cleans up POSIX locks held on an fd when the client/mount
>> through which those locks are held disconnects from bricks/server.
On Wed, 27 Mar 2019 at 16:02, Riccardo Murri
wrote:
> Hello Atin,
>
> > Check cluster.op-version, peer status, volume status output. If they are
> all fine you’re good.
>
> Both `op-version` and `peer status` look fine:
> ```
> # gluster volume get all cluster.max-op-version
> Option
Hello Atin,
> Check cluster.op-version, peer status, volume status output. If they are all
> fine you’re good.
Both `op-version` and `peer status` look fine:
```
# gluster volume get all cluster.max-op-version
Option Value
--
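For reference, a minimal sketch of the checks suggested above; the exact output will vary by version and pool:
```
# Current operating version of the cluster
gluster volume get all cluster.op-version
# Highest op-version this pool can run
gluster volume get all cluster.max-op-version
# Every peer should show "Peer in Cluster (Connected)"
gluster peer status
# All bricks and self-heal daemons should be online
gluster volume status
```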
On Wed, 27 Mar 2019 at 15:24, Riccardo Murri
wrote:
> I managed to put the reinstalled server back into connected state with
> this procedure:
>
> 1. Run `for other_server in ...; do gluster peer probe $other_server;
> done` on the reinstalled server
> 2. Now all the peers on the reinstalled
+Sanju Rakonde & +Atin Mukherjee, adding glusterd folks who can help here.
On Wed, Mar 27, 2019 at 3:24 PM Riccardo Murri
wrote:
> I managed to put the reinstalled server back into connected state with
> this procedure:
>
> 1. Run `for other_server in ...; do gluster peer probe $other_server;
I managed to put the reinstalled server back into connected state with
this procedure:
1. Run `for other_server in ...; do gluster peer probe $other_server;
done` on the reinstalled server
2. Now all the peers on the reinstalled server show up as "Accepted
Peer Request", which I fixed with the
On 3/27/19 12:55 PM, Xavi Hernandez wrote:
Hi Raghavendra,
On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
All,
Glusterfs cleans up POSIX locks held on an fd when the client/mount
through which those locks are held disconnects from
Hello,
a couple of days ago, the OS disk of one of the servers of a local GlusterFS
cluster suffered a bad crash, and I had to reinstall everything from
scratch.
However, when I restart the GlusterFS service on the server that has
been reinstalled, I see that it sends back a "RJT" response to other
Hi Krutika, Leo,
Sounds promising. I will test this too, and report back tomorrow (or
maybe sooner, if corruption occurs again).
-- Sander
On 27-03-19 10:00, Krutika Dhananjay wrote:
> This is needed to prevent any inconsistencies stemming from buffered
> writes/caching file data during live
This is needed to prevent any inconsistencies stemming from buffered
writes/caching file data during live VM migration.
Besides, for Gluster to truly honor direct-io behavior in qemu's
'cache=none' mode (which is what oVirt uses),
one needs to turn on performance.strict-o-direct and disable
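For reference, a minimal sketch of turning on the option named above, assuming a hypothetical volume called myvol:
```
# Enable strict O_DIRECT handling on the volume (hypothetical volume name)
gluster volume set myvol performance.strict-o-direct on
```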
Hello,
following the announcement of GlusterFS 6, I tried to install the
package from the Ubuntu PPA on a 16.04 "xenial" machine, only to find
out that GlusterFS 6 is only packaged for Ubuntu "bionic" and up.
Is there an online page with a table or matrix detailing what versions
are packaged for
Hi Raghavendra,
On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa
wrote:
> All,
>
> Glusterfs cleans up POSIX locks held on an fd when the client/mount
> through which those locks are held disconnects from bricks/server. This
> helps Glusterfs to not run into a stale lock problem later (For
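As a hedged illustration of the locks being discussed, one way to observe them on the server side is a volume statedump; the volume name below is hypothetical, and the dump location and key names can vary by version and configuration:
```
# Trigger a statedump of the bricks of a hypothetical volume "myvol"
gluster volume statedump myvol

# Look for POSIX lock entries in the dump files (default location; may differ)
grep -i posixlk /var/run/gluster/*.dump.* | head
```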
Oh, that's just what the doctor ordered!
Hope it works, thanks
On 27 March 2019 03:15:57 GMT, Vlad Kopylov wrote:
>I don't remember if it still works
>NUFA
>https://github.com/gluster/glusterfs-specs/blob/master/done/Features/nufa.md
>
>v
>
>On Tue, Mar 26, 2019 at 7:27 AM Nux! wrote:
>
>>