Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
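For context, counters like the ones above come from the heal summary command; a sketch, assuming the volume name gvol0 used elsewhere in these threads:

```shell
# Per-brick heal summary: every brick should show 'Connected'
# and all entry counts should be 0 on a healthy volume.
gluster volume heal gvol0 info summary
```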
Thank you,
--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Anant
> ------
> *From:* Gluster-users on behalf of
> David Cunningham
> *Sent:* 23 February 2023 9:56 PM
> *To:* gluster-users
> *Subject:* Re: [Gluster-users] Big problems after update to 9.6
>
>
> *EXTERNAL: Do not click links or open attachments i
Is it possible that version 9.1 and 9.6 can't talk to each other? My
understanding was that they should be able to.
On Fri, 24 Feb 2023 at 10:36, David Cunningham
wrote:
> We've tried to remove "sg" from the cluster so we can re-install the
> GlusterFS node on it, but the follo
> to just remove "sg" without trying to contact it?
On Fri, 24 Feb 2023 at 10:31, David Cunningham
wrote:
> Hello,
>
> We have a cluster with two nodes, "sg" and "br", which were running
> GlusterFS 9.1, installed via the Ubuntu package manager. We updated
b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0
Thanks for your help,
Hi Strahil and Ravi,
Thank you very much for your replies, that makes sense.
On Thu, 21 Oct 2021 at 03:25, Ravishankar N
wrote:
> Hi David,
>
> On Wed, Oct 20, 2021 at 6:23 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Hello,
>>
>&
state?
4. Is the outcome of conflict resolution at a file level the same whether
node3 is a full replica or just an arbiter?
Thank you very much for any advice,
Strahil Nikolov
>
> Sent from Yahoo Mail on Android
>
> On Sat, Aug 28, 2021 at 1:00, David Cunningham
> wrote:
>
>
> On Fri, Aug 27, 2021 at 7:01, David Cunningham
> wrote:
>
>
> [2021-08-25 20:10:44.803984 +] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk] 0-gvol0-client-1: remote
> operation failed. [{path=(null)}, {errno=22}, {error=Invalid argument}]
> [2021-08-25 20:20:45.132601 +] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:
0] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk] 0-gvol0-client-0: remote
operation failed. [{path=(null)}, {errno=22}, {error=Invalid argument}]
... repeated...
> >
> > You can check if gluster issue #876 matches your case.
> >
> > Which version of gluster are you using ?
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> > On Thu, Aug 12, 2021 at 6:34, David Cunningham
> > wrote:
he files are written
correctly, with the full file data, and there's no error. So it appears
that the problem does not occur if the destination file exists.
Does that give anyone a clue as to what's happening? Thanks.
On Thu, 12 Aug 2021 at 13:50, David Cunningham
wrote:
> Hi,
>
> Gilberto, s
> Those options you need put it in the NFS server options, generally in
> /etc/exports
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> Em ter., 10 de ago. de 2021 às 18:24, David Cunningham <
> dcunning...@voisonics.
USE, right?
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Aug 11, 2021 at 0:24, David Cunningham
> wrote:
> Hi Strahil and Gilberto,
>
> Thanks very much for your replies. SELinux is disabled on the NFS server
> (and the client too), and both have the same UID and GID
Hey David,
>>
>> can you give the volume info ?
>>
>> Also, I assume SELINUX is in permissive/disabled state.
>>
>> What about the uid of the user on the NFS client and the NFS server? Is
>> it the same?
>>
>> Best Regards,
>> Strahil
:32, Ravishankar N wrote:
>
>
> On Tue, Aug 10, 2021 at 3:23 PM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Thanks Ravi, so if I understand correctly latency to all the nodes
>> remains an issue on all file reads.
>>
>>
> Hi David, ye
Thanks Ravi, so if I understand correctly latency to all the nodes remains
an issue on all file reads.
On Tue, 10 Aug 2021 at 16:49, Ravishankar N wrote:
>
>
> On Tue, Aug 10, 2021 at 8:07 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Hi Gionatan,
5), client:
CTX_ID:8f69363a-f0f4-44e1-84e9-69dfa77a8164-GRAPH_ID:0-PID:2657-HOST:gfs1.company.com-PC_NAME:gvol0-client-0-RECON_NO:-0,
error-xlator: gvol0-access-control [Permission denied]
ch can really be local (ie: the
> requested file is available) should not suffer from remote party
> latency.
> Is that correct?
>
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG
icks
> are running.
>
> Keep in mind that thin arbiter is less used. For example, I have never
> deployed a thin arbiter.
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Aug 3, 2021 at 7:40, David Cunningham
> wrote:
> Hi Strahil,
>
> I registered and
e is 'remote' , then you can give a try to gluster's thin arbiter (for
> the 'remote' node).
>
>
> Best Regards,
> Strahil Nikolov
>
> On Mon, Aug 2, 2021 at 5:02, David Cunningham
> wrote:
> Hi Ravi and Strahil,
>
> Thanks again for your responses. Having one brick
> https://lists.gluster.org/pipermail/gluster-users/2015-June/022288.html
>
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8
>
the least outstanding read requests.
4 = brick having the least network ping latency.
Thanks again.
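If the numbered policies above are values of the `cluster.read-hash-mode` volume option (my assumption; check your version's documentation), selecting the latency-based policy would look roughly like:

```shell
# Hedged sketch: assumes this GlusterFS version supports
# read-hash-mode value 4 (brick with least network ping latency).
gluster volume set gvol0 cluster.read-hash-mode 4
# Confirm the option took effect:
gluster volume get gvol0 cluster.read-hash-mode
```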
On Tue, 27 Jul 2021 at 19:16, Yaniv Kaul wrote:
>
>
> On Tue, Jul 27, 2021 at 9:50 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Hello,
>>
>>
up the read
by simply reading the file from the fastest node. This would be especially
beneficial if some of the other nodes have higher latency from the client.
Is it possible to do this? Thanks in advance for any assistance.
a geo-replication "stop" and then "start" and are
pleased to see the two new master nodes are now in "Passive" status. Thank
you for your help!
On Tue, 1 Jun 2021 at 10:06, David Cunningham
wrote:
> Hi Aravinda,
>
> Thank you very much - we will give that a try.
>
Hi Aravinda,
Thank you very much - we will give that a try.
On Mon, 31 May 2021 at 20:29, Aravinda VK wrote:
> Hi David,
>
> On 31-May-2021, at 10:37 AM, David Cunningham
> wrote:
>
> Hello,
>
> We have a GlusterFS configuration with mirrored nodes on the maste
would normally have something like:
master A -> secondary A
master B -> secondary B
master C -> secondary C
so that any master or secondary node could go offline but geo-replication
would keep working.
Thank you very much in advance.
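For what it's worth, geo-replication is configured per volume rather than per node pair, so a single session covers all master nodes; a sketch with a hypothetical secondary host `secondary1` (push-pem is suggested elsewhere in these threads):

```shell
# Hedged sketch: secondary1 and the volume names are placeholders.
gluster volume geo-replication gvol0 secondary1::gvol0 create push-pem
gluster volume geo-replication gvol0 secondary1::gvol0 start
# Shows per-master-node Active/Passive status:
gluster volume geo-replication gvol0 secondary1::gvol0 status
```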
Regards,
> Strahil Nikolov
>
> On Tue, Mar 30, 2021 at 4:13, David Cunningham
> wrote:
> Thank you Strahil. So if we take into account the deprecated options from
> all release notes then the direct upgrade should be okay.
>
>
> On Fri, 26 Mar 2021 at 02:01, Strahil
elease notes (usually '.0) as some options are deprecated like tiering.
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Mar 23, 2021 at 2:47, David Cunningham
> wrote:
> Hello,
>
> We ended up restoring the backup since it was easy on a test system.
>
> Does anyo
in between?
Thanks in advance.
On Sat, 20 Mar 2021 at 09:58, David Cunningham
wrote:
> Hi Strahil,
>
> It's as follows. Do you see anything unusual? Thanks.
>
> root@caes8:~# ls -al /var/lib/glusterd/vols/gvol0/
> total 52
> drwxr-xr-x 3 root root 4096 Mar 18 17:06 .
> drwxr-x
w your
> volfile again
>
> What is the content of :
>
> /var/lib/glusterd/vols/gvol0 ?
>
>
> Best Regards,
>
> Strahil Nikolov
>
> On Fri, Mar 19, 2021 at 3:02, David Cunningham
> wrote:
> Hello,
>
> We have a single node/brick GlusterFS test system which
>
> Regards,
>
> Xavi
>
>
> On Wed, Mar 10, 2021 at 5:10 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Hello,
>>
>> We have a GlusterFS 5.13 server which also mounts itself with the native
>> FUSE client. Recently the FUSE
d we can't see any error
message explaining exactly why. Would anyone have an idea of where to look?
Since the logs from the time of the upgrade and reboot are a bit lengthy
I've attached them in a text file.
Thank you in advance for any advice!
not exist
Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Please specify a mount
point
Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Usage:
Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: man 8
/sbin/mount.glusterfs
processes are supposed to have gsyncd.log open?
If so, how do we tell them to close and re-open their file handle?
Thanks in advance!
On Tue, 25 Aug 2020 at 15:24, David Cunningham
wrote:
> Hello,
>
> We're having an issue with the rotated gsyncd.log not being released.
> Here
it's geo-replication and not the server that's the problem.
Thanks in advance,
lems here, upgrade took between 10
> to 20 minutes (wait until healing is done) - but no geo replication,
> so i can't say anything about that part.
>
> Best regards,
> Hubert
>
> Am Di., 25. Aug. 2020 um 05:47 Uhr schrieb David Cunningham
> :
> >
> > Hello,
> &g
are, or if a complete re-install
is necessary for safety.
We have a maximum window of around 4 hours for this upgrade and would not
want any significant risk of an unsuccessful upgrade at the end of that
time.
Is version 8.0 considered stable?
Thanks in advance,
ile-server=localhost --volfile-id=gvol0 --client-pid=-1
/tmp/gsyncd-aux-mount-Tq_3sU
Perhaps the problem is that the kill -HUP in the logrotate script doesn't
act on the right process? If so, does anyone have a command to get the
right PID?
Thanks in advance for any help.
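As a rough diagnostic sketch (my own, not from the thread; Linux-only and equivalent to `lsof /path/to/gsyncd.log`), you can scan `/proc` for every process that still holds the rotated file open:

```shell
# List PIDs holding a given file open by walking /proc/<pid>/fd
# (run as root to see other users' processes).
pids_with_open_file() {
    target=$(readlink -f "$1")
    for fd in /proc/[0-9]*/fd/*; do
        if [ "$(readlink -f "$fd" 2>/dev/null)" = "$target" ]; then
            # /proc/<pid>/fd/<n> -> the third path component is the PID
            echo "$fd" | cut -d/ -f3
        fi
    done | sort -un
}
```

Those PIDs would be the candidates for `kill -HUP`; whether gsyncd actually reopens its log on SIGHUP isn't confirmed in this thread, so it's worth trying on a non-production node first.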
le debug on the python script ?
>
>
> Best Regards,
> Strahil Nikolov
>
>
> На 12 юни 2020 г. 6:49:57 GMT+03:00, David Cunningham <
> dcunning...@voisonics.com> написа:
> >Hi Strahil,
> >
> >Is there a trick to getting the .gfid directory to appear beside
the problem ?
>
>
> Best Regards,
> Strahil Nikolov
>
> На 11 юни 2020 г. 3:15:36 GMT+03:00, David Cunningham <
> dcunning...@voisonics.com> написа:
> >Hi Strahil,
> >
> >Thanks for that. I did search for a file with the gfid in the name, on
> >both
&g
t; Once you have the full path to the file , test:
> - Mount with FUSE
> - Check file exists ( no '??' for permissions, size, etc) and can be
> manipulated (maybe 'touch' can be used ?)
> - Find (on all replica sets ) the file and check the gfid
> - Check for heals pending for tha
was "normal" for the push node (which could be
> another one) .
>
> As this script is python, I guess you can put some debug print
> statements in it.
>
> Best Regards,
> Strahil Nikolov
>
> На 9 юни 2020 г. 5:07:11 GMT+03:00, David Cunningham <
> dcunning.
age of CPU if it was not doing so
> previously.
>
> On Mon, 8 Jun 2020 at 05:29, David Cunningham
> wrote:
> >
> > Hi Strahil,
> >
> > The CPU is still quite high, with "top" regularly showing 100% CPU usage
> by that process. However it's not clear w
lave logs.
>
> Does the issue still occurs ?
>
> Best Regards,
> Strahil Nikolov
>
> На 6 юни 2020 г. 1:21:55 GMT+03:00, David Cunningham <
> dcunning...@voisonics.com> написа:
> >Hi Sunny and Strahil,
> >
> >Thanks again for your responses. We don't have a
e CPU hog.
> >
> > Sadly, I can't find an instruction for increasing the log level of the
> geo rep log .
> >
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> > На 2 юни 2020 г. 6:14:46 GMT+03:00, David Cunningham <
> dcunning...@v
ile on both source and destination,
> do they really match or they are different ?
>
> What happens when you move away the file from the slave , does it fixes
> the issue ?
>
> Best Regards,
> Strahil Nikolov
>
> На 30 май 2020 г. 1:10:56 GMT+03:00, David Cunningham &l
ent in 5.1, and we're running 5.12 on the master nodes
and slave. A couple of GlusterFS clients connected to the master nodes are
running 5.13.
Would anyone have any suggestions? Thank you in advance.
native client.
BTW, under normal circumstances when the client checks all bricks, does
that include checking an arbiter? Or are arbiters not checked?
On Sat, 25 Apr 2020 at 19:41, Strahil Nikolov wrote:
> On April 25, 2020 9:00:30 AM GMT+03:00, David Cunningham <
> dcunning...@vois
are talking about replica volumes, in which case the read
> does happen from only one of the replica bricks. The client only sends
> lookups to all the bricks to figure out which are the good copies. Post
> that, the reads themselves are served from only one of the good copies.
>
> -Ravi
to be available, but it's actually
not critical if we end up with an old version of the file in the case of a
server down or net-split etc. Significantly improved read performance would
be desirable instead.
Thanks in advance for any help.
for? We're running
GlusterFS 5.12.
Thanks in advance,
:00, David Cunningham <
> dcunning...@voisonics.com> wrote:
> >Hi Hu.
> >
> >Just to clarify, what should we be looking for with "df -i"?
> >
> >
> >On Fri, 6 Mar 2020 at 18:51, Hu Bert wrote:
> >
> >> Hi,
> >>
>
Hi Hu.
Just to clarify, what should we be looking for with "df -i"?
On Fri, 6 Mar 2020 at 18:51, Hu Bert wrote:
> Hi,
>
> just a guess and easy to test/try: inodes? df -i?
>
> regards,
> Hubert
>
> Am Fr., 6. März 2020 um 04:42 Uhr schrieb David
reporting for brick’s `df` output?
>
> ```
> df /nodirectwritedata/gluster/gvol0
> ```
>
> —
> regards
> Aravinda Vishwanathapura
> https://kadalu.io
>
> On 06-Mar-2020, at 2:52 AM, David Cunningham
> wrote:
>
> Hello,
>
> A major concern we have is that "
ng on here?
Thanks in advance.
On Thu, 5 Mar 2020 at 21:35, David Cunningham
wrote:
> Hi Aravinda,
>
> Thanks for the reply. This test server is indeed the master server for
> geo-replication to a slave.
>
> I'm really surprised that geo-replication simply keeps writing logs
> https://github.com/gluster/glusterfs/issues/833#issuecomment-594436009
>
> If Changelogs files are causing issue, you can use archival tool to remove
> processed changelogs.
> https://github.com/aravindavk/archive_gluster_changelogs
>
> —
> regards
> Aravinda Vishwanathapura
> https://kadalu.io
>
>
>
-check: on
changelog.changelog: on
io
>
> On 04-Mar-2020, at 4:13 AM, David Cunningham
> wrote:
>
> Hi Strahil,
>
> The B cluster are communicating with each other via a LAN, and it seems
> the A cluster has got B's LAN addresses (which aren't accessible from the
> internet including the A cluster) through
to replicate
using public addresses instead of the LAN.
Thank you.
On Tue, 3 Mar 2020 at 18:07, Strahil Nikolov wrote:
> On March 3, 2020 4:13:38 AM GMT+02:00, David Cunningham <
> dcunning...@voisonics.com> wrote:
> >Hello,
> >
> >Thanks for that. When we re-t
> Please try with push-pem option during Geo-rep create command.
>
> —
> regards
> Aravinda Vishwanathapura
> https://kadalu.io
>
>
> On 02-Mar-2020, at 6:03 AM, David Cunningham
> wrote:
>
> Hello,
>
> We've set up geo-replication but it isn't actuall
On Tue, 25 Feb 2020 at 15:46, David Cunningham
wrote:
> Hi Aravinda and Sunny,
>
> Thank you for the replies. We have 3 replicating nodes on the master side,
> and want to geo-replicate their data to the remote slave side. As I
> understand it if the master node which had the g
master nodes if one of them goes down.
Thank you!
On Tue, 25 Feb 2020 at 14:32, Aravinda VK wrote:
> Hi David,
>
>
> On 25-Feb-2020, at 3:45 AM, David Cunningham
> wrote:
>
> Hello,
>
> I've a couple of questions on geo-replication that hopefully som
.With regard to copying SSH keys, presumably the SSH key of all master
nodes should be authorized on the geo-replication client side?
Thanks for your help.
mount of packages that the kernel has to
> process - but requires infrastructure to support that too. You can test by
> setting MTU on both sides to 9000 and then run 'tracepath remote-ip'. Also
> run a ping with a large size and the do-not-fragment flag -> 'ping -M do
> -s 89
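A sketch of that check, assuming MTU 9000 end to end: the ICMP payload is 9000 minus 28 bytes of IP and ICMP headers, i.e. 8972, and `-M do` forbids fragmentation so an oversized packet fails loudly instead of silently fragmenting. `remote-host` is a placeholder.

```shell
# Succeeds end-to-end only if jumbo frames work on the whole path.
ping -M do -s 8972 -c 3 remote-host
# tracepath also reports the discovered path MTU per hop.
tracepath remote-host
```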
> Are you using Jumbo frames (MTU 9000)?
> What is yoir brick's I/O scheduler ?
>
> Best Regards,
> Strahil Nikolov
> On Jan 7, 2020 01:34, David Cunningham wrote:
>
> Hi Strahil,
>
> We may have had a heal since the GFS arbiter node wasn't accessible from
> the GFS clients
e type as current one), the get the options for
> that volume and put them in a file and then bulk deploy via 'gluster volume
> setgroup custom-group' , where the file is located
> on every gluster server in the '/var/lib/gluster/groups' directory.
> Last , get rid of the
~]# gluster volume get all cluster.op-version
Option                                  Value
------                                  -----
cluster.op-version                      5
On Fri, 27 Dec 2019 at 14:22, David Cunningham
wrote:
> Hi Strahil,
>
> Our volume options are as belo
Hi David,
>
> On Dec 24, 2019 02:47, David Cunningham wrote:
> >
> > Hello,
> >
> > In testing we found that actually the GFS client having access to all 3
> nodes made no difference to performance. Perhaps that's because the 3rd
> node that wasn't accessible from
' which can be used for setting up a systemd
>> service to do that for you on shutdown.
>>
>> Best Regards,
>> Strahil Nikolov
>> On Dec 20, 2019 23:49, David Cunningham
>> wrote:
>>
>> Hi Stahil,
>>
>> Ah, that is an important point. One
Regards,
> Strahil Nikolov
> On Dec 20, 2019 23:49, David Cunningham wrote:
>
> Hi Stahil,
>
> Ah, that is an important point. One of the nodes is not accessible from
> the client, and we assumed that it only needed to reach the GFS node that
> was mounted so didn't
Best Regards,
> Strahil Nikolov
>
> В петък, 20 декември 2019 г., 01:49:56 ч. Гринуич+2, David Cunningham <
> dcunning...@voisonics.com> написа:
>
>
> Hi Strahil,
>
> The chart attached to my original email is taken from the GFS server.
>
> I'm not sure what y
>
> В четвъртък, 19 декември 2019 г., 02:28:55 ч. Гринуич+2, David Cunningham <
> dcunning...@voisonics.com> написа:
>
>
> Hi Raghavendra and Strahil,
>
> We are using GFS version 5.6-1.el7 from the CentOS repository.
> Unfortunately we can't modify the application
n the client mounts? As it
> is mostly static content it would help to use the kernel caching and
> read-ahead mechanisms.
>
> I think the default is enabled.
>
> Regards,
>
> Jorick Astrego
> On 12/19/19 1:28 AM, David Cunningham wrote:
>
> Hi Raghavendra and
id=1393419
>
> On Wed, Dec 18, 2019 at 2:50 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Hello,
>>
>> We switched a production system to using GFS instead of NFS at the
>> weekend, however it didn't go well on Monday when full load hit. The
it? NFS traffic doesn't exceed 4MBps, so 120MBps
for GFS seems awfully high.
It would also be good to have faster read performance from GFS, but that's
another issue.
Thanks in advance for any assistance.
> https://github.com/gluster/glusterfs/issues/763
>
> Looks like there are some more minor issues in v7.0. I am planning to send
> fixes soon, so these can be fixed in v7.1
>
> -Amar
>
>
> On Tue, Nov 19, 2019 at 2:49 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>&g
> https://review.gluster.org/#/c/glusterfs/+/22992/ - release 7
> https://review.gluster.org/#/c/glusterfs/+/22612/ - master
>
>
> I am trying to come up with modified document/blog for this asap.
>
> ---
> Ashish
>
>
> --
> *From: *"David Cunningham"
&
itting in master won't be enough for it to make it to a release. If
>> it has to be a part of release 6 then after being committed into master we
>> have to back port it to the release 6 branch and it should get committed in
>> that particular branch as well. Only then it will be
t has to be a part of release 6 then after being committed into master we
> have to back port it to the release 6 branch and it should get committed in
> that particular branch as well. Only then it will be a part of the package
> released for that branch.
>
>
> On Wed, 19 Jun, 2019
s been back ported to the particular release branch before tagging, then
> it will be a part of the tagging. And this tag is the one used for creating
> packaging. This is the procedure for CentOS, Fedora and Debian.
>
> Regards,
> Hari.
>
> On Tue, 18 Jun, 2019, 4:06 AM Dav
oon as we are in last phase of patch reviews. You
> can follow this patch - https://review.gluster.org/#/c/glusterfs/+/22612/
>
> ---
> Ashish
>
> ------
> *From: *"David Cunningham"
> *To: *"Ashish Pandey"
> *Cc: *"gluster-users&
Hi Ashish and Amar,
Is there any news on when thin-arbiter might be in the regular GlusterFS,
and the CentOS packages please?
Thanks for your help.
On Mon, 6 May 2019 at 20:34, Ashish Pandey wrote:
>
>
> --
> *From: *"David Cunningham"
, 3 юни 2019 г., 18:16:00 ч. Гринуич-4, David Cunningham <
> dcunning...@voisonics.com> написа:
>
>
> Hello all,
>
> We confirmed that the network provider blocking port 49152 was the issue.
> Thanks for all the help.
>
>
> On Thu, 30 May 2019 at 16:11, Strahil
> it's definately a firewall.
>
> Best Regards,
> Strahil Nikolov
> On May 30, 2019 01:33, David Cunningham wrote:
>
> Hi Ravi,
>
> I think it probably is a firewall issue with the network provider. I was
> hoping to see a specific connection failure message we cou
see a "Connected to gvol0-client-1" in the log. Perhaps a
> firewall issue like the last time? Even in the earlier add-brick log from
> the other email thread, connection to the 2nd brick was not established.
>
> -Ravi
> On 29/05/19 2:26 PM, David Cunningham wrote:
>
> Hi Ravi
on on gfs2N/A N/AY
7634
Task Status of Volume gvol0
--
There are no active volume tasks
On Wed, 29 May 2019 at 16:26, Ravishankar N wrote:
>
> On 29/05/19 6:21 AM, David Cunningham wrote:
>
:435:gf_client_unref] 0-gvol0-server: Shutting down connection
CTX_ID:30d74196-fece-4380-adc0-338760188b81-GRAPH_ID:0-PID:7718-HOST:gfs2.xxx.com-PC_NAME:gvol0-client-2-RECON_NO:-0
Thanks in advance for any assistance.
ll along.
Thanks for all your help.
On Sat, 25 May 2019 at 01:49, Ravishankar N wrote:
> Hi David,
> On 23/05/19 3:54 AM, David Cunningham wrote:
>
> Hi Ravi,
>
> Please see the log attached.
>
> When I grep -E "Connected to |disconnected from"
> gvo
brick and attach the
> gvol0-add-brick-mount.log here. After that, you can change the
> client-log-level back to INFO.
>
> -Ravi
> On 22/05/19 11:32 AM, Ravishankar N wrote:
>
>
> On 22/05/19 11:23 AM, David Cunningham wrote:
>
> Hi Ravi,
>
> I'd already done ex
gt; 4. `gluster volume add-brick gvol0 replica 3 arbiter 1
> gfs3:/nodirectwritedata/gluster/gvol0` from gfs1.
>
> 5. Check that the files are getting healed on to the new brick.
> Thanks,
> Ravi
> On 22/05/19 6:50 AM, David Cunningham wrote:
>
> Hi Ravi,
>
> Certainly. O
9 at 12:43, Ravishankar N wrote:
> Hi David,
> Could you provide the `getfattr -d -m. -e hex
> /nodirectwritedata/gluster/gvol0` output of all bricks and the output of
> `gluster volume info`?
>
> Thanks,
> Ravi
> On 22/05/19 4:57 AM, David Cunningham wrote:
>
> Hi
rote:
>
>>
>>
>> On Fri, 17 May 2019 at 06:01, David Cunningham
>> wrote:
>>
>>> Hello,
>>>
>>> We're adding an arbiter node to an existing volume and having an issue.
>>> Can anyone help? The root cause error appears to be
>>>
8 May 2019 at 22:34, Strahil wrote:
> Just run 'gluster volume heal my_volume info summary'.
>
> It will report any issues - everything should be 'Connected' and show '0'.
>
> Best Regards,
> Strahil Nikolov
> On May 18, 2019 02:01, David Cunningham wrote:
>
> Hi Ra
>
> On 17/05/19 5:59 AM, David Cunningham wrote:
>
> Hello,
>
> We're adding an arbiter node to an existing volume and having an issue.
> Can anyone help? The root cause error appears to be
> "----0001: failed to resolve (Transport
> endp
eplicate*.node-uuid=2069cfb3-c798-47e3-8cf8-3c584cf7c412 --process-name
glustershd
root 16856 16735 0 21:21 pts/000:00:00 grep --color=auto gluster
ing thin-arbiter support on glusted however,
> it is not available right now.
> https://review.gluster.org/#/c/glusterfs/+/22612/
>
> ---
> Ashish
>
> --
> *From: *"David Cunningham"
> *To: *gluster-users@gluster.org
> *Sent: *Friday, May 3,
been sent and it only requires reviews. I hope it
> should be completed in next 1 month or so.
> https://review.gluster.org/#/c/glusterfs/+/22612/
>
> ---
> Ashish
>
> ------
> *From: *"David Cunningham"
> *To: *"Ashish Pandey"
>