Hi Peter,
Have you disabled Gluster-NFS ACLs?
Please check the option value -
# gluster v info | grep nfs.acl
nfs.acl: ON
Also please provide the details of the nfs-client you are using.
Typically, NFS clients seem to issue getxattr before doing
setxattr/removexattr operations and return
The option is shown in 'gluster volume
info' only if we explicitly modify its value to ON/OFF.
Can you please verify whether the filesystem on which your Gluster bricks
were created has been mounted with ACLs enabled?
Thanks,
Soumya
Thanks
-Peter
From: Soumya Koduri [skod...@redhat.com]
Hi Peter,
Can you please try manually mounting those volumes using any/other nfs
client and check if you are able to perform write operations. Also
please collect the gluster nfs log while doing so.
Thanks,
Soumya
On 01/22/2015 08:18 AM, Peter Auyeung wrote:
Hi,
We have been having 5
:afr_log_self_heal_completion_status]
0-sas02-replicate-2: metadata self heal is successfully completed, metadata
self heal from source sas02-client-4 to sas02-client-5, metadata - Pending
matrix: [ [ 0 0 ] [ 0 0 ] ], on /RepDBSata02
Thanks
Peter
From: Soumya Koduri
We are sorry for the inconvenience caused during the hangout session.
There is a network outage at our place. We shall do the recording again
and share the link sometime next week.
Thanks,
Soumya
On 04/30/2015 06:08 PM, Niels de Vos wrote:
On Wed, Apr 29, 2015 at 11:20:20AM -0400, Meghana
!
Cheers,
Alessandro
On 08 Jun 2015, at 13:53, Soumya Koduri skod...@redhat.com wrote:
Hi,
Please find the slides of the demo video at [1]
We recommend having a distributed replicated volume as the shared volume for
better data availability.
The size of the volume depends
09 Jun 2015, at 08:06, Soumya Koduri
skod...@redhat.com wrote:
On 06/09/2015 01:31 AM, Alessandro De Salvo wrote:
OK, I found at least one of the bugs.
The /usr/libexec/ganesha/ganesha.sh has the following lines:
if [ -e /etc/os-release
On 06/09/2015 02:06 PM, Alessandro De Salvo wrote:
Hi Soumya,
On 09 Jun 2015, at 08:06, Soumya Koduri skod...@redhat.com wrote:
On 06/09/2015 01:31 AM, Alessandro De Salvo wrote:
OK, I found at least one of the bugs.
The /usr/libexec/ganesha/ganesha.sh has
,
Alessandro
On 08 Jun 2015, at 19:30, Soumya Koduri skod...@redhat.com wrote:
On 06/08/2015 08:20 PM, Alessandro De Salvo wrote:
Sorry, just another question:
- in my installation of gluster 3.7.1 the command gluster features.ganesha
enable does not work
Hi,
Please find the slides of the demo video at [1]
We recommend having a distributed replicated volume as the shared volume for
better data availability.
The size of the volume depends on the workload you may have. Since it is
used to maintain the states of NLM/NFSv4 clients, you may calculate the
[mailto:achir...@redhat.com]
Sent: Tuesday, June 02, 2015 11:18 PM
To: benchu...@iii.org.tw; gluster-users@gluster.org
Cc: Meghana Madhusudhan; Soumya Koduri
Subject: Re: [Gluster-users] gluster-3.7 cannot start volume ganesha feature
cannot turn on problem
Can you please attach the glusterd logs here? You
'Volume (null)' failed on localhost
By the way, does glusterfs HA for ganesha support pNFS?
Many thanks,
Ben
-----Original Message-----
From: Soumya Koduri [mailto:skod...@redhat.com]
Sent: Wednesday, June 03, 2015 3:35 PM
To: 莊尚豪; 'Anoop C S'; gluster-users@gluster.org
Cc: 'Meghana Madhusudhan
with nfs-ganesha 2.1.0, so it must be
something introduced with 2.2.0.
Cheers,
Alessandro
On Tue, 2015-06-09 at 15:16 +0530, Soumya Koduri wrote:
On 06/09/2015 02:48 PM, Alessandro De Salvo wrote:
Hi,
OK, the problem with the VIPs not starting is due to the ganesha_mon
heartbeat script
Thanks,
Alessandro
On 09 Jun 2015, at 18:37, Soumya Koduri skod...@redhat.com wrote:
On 06/09/2015 09:47 PM, Alessandro De Salvo wrote:
Another update: the fact that I was unable to use vol set ganesha.enable
was due to another bug in the ganesha scripts. In short
...@lists.sourceforge.net, Soumya
Koduri skod...@redhat.com
Sent: Thursday, June 18, 2015 7:24:55 PM
Subject: Re: [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh
and ganesha dbus interface when disabling exports from gluster
Hi Meghana,
On 18 Jun 2015, at 07:04, Meghana
On 06/18/2015 07:39 PM, Malahal Naineni wrote:
I still have not looked at the log messages, but I see the dbus thread
waiting for the upcall thread to complete when an export is removed. Is
there a time limit on how long the upcall thread can stay blocked?
A variable called 'destroy_mode' is used
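As a generic illustration of putting a time limit on such a wait (this is
not nfs-ganesha's actual dbus/upcall code; all names below are made up),
a bounded wait can be built with pthread_cond_timedwait():
===
/* Minimal sketch: bound a wait on another thread with a deadline,
 * so the waiter gives up instead of blocking indefinitely.
 * Illustrative only; not the nfs-ganesha implementation. */
#include <stdio.h>
#include <errno.h>
#include <time.h>
#include <unistd.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int upcall_done;            /* set when the upcall thread finishes */

/* Wait for the upcall thread to finish, but give up after 'secs'. */
static int wait_for_upcall(int secs)
{
    struct timespec deadline;
    int rc = 0;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += secs;

    pthread_mutex_lock(&lock);
    while (!upcall_done && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    pthread_mutex_unlock(&lock);
    return rc;                     /* 0 on success, ETIMEDOUT if we gave up */
}

static void *upcall_thread(void *arg)
{
    (void)arg;
    sleep(1);                      /* simulate in-flight upcall work */
    pthread_mutex_lock(&lock);
    upcall_done = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, upcall_thread, NULL);
    printf("wait rc=%d\n", wait_for_upcall(5));
    pthread_join(t, NULL);
    return 0;
}
===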
On 06/17/2015 10:57 PM, Alessandro De Salvo wrote:
Hi,
when disabling exports from gluster 3.7.1, by using gluster vol set volume
ganesha.enable off, I always get the following error:
Error: Dynamic export addition/deletion failed. Please see log file for details
This message is produced by
, Soumya Koduri skod...@redhat.com wrote:
Hi Alessandro,
Response inline.
On 06/17/2015 08:07 PM, Alessandro De Salvo wrote:
Hi,
I do not seem to be able to find an option that automatically carries the
volume restrictions I can set with auth.allow over into a ganesha export.
As far as I have
On 06/02/2015 04:38 PM, Anoop C S wrote:
On 06/02/2015 01:42 PM, 莊尚豪 wrote:
Hi all,
I have two questions about glusterfs-3.7 on fedora-22.
I used to have a glusterfs cluster running version 3.6.2.
The following configuration works in version 3.6.2, but not
in version 3.7.
There are 2 nodes for
? That would be
wrong.
Thanks,
Alessandro
On Wed, 2015-06-10 at 15:28 +0530, Soumya Koduri wrote:
On 06/10/2015 05:49 AM, Alessandro De Salvo wrote:
Hi,
I have enabled the full debug already, but I see nothing special. Before
exporting any volume the log shows no error, even when I do
Hi Louis,
AFAIK, we have never tested nfs-ganesha+glusterfs with a Windows client.
It may be good if you can collect and provide packet traces/cores or
logs (with nfs-ganesha at least at NIV_DEBUG level) on the server side
while you run these tests, to debug further.
Thanks,
Soumya
On
It depends on the workload. Like native NFS, even with NFS-Ganesha, data is
routed through the server it is mounted from. In addition, the NFSv4.x protocol
adds more complexity and cannot be directly compared with NFSv3 traffic.
However, with pNFS, I/O is routed to the data servers directly by the
On 08/19/2015 09:53 PM, shacky wrote:
Hi Atin,
thank you very much for your answer.
It seems like your NFS kernel module is not disabled. Please try disabling
it and remounting.
I tried but it did not solve my problem:
# service nfs-common stop
Stopping NFS common utilities: idmapd statd.
getfacl: Removing leading '/' from absolute path names
# file: mnt/dir5
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
#
These ACLs are displayed, however, when getfacl is run on the brick path directly.
Thanks,
Soumya
On 31 Jul 2015, at 09:35, Soumya Koduri skod...@redhat.com wrote:
I have tested it using the gluster-NFS
this, along with the ACCESS ACL, please set an inherit/default ACL
on that directory and check the getfacl output.
Thanks,
Soumya
On 07/31/2015 05:51 PM, Soumya Koduri wrote:
On 07/31/2015 05:33 PM, Jüri Palis wrote:
Playing around with my GlusterFS test setup I discovered the following
anomaly
On volume
I have tested it using the gluster-NFS server with GlusterFS version
3.7.* running on a RHEL7 machine and RHEL 6.7 as the NFS client. ACLs with
named groups got properly set on the directory.
Could you please provide us the packet trace (better taken on the server
side so that we can check
On 07/30/2015 03:36 PM, Pranith Kumar Karampuri wrote:
On 07/30/2015 03:34 PM, Soumya Koduri wrote:
On 07/30/2015 01:07 PM, Pranith Kumar Karampuri wrote:
Added folks who work on nfs on gluster. Let's see.
Pranith
On 07/30/2015 09:25 AM, Ryan Clough wrote:
Okay, I think this has
From the following errors,
[2015-07-21 14:36:30.495321] I [MSGID: 114020] [client.c:2118:notify]
0-vol_shared-client-0: parent translators are ready, attempting connect
on transport
[2015-07-21 14:36:30.498989] W [socket.c:923:__socket_keepalive]
0-socket: failed to set TCP_USER_TIMEOUT 0 on
On 07/21/2015 02:40 PM, Geoffrey Letessier wrote:
Dear all,
Is there a way to modify a GlusterFS volume's transport-type setting?
I previously set the transport-type parameter to tcp for my
main volume and would like to change it from tcp to rdma,tcp.
[1] captures most of the
On 11/05/2015 08:43 PM, Surya K Ghatty wrote:
All... I need your help! I am trying to setup Highly available
Active-Active Ganesha configuration on two glusterfs nodes based on
instructions here:
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/
are sufficient -
'glfs_h_lookupat' - to get a 'glfs_object' given a file path.
'glfs_h_anonymous_write' - takes the above handle as one of its inputs.
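As a minimal, untested sketch of how the two calls fit together (the volume
"testvol", host "server1" and path "/dir/file" are made-up examples; note
that newer releases add a 'follow' flag argument to glfs_h_lookupat()):
===
/* Sketch: write to a file via its handle using libgfapi handle ops.
 * Build (roughly): gcc demo.c -lgfapi */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");   /* hypothetical volume name */
    struct stat st;
    const char buf[] = "hello";

    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    if (glfs_init(fs) != 0)
        return 1;

    /* glfs_h_lookupat: resolve a file path to a glfs_object handle
     * (older 4-argument form shown; newer headers take a follow flag) */
    struct glfs_object *obj = glfs_h_lookupat(fs, NULL, "/dir/file", &st);
    if (obj != NULL) {
        /* glfs_h_anonymous_write: write through the handle, no open fd */
        glfs_h_anonymous_write(fs, obj, buf, strlen(buf), 0);
        glfs_h_close(obj);
    }
    glfs_fini(fs);
    return 0;
}
===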
Thanks,
Soumya
Ivica
On 24 Aug 2015, at 20:02, Soumya Koduri skod...@redhat.com wrote:
On 08/24/2015 11:24 PM, Ivica Siladic wrote:
Hi,
I'm doing a lot
Hi,
If you would like to contribute to GlusterFS, one of the easiest ways
to get familiar with the code is to fix defects reported
by the Coverity Scan tool.
The detailed process is mentioned in [1]. To summarize,
* Sign up as a member of https://scan.coverity.com/projects/987
*
you can download the same using the repos listed here
http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/
Thanks,
Soumya
On 09/04/2015 05:22 AM, Patrick Riehecky wrote:
I'm fairly close to a working ganesha/glusterfs setup, but I can't seem to
install glusterfs-ganesha. Any
On 09/13/2015 09:38 AM, Yaroslav Molochko wrote:
I wish this could be that simple:
root@PSC01SERV008:/var/lib# netstat -nap | grep 38465
root@PSC01SERV008:/var/lib# ss -n | grep 38465
root@PSC01SERV008:/var/lib#
2015-09-13 1:34 GMT+08:00 Atin Mukherjee
ng I could find on Google and this doesn't work.
Please, let's move on to something more sophisticated than restarting
glusterfs... I would not have contacted you if I had not tried restarting it
a dozen times.
Do you have any debugging steps to see what is really happening?
2015-09-15 1:55 GMT+08:00 Soumya Koduri
Small correction in the file I provided earlier. pmap_set returns 0 in
case of failure.
On 09/16/2015 12:08 AM, Soumya Koduri wrote:
/* pmap_set() returns 0 for FAIL and 1 for SUCCESS */
if (!(pmap_set (newprog->prognum, newprog->progver, IPPRO
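For reference, a minimal self-contained sketch (not gluster's actual code;
the program number and port are illustrative) using that return convention:
===
/* Sketch: register a program/version with the local portmapper.
 * pmap_set() returns 0 for FAIL and 1 for SUCCESS, as noted above.
 * Values are illustrative. Modern distros may need -ltirpc. */
#include <stdio.h>
#include <netinet/in.h>
#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>

int main(void)
{
    unsigned long prognum = 100005;   /* illustrative program number */
    unsigned long progver = 3;        /* illustrative version */
    unsigned short port   = 38465;    /* illustrative port */

    if (!pmap_set(prognum, progver, IPPROTO_TCP, port)) {
        fprintf(stderr, "pmap_set failed; is rpcbind running?\n");
        return 1;
    }
    printf("registered %lu v%lu on tcp port %hu\n", prognum, progver, port);
    pmap_unset(prognum, progver);     /* remove the test registration */
    return 0;
}
===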
Could you try
* disabling iptables (& firewalld if enabled)
* restarting the rpcbind service
* restarting glusterd
If this doesn't work, try the following (mentioned in one of the forums):
add the below line to the '/etc/hosts.allow' file.
ALL: 127.0.0.1 : ALLOW
Then restart the rpcbind and glusterd services.
Thanks,
Soumya
On
On 09/25/2015 04:04 PM, Pierre Léonard wrote:
Hi all,
I have 14 nodes with a replicated 2 volume. I want to suppress the
replication function. I that possible without losing data. Or Do I need
to make a save of the data before.
The only way I can imagine to avoid replication in case of
On 09/23/2015 07:34 AM, Premkumar Mani wrote:
Team,
We would like to implement gluster in our environment. Could you please
help me get more information about this product?
It may be good to start with the links below -
http://www.gluster.org/
http://gluster.readthedocs.org/en/latest/
[2015-09-21 08:18:51.504694] E [MSGID: 106062]
[glusterd-op-sm.c:3698:glusterd_op_ac_unlock] 0-management: Unable
to acquire volname
I have put the hostnames of all servers in my /etc/hosts file,
including the arbiter node.
On 18 September 2015 at 16:52, Soumya Koduri <skod...@redh
Hi,
This doc may come in handy for you to configure HA NFS -
https://github.com/soumyakoduri/glusterdocs/blob/ha_guide
/Administrator%20Guide/Configuring%20HA%20NFS%20Server.md
Thanks,
Soumya
On 09/21/2015 11:24 PM, Gluster Admin wrote:
Hi,
Can someone point me to the howto/docs on setting
; --
> > With Regards,
> > Jiffin
> >
> >
> >> [2015-09-21 07:59:48.653912] E [MSGID: 106123]
> >> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management:
Commit
> >> of operation 'Volu
safe, after disabling nfs-ganesha,
run the below script command
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /etc/ganesha
Thanks,
Soumya
On 22 September 2015 at 09:04, Soumya Koduri <skod...@redhat.com> wrote:
Hi Tiemen,
Have added the step
Hi Tiemen,
One of the prerequisites before setting up nfs-ganesha HA is to create
and mount the shared_storage volume. Use the below CLI for that:
"gluster volume set all cluster.enable-shared-storage enable"
It shall create the volume and mount it on all the nodes (including the
arbiter node). Note
On 12/10/2015 02:51 AM, Marco Antonio Carcano wrote:
Hi Kaleb,
thank you very much for the quick reply
I tried what you suggested, but I got the same error
I tried both
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_glstr01.carcano.local="192.168.65.250"
re ~1.8M files on this test volume.
On Friday, 25 December 2015 at 20:28:13 EET, Soumya Koduri wrote:
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS-Ganesha also consumes a huge amount of memory while doing an
ordinary "
On 12/28/2015 02:32 PM, Soumya Koduri wrote:
- Original Message -
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri"
<skod...@redhat.com>
Cc: gluster-users@glu
ave pasted
below apply to gfapi/nfs-ganesha applications.
Also, to resolve the nfs-ganesha issue which I had mentioned below (in
case the Entries_HWMARK option gets changed), I have posted the below fix -
https://review.gerrithub.io/#/c/258687
Thanks,
Soumya
Ideas?
05.01.2016 12:31, Sou
2016, 22:52:25 EET, Soumya Koduri wrote:
On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
Unfortunately, both patches didn't make any difference for me.
I've patched 3.7.6 with both patches, recompiled and installed the patched
GlusterFS package on the client side and mounted a volume with ~2M files
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote:
What units is Cache_Size measured in? Bytes?
It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you
please run the ganesha process under valgrind? It will help in detecting leaks.
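As a quick worked example of that formula (the Cache_Size value below is
purely illustrative):
===
/* Sketch: the cache costs roughly Cache_Size * sizeof(pointer) bytes. */
#include <stdio.h>

int main(void)
{
    unsigned long cache_size = 32768;             /* illustrative value */
    unsigned long bytes = cache_size * sizeof(void *);

    /* On a 64-bit host: 32768 * 8 = 262144 bytes = 256 KiB */
    printf("%lu bytes (%.1f KiB)\n", bytes, bytes / 1024.0);
    return 0;
}
===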
Thanks,
Soumya
25.12.2015 16:58, Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS-Ganesha also consumes a huge amount of memory while doing an
ordinary "find . -type f" via NFSv4.2 on a remote client. Here is memory
usage:
===
root 5416 34.2 78.5
.
https://gist.github.com/e4602a50d3c98f7a2766
One may see GlusterFS-related leaks here as well.
On Friday, 25 December 2015 at 20:28:13 EET, Soumya Koduri wrote:
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS
- Original Message -
> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri"
> <skod...@redhat.com>
> Cc: gluster-users@gluster.org, gluster-de...@gluste
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
+gluster-users
On 12/22/2015 06:03 PM, Hari Gowtham wrote:
Hi all,
There was a poll conducted to find the timing that best suits the people
who want to participate
in the weekly Gluster bug triage meeting. The result of the poll is yet to be
announced, but we would
like to get more
Hi,
Please find the minutes of today's Gluster Community Bug Triage meeting
below. Thanks to everyone who attended the meeting.
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html
Minutes (text):
Hi,
But are you telling me that in a 3-node cluster,
quorum is lost when one of the nodes' IPs is down?
Yes. It's a limitation of Pacemaker/Corosync. If the nodes
participating in the cluster cannot communicate with the majority of them
(quorum is lost), then the cluster is shut down.
However i
On 11/30/2015 03:26 PM, Soumya Koduri wrote:
Hi,
But are you telling me that in a 3-node cluster,
quorum is lost when one of the nodes' IPs is down?
Yes. It's a limitation of Pacemaker/Corosync. If the nodes
participating in the cluster cannot communicate with the majority of them
(quorum is lost
Hi,
On 11/27/2015 01:58 PM, ml wrote:
Dear All,
I am trying to get an nfs-ganesha HA cluster running, with 3 CentOS
Linux release 7.1.1503 nodes. I use the package glusterfs-ganesha-3.7.6
-1.el7.x86_64 to get the HA scripts. So far it works fine when I stop
the nfs-ganesha service on one of
On 11/17/2015 08:50 PM, Pierre Léonard wrote:
Hi all,
I have a cluster with 14 nodes. I have built a stripe 7 volume with the
14 nodes. Underneath I use XFS.
Locally I mount the global volume with NFS:
mount -t nfs 127.0.0.1:gvExport /glusterfs/gvExport -o
_netdev,nosuid,bg,exec
then I
On 11/17/2015 10:21 PM, Surya K Ghatty wrote:
Hi:
I am trying to understand whether it is technically feasible to have gluster
nodes on one machine and export a volume from one of these nodes using
an nfs-ganesha server installed on a totally different machine. I tried
the below, and showmount -e
On 01/08/2016 05:04 PM, Soumya Koduri wrote:
I could reproduce it while testing deep directories within the mount
point. I root-caused the issue and had a discussion with Pranith to
understand the purpose and recommended way of taking nlookup on inodes.
I shall make changes to my existing fix and
as lost. But most of the inodes should have been purged when
we drop the VFS cache. Did you drop the VFS cache before exiting the process?
I shall add some log statements and check that part.
Thanks,
Soumya
On 12.01.2016 08:24, Soumya Koduri wrote:
For the fuse client, I tried vfs drop_caches
On 01/13/2016 04:08 PM, Soumya Koduri wrote:
On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
Just in case, here is Valgrind output on FUSE client with 3.7.6 +
API-related patches we discussed before:
https://gist.github.com/cd6605ca19734c1496a4
Thanks for sharing the results. I made
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>:
On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote:
A brief test shows that Ganesha stopped leaking a
On 11.01.2016 12:26, Soumya Koduri wrote:
I have made changes to fix the lookup leak in a different way (as
discussed with Pranith) and uploaded them in the latest patch set #4
- http://review.gluster.org/#/c/13096/
Please check if it resolves the mem leak and hopefully doesn't result
in any
b2902bba1
[10] https://gist.github.com/385bbb95ca910ec9766f
[11] https://gist.github.com/685c4d3e13d31f597722
On 10.02.2016 15:37, Oleksandr Natalenko wrote:
Hi, folks.
Here go new test results regarding client memory leak.
I use v3.7.8 with the following patches:
===
Soumya Koduri (2):
FUSE client may be a good option
for us as well, but I can't seem to get speeds higher than 30 MB/s using the
Gluster FUSE client (I posted more details on that earlier today to this group
as well, looking for advice there).
-Kris
____
From: Soumya Ko
ix in fuse-bridge, revisited
Pranith Kumar K
(1):
mount/fuse: Fix use-after-free crash
Soumya Koduri (3):
gfapi: Fix inode nlookup counts
inode: Retire the inodes from the lru
list in inode_table_destroy
upcall: free the xdr* allocations
===
With those patches we got API leaks fix
-level, client/server/both.
Thanks,
Soumya
On 01.02.2016 09:54, Soumya Koduri wrote:
On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:
Unfortunately, this patch doesn't help.
RAM usage when "find" finishes is ~9G.
Here is the statedump before drop_caches: https://gist.github.com/fc1647de09
On 02/01/2016 02:48 PM, Xavier Hernandez wrote:
Hi,
On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and the glusterfs process's CPU usage
goes to 100% for a while. I haven't waited
On 01/27/2016 09:39 PM, Kris Laib wrote:
Hi all,
We're getting ready to roll out Gluster using standard NFS from the
clients, and CTDB and RRDNS to help facilitate HA. I thought we were
good to go, but recently had an issue where there wasn't enough memory
on one of the gluster nodes in a
On 02/12/2016 11:27 AM, Soumya Koduri wrote:
On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote:
And "API" test.
I used a custom API app [1] and did brief file manipulations through it
(create/remove/stat).
Then I performed drop_caches, finished API [2] and got the following
Valgr
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote:
Hmm, OK. I've rechecked 3.7.8 with the following patches (latest
revisions):
===
Soumya Koduri (3):
gfapi: Use inode_forget in case of handle objects
inode: Retire the inodes from the lru list in inode_table_destroy
rpc
On 02/20/2016 02:08 AM, Takemura, Won wrote:
I am a new user/admin of Gluster.
I am involved in a support issue where access to a gluster mount fails
and the error is displayed when users attempt to access the mount:
-bash: cd: /app/: Transport endpoint is not connected
This error is
Hi,
On 03/14/2016 04:06 AM, ML Wong wrote:
Running CentOS Linux release 7.2.1511, glusterfs 3.7.8
(glusterfs-server-3.7.8-2.el7.x86_64),
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
1) Ensured the connectivity between gluster nodes by using PING
2) Disabled NetworkManager (Loaded: loaded
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in
console during
cluster setup.
Please give it a try and let me know if you see any errors.
Thanks,
Soumya
Testing Environment: Running CentOS Linux release 7.2.1511, glusterfs
3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64),
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
On Mon, Mar 14, 2016 at 2:05 AM, S
The log file didn't have any errors logged. Please check the NFS client
logs in '/var/log/messages' or via dmesg, and the brick logs as well.
Probably strace or a packet trace could help too. You could use the below
command to capture the packet trace while running the I/Os on the node
where
Hi,
Please find the minutes of today's Gluster Community Bug Triage meeting
below. Thanks to everyone who attended the meeting.
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html
Minutes (text):
Even on my setup, if I change nfs.port, all the other services also
start registering on those ports. Can you please file a bug for it?
That seems like a bug (or is it intentional, Niels?).
153 tcp 2049 mountd
151 tcp 2049 mountd
133 tcp 2049
e it.
Regards,
Abhishek
On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri <skod...@redhat.com> wrote:
Hi Abhishek,
The below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must
be the reason the client is not able to set ACLs.
per
103 udp 111 portmapper
102 udp 111 portmapper
153 tcp 2049 mountd
151 tcp 2049 mountd
133 tcp 2049 nfs
1002273 tcp 2049
On Wed, May 4, 2016 at 12:09 PM, Soumya Koduri <skod...@
Hi Abhishek,
The below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must be the
reason the client is not able to set ACLs. Could you please check the log file
'/var/log/glusterfs/nfs.log' for any errors logged with respect
to protocol registration failures.
Thanks,
Soumya
On 05/04/2016
The inode stored in the shard xlator local is NULL. CCing Krutika to comment.
Thanks,
Soumya
(gdb) bt
#0 0x7f196acab210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1 0x7f196be7bcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2 0x7f195deb1787 in shard_common_inode_write_do
On 08/09/2016 09:06 PM, Mahdi Adnan wrote:
Hi,
Thank you for your reply.
The traffic is related to GlusterFS;
18:31:20.419056 IP 192.168.208.134.49058 > 192.168.208.134.49153: Flags
[.], ack 3876, win 24576, options [nop,nop,TS val 247718812 ecr
247718772], length 0
18:31:20.419080 IP
Hi,
Thanks to everyone who joined the meeting. Please find the minutes of
today's Gluster Community Bug Triage meeting at the below links.
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html
Minutes (text):
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
On 08/09/2016 03:33 PM, Mahdi Adnan wrote:
Hi,
I'm using NFS-Ganesha to access my volume. It's working fine for now, but
I'm seeing lots of traffic on the loopback interface; in fact, it's the
same amount of traffic as on the bonding interface. Can anyone please
explain to me why this is happening?
Hi,
We have noticed that many of the bugs (especially, in the recent past, the ones
filed against the 'tests' component) which are being actively worked on do
not have either the 'Triaged' keyword set or the bug status(/assignee) updated
appropriately. Sometimes even many of the active community members fail
Hi,
On 02/03/2017 07:52 AM, ML Wong wrote:
Hello All,
Any pointers will be very-much appreciated. Thanks in advance!
Environment:
Running CentOS 7.2.511
Gluster: 3.7.16, with nfs-ganesha on 2.3.0.1 from centos-gluster37 repo
sha1: cab5df4064e3a31d1d92786d91bd41d91517fba8 ganesha-ha.sh
we
On 02/27/2017 07:19 PM, Kevin Lemonnier wrote:
Hi,
I have a simple GlusterFS setup configured on a VM, with a single volume on a
single brick. It's
set up that way to replicate the production conditions as closely as possible,
but with no
replica, as it's just for dev.
Every few hours, the NFS
I am not sure if there are any outstanding issues with exposing a sharded
volume via gfapi. CCing Krutika.
On 02/28/2017 01:29 PM, Mahdi Adnan wrote:
Hi,
We have a Gluster volume hosting VMs for ESXi, exported via Ganesha.
I'm getting the following messages in ganesha-gfapi.log and ganesha.log
CCing gluster-devel & users MLs. Somehow they got missed in my earlier reply.
Thanks,
Soumya
On 09/06/2016 12:19 PM, Soumya Koduri wrote:
On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote:
hi,
Did you get a chance to decide on the nfs-ganesha integration
tests that need to be
On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote:
On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com> wrote:
Hi,
With http://review.gluster.org/#/c/15051/, performance
Hi,
With http://review.gluster.org/#/c/15051/, performance/client-io-threads
is enabled by default. But with that we see a regression that breaks the
nfs-ganesha application when it tries to un-/re-export any glusterfs volume.
This shall be the same case with any gfapi application using glfs_fini().
More
Hi,
Please find the minutes of today's Gluster Community Bug Triage meeting
at the links posted below. We had very few participants today as many
are traveling. Thanks to hgowtham and ankitraj for joining.
Minutes:
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
On 09/23/2016 02:34 AM, Dung Le wrote:
Hello,
I have a pretty straight forward configuration as below:
3 storage nodes running version 3.7.11 with a replica count of 3, using
native gluster NFS.
corosync version 1.4.7 and pacemaker version 1.1.12
I have DNS round-robin on 3 VIPs living on the
Hi,
On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote:
> +- Component GlusterFS
> |
> | +Subcomponent nfs
Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb?
+- Component gdeploy
| |
| +Subcomponent samba
| +Subcomponent hyperconvergence
| +Subcomponent RHSC