Re: [Gluster-users] "Solving" a recurrent "performing entry selfheal on [...]" on my bricks

2018-10-09 Thread Vlad Kopylov
Isn't it trying to heal your dovecot-uidlist? Try updating, restarting, and
initiating the heal again.
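Incidentally, the GFIDs in those "performing entry selfheal on ..." messages
can be located directly on a brick: gluster keeps a backend entry for each
GFID at .glusterfs/<first two hex chars>/<next two hex chars>/<gfid>. A small
helper, with a hypothetical brick path for illustration:

```shell
# Map a GFID from glustershd.log to its backend path on a brick.
# Assumes the standard gluster backend layout .glusterfs/xx/yy/<gfid>;
# /srv/brick1 is a placeholder for your real brick path.
gfid_backend_path() {
  brick=$1
  gfid=$2
  p1=$(printf '%s' "$gfid" | cut -c1-2)   # first two hex chars
  p2=$(printf '%s' "$gfid" | cut -c3-4)   # next two hex chars
  printf '%s/.glusterfs/%s/%s/%s\n' "$brick" "$p1" "$p2" "$gfid"
}

gfid_backend_path /srv/brick1 9df5082b-d066-4659-91a4-5f2ad943ce51
# -> /srv/brick1/.glusterfs/9d/f5/9df5082b-d066-4659-91a4-5f2ad943ce51
```

For a directory GFID that backend entry is a symlink, so resolving it on each
brick makes it easy to compare what the replicas disagree about.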

-v

On Sun, Oct 7, 2018 at 12:54 PM Hoggins!  wrote:

> Hello list,
>
> My Gluster cluster has a condition, I'd like to know how to cure it.
>
> The setup: two bricks, replicated, with an arbiter.
> On brick 1, the /var/log/glusterfs/glustershd.log is quite empty, not
> much activity, everything looks fine.
> On brick 2, /var/log/glusterfs/glustershd.log shows a lot of these:
> [MSGID: 108026] [afr-self-heal-entry.c:887:afr_selfheal_entry_do]
> 0-mailer-replicate-0: performing entry selfheal on
> 9df5082b-d066-4659-91a4-5f2ad943ce51
> [MSGID: 108026] [afr-self-heal-entry.c:887:afr_selfheal_entry_do]
> 0-mailer-replicate-0: performing entry selfheal on
> ba8c0409-95f5-499d-8594-c6de15d5a585
>
> These entries are repeated every day, every ten minutes or so.
>
> Now if we list the contents of the directory represented by file ID
> 9df5082b-d066-4659-91a4-5f2ad943ce51:
> On brick 1:
> drwx--. 2 1005 users 102400 13 sept. 17:03 cur
> -rw---. 2 1005 users 22 14 mars   2016 dovecot-keywords
> -rw---. 2 1005 users  0  6 janv.  2015 maildirfolder
> drwx--. 2 1005 users  6 30 juin   2015 new
> drwx--. 2 1005 users  6  4 oct.  17:46 tmp
>
> On brick 2:
> drwx--. 2 1005 users 102400 25 mai   11:00 cur
> -rw---. 2 1005 users 22 14 mars   2016 dovecot-keywords
> -rw---. 2 1005 users  80559 25 mai   11:00 dovecot-uidlist
> -rw---. 2 1005 users  0  6 janv.  2015 maildirfolder
> drwx--. 2 1005 users  6 30 juin   2015 new
> drwx--. 2 1005 users  6  4 oct.  17:46 tmp
>
> (note the "dovecot-uidlist" file present on brick 2 but not on brick 1)
>
> Also, checking directory sizes for the cur/ directory:
> On brick 1:
> 165872cur/
>
> On brick 2:
> 161516cur/
>
> BUT the number of files is the same on the two bricks for the cur/
> directory:
> $~ ls -l cur/ | wc -l
> 1135
>
> So now you've got it: it's inconsistent between the two data bricks.
>
> On the arbiter, all seems good; the directory listing looks like what is
> on brick 2.
> Same kind of situation happens for file ID
> ba8c0409-95f5-499d-8594-c6de15d5a585.
>
> I'm sure that having this situation is not good and needs to be sorted
> out, so what can I do?
>
> Thanks for your help!
>
> Hoggins!
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Quick and small file read/write optimization

2018-10-09 Thread Vlad Kopylov
It also matters how you mount it:
glusterfs
defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5
0 0
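Pieced together as a complete fstab line (the server host, volume name, and
mount point here are placeholders, not from the original post):

```shell
# Build the fstab entry for a glusterfs fuse mount with the options above.
# "server1", "gvol", and /mnt/gvol are hypothetical; substitute your own.
opts='defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5'
printf '%s\n' "server1:/gvol /mnt/gvol glusterfs ${opts} 0 0"
```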


Options Reconfigured:
performance.io-thread-count: 8
server.allow-insecure: on
cluster.shd-max-threads: 12
performance.rda-cache-limit: 128MB
cluster.readdir-optimize: on
cluster.read-hash-mode: 0
performance.strict-o-direct: on
cluster.lookup-unhashed: auto
performance.nl-cache: on
performance.nl-cache-timeout: 600
cluster.lookup-optimize: on
client.event-threads: 4
performance.client-io-threads: on
performance.md-cache-timeout: 600
server.event-threads: 4
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
network.inode-lru-limit: 9
performance.cache-refresh-timeout: 10
performance.enable-least-priority: off
performance.cache-size: 2GB
cluster.nufa: on
cluster.choose-local: on
server.outstanding-rpc-limit: 128
disperse.eager-lock: off
nfs.disable: on
transport.address-family: inet
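Options like these are applied one at a time with `gluster volume set`. A dry
run that only prints the commands, for a hypothetical volume "gvol" (drop the
echo to run them for real on a gluster node):

```shell
# Dry run: print the "gluster volume set" commands for a few of the
# options above. "gvol" is a placeholder volume name.
vol=gvol
while read -r opt val; do
  echo "gluster volume set ${vol} ${opt} ${val}"
done <<'EOF'
performance.cache-size 2GB
performance.nl-cache on
client.event-threads 4
EOF
```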


On Tue, Oct 9, 2018 at 2:33 PM Pedro Costa  wrote:

> Hi,
>
>
>
> I have a 1 x 3 replicated glusterfs 4.1.5 volume that is mounted using fuse
> on each server into /www for various Node apps that are proxied with nginx.
> Servers are then load balanced to split traffic. Here’s the gvol1
> configuration at the moment:
>
>
>
> Volume Name: gvol1
>
> Type: Replicate
>
> Volume ID: 384acec2--40da--5c53d12b3ae2
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x 3 = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: vm0:/srv/brick1/gvol1
>
> Brick2: vm1:/srv/brick1/gvol1
>
> Brick3: vm2:/srv/brick1/gvol1
>
> Options Reconfigured:
>
> cluster.strict-readdir: on
>
> client.event-threads: 4
>
> cluster.lookup-optimize: on
>
> network.inode-lru-limit: 9
>
> performance.md-cache-timeout: 600
>
> performance.cache-invalidation: on
>
> performance.cache-samba-metadata: on
>
> performance.stat-prefetch: on
>
> features.cache-invalidation-timeout: 600
>
> features.cache-invalidation: on
>
> transport.address-family: inet
>
> nfs.disable: on
>
> performance.client-io-threads: on
>
> storage.fips-mode-rchecksum: on
>
> features.utime: on
>
> storage.ctime: on
>
> server.event-threads: 4
>
> performance.cache-size: 500MB
>
> performance.read-ahead: on
>
> cluster.readdir-optimize: on
>
> cluster.shd-max-threads: 6
>
> performance.strict-o-direct: on
>
> server.outstanding-rpc-limit: 128
>
> performance.enable-least-priority: off
>
> cluster.nufa: on
>
> performance.nl-cache: on
>
> performance.nl-cache-timeout: 60
>
> performance.cache-refresh-timeout: 10
>
> performance.rda-cache-limit: 128MB
>
> performance.readdir-ahead: on
>
> performance.parallel-readdir: on
>
> disperse.eager-lock: off
>
> network.ping-timeout: 5
>
> cluster.background-self-heal-count: 20
>
> cluster.self-heal-window-size: 2
>
> cluster.self-heal-readdir-size: 2KB
>
>
>
> On each restart the apps delete a particular folder and rebuild it from
> internal packages. On one such operation, on a particular client of the
> volume, I get logs like these repeated hundreds of times, sometimes even
> for the same GUID:
>
>
>
> [2018-10-09 13:40:40.579161] W [MSGID: 114061]
> [client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:
> (7955fd7a-3147-48b3-bf6a-5306ac97e10d) remote_fd is -1. EBADFD [File
> descriptor in bad state]
>
> [2018-10-09 13:40:40.579313] W [MSGID: 114061]
> [client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:
> (0ac67ee4-a31e-4989-ba1e-e4f513c1f757) remote_fd is -1. EBADFD [File
> descriptor in bad state]
>
> [2018-10-09 13:40:40.579707] W [MSGID: 114061]
> [client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:
> (7ea6106d-29f4-4a19-8eb6-6515ffefb9d3) remote_fd is -1. EBADFD [File
> descriptor in bad state]
>
> [2018-10-09 13:40:40.579911] W [MSGID: 114061]
> [client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:
> (7ea6106d-29f4-4a19-8eb6-6515ffefb9d3) remote_fd is -1. EBADFD [File
> descriptor in bad state]
>
>
>
> I assume this is probably because the client didn’t catch up with the
> previous delete? I control the server (a client of the gluster volume) on
> which the restart occurs, and I prevent more than one from rebuilding the
> same app at the same time, which makes these logs odd.
>
>
>
> I’ve implemented the volume options above after reading most of the
> entries in the archive here over the last few weeks, but I’m not sure what
> else to tweak, because other than the restart of the apps it is working
> pretty well.
>
> Any input you may have on this particular scenario would be much
> appreciated.
>
>
>
> Thanks,
>
> P.

[Gluster-users] GCS 0.1 release!

2018-10-09 Thread Atin Mukherjee
== Overview

Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.1. This initial release is designed to provide a platform for
community members to try out and provide feedback on the new Gluster
container storage stack. This new stack is a collaboration across a number
of repositories, currently including the main GCS repository [1], core
glusterfs [2], glusterd2 [3], and gluster-csi-driver [4].

== Getting started

The GCS repository provides a VM-based (Vagrant) environment that makes it
easy to install and take GCS for a test-drive. See
https://github.com/gluster/gcs/tree/master/deploy#local-cluster-using-vagrant
for a set of instructions to bring up a multi-node cluster with GCS
installed. The Ansible-based deploy scripts create a Kubernetes cluster
using kubespray, then deploy the GCS components. These playbooks can also
be used to bring up GCS on other Kubernetes clusters as well.

== Current features

This is the initial release of the GCS stack. It allows dynamic
provisioning of persistent volumes using the CSI interface. Supported
features include:

   - 1x3 (3-way replicated) volumes

   - GCS should be able to recover from restarts of any individual
     GCS-related pod. Since this is the initial version, bugs or feedback on
     improvements are appreciated in the form of GitHub issues in the
     respective repos.


== Next steps

   - Adding e2e testing for nightly validation of the entire system

   - Adding gluster-prometheus for metrics. This work can be tracked at the
     gluster-prometheus repo [5]

   - Starting work on an operator to deploy and manage the stack through
     anthill [6]

   - Bi-weekly updates to the community on the progress made on GCS.


== GCS project management

   - GCS and the other associated repos are coordinated via waffle.io for
     planning and tracking deliverables over sprints.

   - Cross-repo coordination of milestones and sprints will be tracked
     through a common set of labels, prefixed with “GCS/”. For example, we
     already have labels defined for major milestones, such as ‘GCS/alpha1’
     and ‘GCS/beta0’. Additional labels like 'GCS/0.2', 'GCS/0.3', ... will
     be created for each sprint/release so that the respective teams can tag
     planned deliverables in a common way.


== Collaboration opportunities

   - Improving the install experience

   - Helping with the e2e testing framework

   - Testing and opening bug reports


== Relationship to Heketi and glusterd (the legacy stack)

While GCS is shaping the future stack for Gluster in containers, the
traditional method for deploying container-based storage with Gluster (and
current GlusterD) and Heketi is still available, and it remains the
preferred method for production usage. To find out more about Heketi and
this production-ready stack, visit the gluster-kubernetes repo [7].

Regards,

Team GCS

[1] https://github.com/gluster/gcs

[2] https://github.com/gluster/glusterfs

[3] https://github.com/gluster/glusterd2

[4] https://github.com/gluster/gluster-csi-driver/

[5] https://github.com/gluster/gluster-prometheus

[6] https://github.com/gluster/anthill

[7] https://github.com/gluster/gluster-kubernetes

[Gluster-users] Community Meeting, 15:00 UTC, October 10

2018-10-09 Thread Amye Scavarda
Community Meeting!
Topics currently on the board:
What is the best way to update the community on progress across different
projects?
Should we accept a project only if there is at least a monthly update on the
project to the mailing list?
How to not miss out backports of critical fixes on release branches [Topic
for next meeting]

Add your items at: https://bit.ly/gluster-community-meetings
We'll see you in #gluster-meeting on freenode at 15:00 UTC.

- amye


-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead

[Gluster-users] Quick and small file read/write optimization

2018-10-09 Thread Pedro Costa
Hi,

I have a 1 x 3 replicated glusterfs 4.1.5 volume that is mounted using fuse on
each server into /www for various Node apps that are proxied with nginx.
Servers are then load balanced to split traffic. Here's the gvol1 configuration
at the moment:

Volume Name: gvol1
Type: Replicate
Volume ID: 384acec2--40da--5c53d12b3ae2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vm0:/srv/brick1/gvol1
Brick2: vm1:/srv/brick1/gvol1
Brick3: vm2:/srv/brick1/gvol1
Options Reconfigured:
cluster.strict-readdir: on
client.event-threads: 4
cluster.lookup-optimize: on
network.inode-lru-limit: 9
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
storage.fips-mode-rchecksum: on
features.utime: on
storage.ctime: on
server.event-threads: 4
performance.cache-size: 500MB
performance.read-ahead: on
cluster.readdir-optimize: on
cluster.shd-max-threads: 6
performance.strict-o-direct: on
server.outstanding-rpc-limit: 128
performance.enable-least-priority: off
cluster.nufa: on
performance.nl-cache: on
performance.nl-cache-timeout: 60
performance.cache-refresh-timeout: 10
performance.rda-cache-limit: 128MB
performance.readdir-ahead: on
performance.parallel-readdir: on
disperse.eager-lock: off
network.ping-timeout: 5
cluster.background-self-heal-count: 20
cluster.self-heal-window-size: 2
cluster.self-heal-readdir-size: 2KB

On each restart the apps delete a particular folder and rebuild it from
internal packages. On one such operation, on a particular client of the volume,
I get logs like these repeated hundreds of times, sometimes even for the same
GUID:

[2018-10-09 13:40:40.579161] W [MSGID: 114061] 
[client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:  
(7955fd7a-3147-48b3-bf6a-5306ac97e10d) remote_fd is -1. EBADFD [File descriptor 
in bad state]
[2018-10-09 13:40:40.579313] W [MSGID: 114061] 
[client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:  
(0ac67ee4-a31e-4989-ba1e-e4f513c1f757) remote_fd is -1. EBADFD [File descriptor 
in bad state]
[2018-10-09 13:40:40.579707] W [MSGID: 114061] 
[client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:  
(7ea6106d-29f4-4a19-8eb6-6515ffefb9d3) remote_fd is -1. EBADFD [File descriptor 
in bad state]
[2018-10-09 13:40:40.579911] W [MSGID: 114061] 
[client-common.c:2658:client_pre_flush_v2] 0-gvol1-client-2:  
(7ea6106d-29f4-4a19-8eb6-6515ffefb9d3) remote_fd is -1. EBADFD [File descriptor 
in bad state]
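A quick sanity check on warnings like the above is to count how many distinct
file descriptors are actually involved. The heredoc below inlines the four
sample lines for illustration; in practice you would grep the fuse client log
file instead:

```shell
# Count distinct GFIDs among the EBADFD warnings. Sample lines are inlined;
# replace the heredoc with the real client log in practice.
distinct=$(grep -o '[0-9a-f]\{8\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{12\}' <<'EOF' | sort -u | wc -l
(7955fd7a-3147-48b3-bf6a-5306ac97e10d) remote_fd is -1. EBADFD
(0ac67ee4-a31e-4989-ba1e-e4f513c1f757) remote_fd is -1. EBADFD
(7ea6106d-29f4-4a19-8eb6-6515ffefb9d3) remote_fd is -1. EBADFD
(7ea6106d-29f4-4a19-8eb6-6515ffefb9d3) remote_fd is -1. EBADFD
EOF
)
echo "$distinct"   # 3 distinct GFIDs across the 4 warnings
```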

I assume this is probably because the client didn't catch up with the previous
delete? I control the server (a client of the gluster volume) on which the
restart occurs, and I prevent more than one from rebuilding the same app at the
same time, which makes these logs odd.
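If the rebuilds are script-driven, one way to enforce that serialization is a
lock file visible to every client. A sketch, where rebuild_app and the lock
path are illustrative and not from the original post (shown under /tmp here;
on the real setup the lock would live on the shared /www mount so all servers
see it):

```shell
# Serialize rebuilds of one app: only the holder of the lock file proceeds.
# rebuild_app and the /tmp lock path are hypothetical illustrations.
rebuild_app() {
  app=$1
  (
    flock -w 60 9 || exit 1    # wait up to 60 seconds for the lock
    echo "rebuilding ${app}"   # placeholder for the real rebuild steps
  ) 9>"/tmp/${app}.rebuild.lock"
}

rebuild_app myapp
```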

I've implemented the volume options above after reading most of the entries in
the archive here over the last few weeks, but I'm not sure what else to tweak,
because other than the restart of the apps it is working pretty well.

Any input you may have on this particular scenario would be much appreciated.

Thanks,
P.

Re: [Gluster-users] samba client gets mount error(5): Input/output error

2018-10-09 Thread Christos Tsalidis
Hi,

I have moved a step further, as now I can mount the volume on the client
machine. It was an SELinux problem.

setsebool -P samba_load_libgfapi 1

However, when I try to write a file on it, I am receiving 'Permission denied':


[root@workstation smbdata]# df
Filesystem1K-blocksUsed Available Use%
Mounted on
/dev/mapper/centos-root 6486016 1191072   5294944  19% /
devtmpfs 495892   0495892   0% /dev
tmpfs507736   0507736   0%
/dev/shm
tmpfs5077366836500900   2% /run
tmpfs507736   0507736   0%
/sys/fs/cgroup
/dev/sda1   1038336  161940876396  16% /boot
tmpfs101548   0101548   0%
/run/user/1000
//servera.lab.local/gluster-mastervol   2076672   66720   2009952   4%
/mnt/smbdata
[root@workstation smbdata]# ls
file00  file01  file02  file03  file04  file05  file06  file07  file08
file09  file10
[root@workstation smbdata]# touch file11
touch: cannot touch ‘file11’: Permission denied

Any idea how can I solve this problem?

Thanks in advance!

On Tue, 9 Oct 2018 at 3:21 PM, Diego Remolina wrote:

> Per:
> https://www.samba.org/samba/docs/current/man-html/vfs_glusterfs.8.html
>
> Does adding: kernel share modes = no
> to smb.conf and restarting samba help?
>
> FWIW, I have had many recent problems using the Samba vfs plugins on
> CentOS 7.5 (latest) against a 3.10.x glusterfs server. When exporting
> via Samba using vfs objects = glusterfs, many different programs
> get I/O errors, especially when trying to save files.
>
> Some specific files (Autodesk Revit files) present problems when reading.
>
> My current workaround for regular operation has been to use a fuse
> mount and share directly from it via Samba: I commented out all the
> vfs objects and gluster-related configurations and exported the
> locally fuse-mounted directory.
> On Tue, Oct 9, 2018 at 9:12 AM Christos Tsalidis 
> wrote:
> >
> > Hi all,
> >
> > I am testing the samba client in glusterfs 3.12.14 version on CentOS
> Linux release 7.5.1804 and getting a mount error(5): Input/output error.
> >
> >
> > [root@workstation ~]# mount /mnt/smbdata
> > mount error(5): Input/output error
> > Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
> > [root@workstation ~]# cat /etc/fstab | grep smbdata
> > //servera.lab.local/gluster-mastervol /mnt/smbdata cifs
> user=smbuser,pass=redhat 0 0
> > [root@workstation ~]# rpm -q cifs-utils
> > cifs-utils-6.2-10.el7.x86_64
> >
> > cat /var/log/messages
> >
> > Oct  9 13:50:33 workstation kernel: CIFS VFS: cifs_mount failed w/return
> code = -5
> > Oct  9 13:50:53 workstation kernel: CIFS VFS: cifs_mount failed w/return
> code = -5
> > Oct  9 13:51:49 workstation kernel: CIFS VFS: cifs_mount failed w/return
> code = -5
> > Oct  9 13:52:06 workstation kernel: CIFS VFS: cifs_mount failed w/return
> code = -5
> > Oct  9 14:01:02 workstation systemd: Created slice User Slice of root.
> > Oct  9 14:01:02 workstation systemd: Starting User Slice of root.
> > Oct  9 14:01:02 workstation systemd: Started Session 4 of user root.
> > Oct  9 14:01:02 workstation systemd: Starting Session 4 of user root.
> > Oct  9 14:01:02 workstation systemd: Removed slice User Slice of root.
> > Oct  9 14:01:02 workstation systemd: Stopping User Slice of root.
> > Oct  9 14:34:54 workstation kernel: CIFS VFS: cifs_mount failed w/return
> code = -5
> > Oct  9 14:36:02 workstation kernel: CIFS VFS: cifs_mount failed w/return
> code = -5
> >
> >
> >
> > [root@servera ~]# systemctl status smb
> > ● smb.service - Samba SMB Daemon
> >Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor
> preset: disabled)
> >Active: active (running) since Tue 2018-10-09 13:28:48 CEST; 53min ago
> >  Main PID: 18707 (smbd)
> >Status: "smbd: ready to serve connections..."
> >CGroup: /system.slice/smb.service
> >├─18707 /usr/sbin/smbd --foreground --no-process-group
> >├─18709 /usr/sbin/smbd --foreground --no-process-group
> >├─18710 /usr/sbin/smbd --foreground --no-process-group
> >└─18711 /usr/sbin/smbd --foreground --no-process-group
> >
> > Oct 09 13:35:49 servera.lab.local smbd[18986]: [2018/10/09
> 13:35:49.774640,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> > Oct 09 13:35:49 servera.lab.local smbd[18986]:   mastervol: Failed to
> initialize volume (Transport endpoint is not connected)
> > Oct 09 13:36:46 servera.lab.local smbd[18997]: [2018/10/09
> 13:36:46.098201,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> > Oct 09 13:36:46 servera.lab.local smbd[18997]:   mastervol: Failed to
> initialize volume (Transport endpoint is not connected)
> > Oct 09 13:37:03 servera.lab.local smbd[19036]: [2018/10/09
> 13:37:03.470317,  0] 

Re: [Gluster-users] glusterfs 4.1.5 - SSL3_GET_RECORD:wrong version number

2018-10-09 Thread Davide Obbi
Hi,

After running a volume stop/start, the error disappeared and the volume can
be mounted from the server.

Regards

On Tue, Oct 9, 2018 at 3:27 PM Davide Obbi  wrote:

>
> Hi,
>
> I have enabled SSL/TLS on a cluster of 3 nodes. Server-to-server
> communication seems to be working, since gluster volume status returns the
> three bricks, but we are unable to mount from the client (the client can
> also be one of the gluster nodes itself).
> Options:
> /var/lib/glusterd/secure-access
>   option transport.socket.ssl-cert-depth 3
>
> ssl.cipher-list:
> HIGH:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1:TLSv1.2:!3DES:!RC4:!aNULL:!ADH
> auth.ssl-allow:
> localhost,glusterserver-1005,glusterserver-1008,glusterserver-1009
> server.ssl: on
> client.ssl: on
> auth.allow: glusterserver-1005,glusterserver-1008,glusterserver-1009
> ssl.certificate-depth: 3
>
> We noticed the following in glusterd logs, the .18 address is the client
> and one of the cluster nodes glusterserver-1005:
> [2018-10-09 13:12:10.786384] D [socket.c:354:ssl_setup_connection]
> 0-tcp.management: peer CN = glusterserver-1005
>
> [2018-10-09 13:12:10.786401] D [socket.c:357:ssl_setup_connection]
> 0-tcp.management: SSL verification succeeded (client: 10.10.0.18:49149)
> (server: 10.10.0.18:24007)
> [2018-10-09 13:12:10.956960] D [socket.c:354:ssl_setup_connection]
> 0-tcp.management: peer CN = glusterserver-1009
>
> [2018-10-09 13:12:10.956977] D [socket.c:357:ssl_setup_connection]
> 0-tcp.management: SSL verification succeeded (client: 10.10.0.27:49150)
> (server: 10.10.0.18:24007)
> [2018-10-09 13:12:11.322218] D [socket.c:354:ssl_setup_connection]
> 0-tcp.management: peer CN = glusterserver-1008
>
> [2018-10-09 13:12:11.322248] D [socket.c:357:ssl_setup_connection]
> 0-tcp.management: SSL verification succeeded (client: 10.10.0.23:49150)
> (server: 10.10.0.18:24007)
> [2018-10-09 13:12:11.368753] D [socket.c:354:ssl_setup_connection]
> 0-tcp.management: peer CN = glusterserver-1005
>
> [2018-10-09 13:12:11.368770] D [socket.c:357:ssl_setup_connection]
> 0-tcp.management: SSL verification succeeded (client: 10.10.0.18:49149)
> (server: 10.10.0.18:24007)
> [2018-10-09 13:12:13.535081] E [socket.c:364:ssl_setup_connection]
> 0-tcp.management: SSL connect error (client: 10.10.0.18:49149) (server:
> 10.10.0.18:24007)
> [2018-10-09 13:12:13.535102] E [socket.c:203:ssl_dump_error_stack]
> 0-tcp.management:   error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong
> version number
> [2018-10-09 13:12:13.535129] E [socket.c:2677:socket_poller]
> 0-tcp.management: server setup failed
>
> I believe that something has changed since version 4.1.3, because with
> that version we were able to mount on the client and did not get that SSL
> error. Also, the cipher volume option was not set in that version. At this
> point I can't tell whether node-to-node traffic is actually using SSL, or
> why the client is unable to mount.
>
> thanks
> Davide
>


-- 
Davide Obbi
System Administrator

Booking.com B.V.
Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
Direct +31207031558
The world's #1 accommodation site
43 languages, 198+ offices worldwide, 120,000+ global destinations,
1,550,000+ room nights booked every day
No booking fees, best price always guaranteed
Subsidiary of Booking Holdings Inc. (NASDAQ: BKNG)

[Gluster-users] glusterfs 4.1.5 - SSL3_GET_RECORD:wrong version number

2018-10-09 Thread Davide Obbi
Hi,

I have enabled SSL/TLS on a cluster of 3 nodes. Server-to-server
communication seems to be working, since gluster volume status returns the
three bricks, but we are unable to mount from the client (the client can
also be one of the gluster nodes itself).
Options:
/var/lib/glusterd/secure-access
  option transport.socket.ssl-cert-depth 3

ssl.cipher-list:
HIGH:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1:TLSv1.2:!3DES:!RC4:!aNULL:!ADH
auth.ssl-allow:
localhost,glusterserver-1005,glusterserver-1008,glusterserver-1009
server.ssl: on
client.ssl: on
auth.allow: glusterserver-1005,glusterserver-1008,glusterserver-1009
ssl.certificate-depth: 3

We noticed the following in glusterd logs, the .18 address is the client
and one of the cluster nodes glusterserver-1005:
[2018-10-09 13:12:10.786384] D [socket.c:354:ssl_setup_connection]
0-tcp.management: peer CN = glusterserver-1005

[2018-10-09 13:12:10.786401] D [socket.c:357:ssl_setup_connection]
0-tcp.management: SSL verification succeeded (client: 10.10.0.18:49149)
(server: 10.10.0.18:24007)
[2018-10-09 13:12:10.956960] D [socket.c:354:ssl_setup_connection]
0-tcp.management: peer CN = glusterserver-1009

[2018-10-09 13:12:10.956977] D [socket.c:357:ssl_setup_connection]
0-tcp.management: SSL verification succeeded (client: 10.10.0.27:49150)
(server: 10.10.0.18:24007)
[2018-10-09 13:12:11.322218] D [socket.c:354:ssl_setup_connection]
0-tcp.management: peer CN = glusterserver-1008

[2018-10-09 13:12:11.322248] D [socket.c:357:ssl_setup_connection]
0-tcp.management: SSL verification succeeded (client: 10.10.0.23:49150)
(server: 10.10.0.18:24007)
[2018-10-09 13:12:11.368753] D [socket.c:354:ssl_setup_connection]
0-tcp.management: peer CN = glusterserver-1005

[2018-10-09 13:12:11.368770] D [socket.c:357:ssl_setup_connection]
0-tcp.management: SSL verification succeeded (client: 10.10.0.18:49149)
(server: 10.10.0.18:24007)
[2018-10-09 13:12:13.535081] E [socket.c:364:ssl_setup_connection]
0-tcp.management: SSL connect error (client: 10.10.0.18:49149) (server:
10.10.0.18:24007)
[2018-10-09 13:12:13.535102] E [socket.c:203:ssl_dump_error_stack]
0-tcp.management:   error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong
version number
[2018-10-09 13:12:13.535129] E [socket.c:2677:socket_poller]
0-tcp.management: server setup failed

I believe that something has changed since version 4.1.3, because with that
version we were able to mount on the client and did not get that SSL error.
Also, the cipher volume option was not set in that version. At this point I
can't tell whether node-to-node traffic is actually using SSL, or why the
client is unable to mount.

thanks
Davide

Re: [Gluster-users] samba client gets mount error(5): Input/output error

2018-10-09 Thread Diego Remolina
Per: https://www.samba.org/samba/docs/current/man-html/vfs_glusterfs.8.html

Does adding: kernel share modes = no
to smb.conf and restarting samba help?
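For reference, a share section carrying that option might look like this.
The share name is hypothetical; the glusterfs: options are documented in
vfs_glusterfs(8):

```
[gluster-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = mastervol
    glusterfs:logfile = /var/log/samba/glusterfs-mastervol.log
    kernel share modes = no
```

With kernel share modes = no, smbd skips kernel-level share-mode locking,
which the man page recommends when the share is not a local filesystem.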

FWIW, I have had many recent problems using the Samba vfs plugins on
CentOS 7.5 (latest) against a 3.10.x glusterfs server. When exporting
via Samba using vfs objects = glusterfs, many different programs
get I/O errors, especially when trying to save files.

Some specific files (Autodesk Revit files) present problems when reading.

My current workaround for regular operation has been to use a fuse
mount and share directly from it via Samba: I commented out all the
vfs objects and gluster-related configurations and exported the
locally fuse-mounted directory.
On Tue, Oct 9, 2018 at 9:12 AM Christos Tsalidis  wrote:
>
> Hi all,
>
> I am testing the samba client in glusterfs 3.12.14 version on CentOS Linux 
> release 7.5.1804 and getting a mount error(5): Input/output error.
>
>
> [root@workstation ~]# mount /mnt/smbdata
> mount error(5): Input/output error
> Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
> [root@workstation ~]# cat /etc/fstab | grep smbdata
> //servera.lab.local/gluster-mastervol /mnt/smbdata cifs 
> user=smbuser,pass=redhat 0 0
> [root@workstation ~]# rpm -q cifs-utils
> cifs-utils-6.2-10.el7.x86_64
>
> cat /var/log/messages
>
> Oct  9 13:50:33 workstation kernel: CIFS VFS: cifs_mount failed w/return code 
> = -5
> Oct  9 13:50:53 workstation kernel: CIFS VFS: cifs_mount failed w/return code 
> = -5
> Oct  9 13:51:49 workstation kernel: CIFS VFS: cifs_mount failed w/return code 
> = -5
> Oct  9 13:52:06 workstation kernel: CIFS VFS: cifs_mount failed w/return code 
> = -5
> Oct  9 14:01:02 workstation systemd: Created slice User Slice of root.
> Oct  9 14:01:02 workstation systemd: Starting User Slice of root.
> Oct  9 14:01:02 workstation systemd: Started Session 4 of user root.
> Oct  9 14:01:02 workstation systemd: Starting Session 4 of user root.
> Oct  9 14:01:02 workstation systemd: Removed slice User Slice of root.
> Oct  9 14:01:02 workstation systemd: Stopping User Slice of root.
> Oct  9 14:34:54 workstation kernel: CIFS VFS: cifs_mount failed w/return code 
> = -5
> Oct  9 14:36:02 workstation kernel: CIFS VFS: cifs_mount failed w/return code 
> = -5
>
>
>
> [root@servera ~]# systemctl status smb
> ● smb.service - Samba SMB Daemon
>Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor 
> preset: disabled)
>Active: active (running) since Tue 2018-10-09 13:28:48 CEST; 53min ago
>  Main PID: 18707 (smbd)
>Status: "smbd: ready to serve connections..."
>CGroup: /system.slice/smb.service
>├─18707 /usr/sbin/smbd --foreground --no-process-group
>├─18709 /usr/sbin/smbd --foreground --no-process-group
>├─18710 /usr/sbin/smbd --foreground --no-process-group
>└─18711 /usr/sbin/smbd --foreground --no-process-group
>
> Oct 09 13:35:49 servera.lab.local smbd[18986]: [2018/10/09 13:35:49.774640,  
> 0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> Oct 09 13:35:49 servera.lab.local smbd[18986]:   mastervol: Failed to 
> initialize volume (Transport endpoint is not connected)
> Oct 09 13:36:46 servera.lab.local smbd[18997]: [2018/10/09 13:36:46.098201,  
> 0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> Oct 09 13:36:46 servera.lab.local smbd[18997]:   mastervol: Failed to 
> initialize volume (Transport endpoint is not connected)
> Oct 09 13:37:03 servera.lab.local smbd[19036]: [2018/10/09 13:37:03.470317,  
> 0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> Oct 09 13:37:03 servera.lab.local smbd[19036]:   mastervol: Failed to 
> initialize volume (Transport endpoint is not connected)
> Oct 09 14:19:51 servera.lab.local smbd[19075]: [2018/10/09 14:19:51.273307,  
> 0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> Oct 09 14:19:51 servera.lab.local smbd[19075]:   mastervol: Failed to 
> initialize volume (Transport endpoint is not connected)
> Oct 09 14:20:59 servera.lab.local smbd[19085]: [2018/10/09 14:20:59.726227,  
> 0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
> Oct 09 14:20:59 servera.lab.local smbd[19085]:   mastervol: Failed to 
> initialize volume (Transport endpoint is not connected)
> Hint: Some lines were ellipsized, use -l to show in full.
> [root@servera ~]# gluster volume info mastervol
>
> Volume Name: mastervol
> Type: Distribute
> Volume ID: f6bfe62f-068d-4ade-8d47-ee2e61418804
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: servera:/bricks/brick-a1/brick
> Brick2: serverb:/bricks/brick-b1/brick
> Options Reconfigured:
> storage.batch-fsync-delay-usec: 0
> server.allow-insecure: on
> performance.stat-prefetch: off
> transport.address-family: inet
> nfs.disable: off
> [root@servera ~]# gluster volume status mastervol
> Status of volume: mastervol
> Gluster process TCP Port  RDMA Port  Online  Pid
> 

[Gluster-users] samba client gets mount error(5): Input/output error

2018-10-09 Thread Christos Tsalidis
Hi all,

I am testing the samba client in glusterfs 3.12.14 version on CentOS Linux
release 7.5.1804 and getting a mount error(5): Input/output error.


[root@workstation ~]# mount /mnt/smbdata
mount error(5): Input/output error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
[root@workstation ~]# cat /etc/fstab | grep smbdata
//servera.lab.local/gluster-mastervol /mnt/smbdata cifs
user=smbuser,pass=redhat 0 0
[root@workstation ~]# rpm -q cifs-utils
cifs-utils-6.2-10.el7.x86_64

cat /var/log/messages

Oct  9 13:50:33 workstation kernel: CIFS VFS: cifs_mount failed w/return
code = -5
Oct  9 13:50:53 workstation kernel: CIFS VFS: cifs_mount failed w/return
code = -5
Oct  9 13:51:49 workstation kernel: CIFS VFS: cifs_mount failed w/return
code = -5
Oct  9 13:52:06 workstation kernel: CIFS VFS: cifs_mount failed w/return
code = -5
Oct  9 14:01:02 workstation systemd: Created slice User Slice of root.
Oct  9 14:01:02 workstation systemd: Starting User Slice of root.
Oct  9 14:01:02 workstation systemd: Started Session 4 of user root.
Oct  9 14:01:02 workstation systemd: Starting Session 4 of user root.
Oct  9 14:01:02 workstation systemd: Removed slice User Slice of root.
Oct  9 14:01:02 workstation systemd: Stopping User Slice of root.
Oct  9 14:34:54 workstation kernel: CIFS VFS: cifs_mount failed w/return
code = -5
Oct  9 14:36:02 workstation kernel: CIFS VFS: cifs_mount failed w/return
code = -5



[root@servera ~]# systemctl status smb
● smb.service - Samba SMB Daemon
   Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor
preset: disabled)
   Active: active (running) since Tue 2018-10-09 13:28:48 CEST; 53min ago
 Main PID: 18707 (smbd)
   Status: "smbd: ready to serve connections..."
   CGroup: /system.slice/smb.service
   ├─18707 /usr/sbin/smbd --foreground --no-process-group
   ├─18709 /usr/sbin/smbd --foreground --no-process-group
   ├─18710 /usr/sbin/smbd --foreground --no-process-group
   └─18711 /usr/sbin/smbd --foreground --no-process-group

Oct 09 13:35:49 servera.lab.local smbd[18986]: [2018/10/09
13:35:49.774640,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
Oct 09 13:35:49 servera.lab.local smbd[18986]:   mastervol: Failed to
initialize volume (Transport endpoint is not connected)
Oct 09 13:36:46 servera.lab.local smbd[18997]: [2018/10/09
13:36:46.098201,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
Oct 09 13:36:46 servera.lab.local smbd[18997]:   mastervol: Failed to
initialize volume (Transport endpoint is not connected)
Oct 09 13:37:03 servera.lab.local smbd[19036]: [2018/10/09
13:37:03.470317,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
Oct 09 13:37:03 servera.lab.local smbd[19036]:   mastervol: Failed to
initialize volume (Transport endpoint is not connected)
Oct 09 14:19:51 servera.lab.local smbd[19075]: [2018/10/09
14:19:51.273307,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
Oct 09 14:19:51 servera.lab.local smbd[19075]:   mastervol: Failed to
initialize volume (Transport endpoint is not connected)
Oct 09 14:20:59 servera.lab.local smbd[19085]: [2018/10/09
14:20:59.726227,  0] ../source3/modules/vfs_glusterfs.c:345(vfs...nnect)
Oct 09 14:20:59 servera.lab.local smbd[19085]:   mastervol: Failed to
initialize volume (Transport endpoint is not connected)
Hint: Some lines were ellipsized, use -l to show in full.
[root@servera ~]# gluster volume info mastervol

Volume Name: mastervol
Type: Distribute
Volume ID: f6bfe62f-068d-4ade-8d47-ee2e61418804
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: servera:/bricks/brick-a1/brick
Brick2: serverb:/bricks/brick-b1/brick
Options Reconfigured:
storage.batch-fsync-delay-usec: 0
server.allow-insecure: on
performance.stat-prefetch: off
transport.address-family: inet
nfs.disable: off
[root@servera ~]# gluster volume status mastervol
Status of volume: mastervol
Gluster process TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick servera:/bricks/brick-a1/brick        49153     0          Y       18904
Brick serverb:/bricks/brick-b1/brick        49153     0          Y       9654
NFS Server on localhost                     2049      0          Y       19024
NFS Server on serverb.lab.local             2049      0          Y       9675

Task Status of Volume mastervol
------------------------------------------------------------------------------
There are no active volume tasks


[root@servera ~]# firewall-cmd --list-services
ssh dhcpv6-client glusterfs nfs rpc-bind samba

[root@servera ~]# cat /etc/passwd | grep smbuser
smbuser:x:1001:1001::/home/smbuser:/sbin/nologin


[root@servera ~]# cat /etc/samba/smb.conf
# See smb.conf.example for a more detailed config file or
# read the smb.conf manpage.
# Run 'testparm' to verify the config is correct after
# you modified it.

[global]
workgroup = SAMBA
security = user

passdb 
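The smb.conf paste above is cut off at "passdb". For comparison, a share that exports a Gluster volume through vfs_glusterfs typically looks something like the following; the share name, log file path, and log level are illustrative, and only the volume name is taken from the mail:

```ini
[gluster-mastervol]
    comment = Gluster volume mastervol exported via vfs_glusterfs
    vfs objects = glusterfs
    glusterfs:volume = mastervol
    glusterfs:logfile = /var/log/samba/glusterfs-mastervol.log
    glusterfs:loglevel = 7
    ; path is interpreted relative to the root of the Gluster volume
    path = /
    read only = no
    ; kernel share modes do not work on the libgfapi backend
    kernel share modes = no
```

If the "Transport endpoint is not connected" errors persist with a section like this, it is worth double-checking that smbd can actually reach glusterd on the node, since the log suggests the libgfapi connection itself is failing.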

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-10-09 Thread Amar Tumballi
Hi David,

Just for your information: as a project, we are currently not putting any
development focus on the tiering feature.

Please refer to email thread @
https://lists.gluster.org/pipermail//gluster-devel/2018-July/055017.html

It is recommended to use 'dmcache' on your disks instead, to get the best
performance out of your backend. Also note that tiering may be retired in the
next release: https://review.gluster.org/21331

Hope this email saves you a lot of time!

Regards,
Amar


On Sun, Sep 30, 2018 at 6:04 PM David Brown  wrote:

> Just found this in the tierd.log. Not sure what it means or how to fix it,
> though, but I assume it may be the cause of my problem with files not being
> demoted from the hot tier.
>
>
> [2018-09-30 12:25:56.438821] E [MSGID: 114031]
> [client-rpc-fops.c:233:client3_3_mknod_cbk] 0-FFPrimary-client-5: remote
> operation failed. Path: 
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.440940] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //720ee8d5-1667-11e8-a5dc-902b3450f388
> (10fb1bd9-b962-415c-8751-f0ef8bf06473). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.444633] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //e7df9e17-b62f-4668-a4c1-dc5d86dcae6e
> (32eff7dc-dcda-4488-8464-9eace06e1b69). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.448347] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //9758be45-c466-45a2-9cd6-572f80c54da9
> (389acc57-d205-4022-acea-d0f400c2ad89). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.451919] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //test-march-3-bc-file-501
> (41ee3e27-40be-4f64-af01-e18cc63065e3). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.456198] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //krishna
> (49657a82-8c64-43c0-94df-e1e78840aa1d). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.459702] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //test1
> (5ac7caba-f2c3-4bf1-bb38-cf6ed940dac0). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.463164] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //c7947fa1-a496-400c-b6a4-b4e084b8f316
> (5e909f4e-6263-4091-8378-26479496e715). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.466601] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //ea37891d-1ab8-40f8-95a3-eee822c7040a
> (6dfe1d97-34f4-440b-9502-5eab172de58a). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.470129] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //a42d1e12-fc11-4a51-a744-8e6c3b11be0a
> (7a081218-3cc1-442c-be4b-43bd7dd01724). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.473758] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //b0acb442-fe60-4022-bee2-d11d49422f20
> (8788d650-9800-47ab-bf07-87f9dcd0392c). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.477237] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //f7357147-c2ea-4abe-9c59-136f049bfccb
> (91ecb7b5-84fb-48d2-af2b-440ab6f25cfa). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.480696] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //35949c80-5496-445d-b2d6-e7d2061e9135
> (972256c3-8eb8-49d5-a4ab-cca34abc7b0a). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.484354] W [MSGID: 114031]
> [client-rpc-fops.c:1080:client3_3_getxattr_cbk] 0-FFPrimary-client-8:
> remote operation failed. Path: //13660ae8-4138-47f2-a858-8880d97b4e8d
> (a6027333-b269-4810-a188-3af51c04fdcb). Key: trusted.glusterfs.node-uuid
> [Transport endpoint is not connected]
> [2018-09-30 12:25:56.487884] I [MSGID: 109038]
> [tier.c:1122:tier_migrate_using_query_file] 0-FFPrimary-tier-dht: Demotion
> failed for 

[Gluster-users] Snapshot size

2018-10-09 Thread matt

Hi list,

I was wondering if anyone knows whether it's possible to get the size of a 
snapshot: ideally a list of all of them, but I'd take the size of just one.


I'm aware that you can use lvs, where the Data% column gives you an idea, 
but it's not really very neat, so I was wondering if anyone knows of a 
better way?
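Building on the lvs hint above, one rough approach is to multiply the Data% column by the allocated size to get an absolute per-snapshot figure. This is only a sketch; the arithmetic is split into its own function so it can be sanity-checked without a live volume group:

```shell
# Multiply lv_size (GiB, no suffix) by data_percent to get used space.
# Expects whitespace-separated input lines: <name> <size> <data%>
snapshot_usage_calc() {
  awk 'NF >= 3 { printf "%-24s %8.2f GiB\n", $1, $2 * $3 / 100 }'
}

# Report approximate space used by each snapshot/thin LV.
# --units g --nosuffix makes lvs print sizes as bare GiB numbers.
snapshot_usage() {
  lvs --noheadings --units g --nosuffix \
      -o lv_name,lv_size,data_percent "$@" | snapshot_usage_calc
}

printf 'snap1 10.00 25.00\n' | snapshot_usage_calc   # snap1 ... 2.50 GiB
```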


Cheers,

Matt

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs client gets connection refused

2018-10-09 Thread Niels de Vos
On Tue, Oct 09, 2018 at 11:16:30AM +0200, Christos Tsalidis wrote:
> Hi all,
> 
> I am testing the NFS client with GlusterFS 3.12.14 on CentOS Linux
> release 7.5.1804, and I am getting a connection refused message.

...

> [root@servera ~]# gluster volume status mastervol
> Status of volume: mastervol
> Gluster process TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
...
> NFS Server on localhost                 N/A       N/A        N       N/A
> NFS Server on serverb.lab.local         N/A       N/A        N       N/A

The NFS-server is not running. Do you have the glusterfs-gnfs package
installed?

We recommend migrating to NFS-Ganesha in the near future. The old
Gluster/NFS service is deprecated and will not be provided in the common
repositories with upcoming Gluster versions.
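The two "N/A ... N" rows in the quoted status output are exactly the symptom described here. As a purely illustrative convenience, the check can be scripted against gluster volume status output:

```shell
# Print the hosts whose "NFS Server" row shows Online = N in
# `gluster volume status <vol>` output (read from stdin).
gnfs_offline_hosts() {
  awk '/^NFS Server on/ {
    # Row layout: NFS Server on <host> <tcp-port> <rdma-port> <online> <pid>
    host = $4
    online = $(NF - 1)
    if (online == "N") print host
  }'
}

# Usage on a Gluster node:
#   gluster volume status mastervol | gnfs_offline_hosts
```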

HTH,
Niels


[Gluster-users] nfs client gets connection refused

2018-10-09 Thread Christos Tsalidis
Hi all,

I am testing the NFS client with GlusterFS 3.12.14 on CentOS Linux
release 7.5.1804, and I am getting a connection refused message.

[root@workstation ~]# mount /mnt/nfs
mount.nfs: Connection refused

[root@workstation ~]# cat /etc/fstab | grep servera
servera:/mastervol /mnt/nfs nfs rw 0 0
[root@workstation ~]#
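One detail worth noting once the server side is fixed: Gluster's built-in NFS server only speaks NFSv3, so it can help to pin the protocol version in fstab to avoid an NFSv4 negotiation attempt (a sketch based on the entry above):

```
servera:/mastervol  /mnt/nfs  nfs  vers=3,rw  0 0
```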

nfs-utils has been installed on the client machine:

[root@workstation ~]# rpm -q nfs-utils
nfs-utils-1.3.0-0.54.el7.x86_64

Here some information about my gluster cluster


[root@servera ~]# gluster volume info mastervol

Volume Name: mastervol
Type: Distribute
Volume ID: f6bfe62f-068d-4ade-8d47-ee2e61418804
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: servera:/bricks/brick-a1/brick
Brick2: serverb:/bricks/brick-b1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: off

[root@servera ~]# firewall-cmd --list-services
ssh dhcpv6-client glusterfs nfs rpc-bind


[root@servera ~]# gluster peer status
Number of Peers: 1

Hostname: serverb.lab.local
Uuid: e88e454a-f85a-472a-8920-a541c8615d03
State: Peer in Cluster (Connected)

[root@servera ~]# gluster volume status mastervol
Status of volume: mastervol
Gluster process TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick servera:/bricks/brick-a1/brick        49152     0          Y       10966
Brick serverb:/bricks/brick-b1/brick        49152     0          Y       9138
NFS Server on localhost                     N/A       N/A        N       N/A
NFS Server on serverb.lab.local             N/A       N/A        N       N/A

Task Status of Volume mastervol
------------------------------------------------------------------------------
There are no active volume tasks


Do you have any idea how I can solve this?

Thanks in advance!