Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Yaroslav Molochko
In my original message I've mentioned that:
==
root@PSC01SERV008:/var/lib/glusterd/nfs# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
==
I will try the rpcbind debug-mode approach.

2015-09-16 4:37 GMT+08:00 Niels de Vos:

> On Sun, Sep 13, 2015 at 04:30:43PM +0530, Soumya Koduri wrote:
> >
> >
> > On 09/13/2015 09:38 AM, Yaroslav Molochko wrote:
> > >I wish this could be that simple:
> > >root@PSC01SERV008:/var/lib# netstat -nap | grep 38465
> > >root@PSC01SERV008:/var/lib# ss -n  | grep 38465
> > >root@PSC01SERV008:/var/lib#
> > >
> > >2015-09-13 1:34 GMT+08:00 Atin Mukherjee:
> > >
> > >By any chance is your Gluster NFS server already running? Output
> > >of netstat -nap | grep 38465 might give some clue?
> > >
> > >-Atin
> > >Sent from one plus one
> > >
> > >On Sep 12, 2015 10:54 PM, "Yaroslav Molochko" wrote:
> > >
> > >Hello,
> > >
> > >I have a problem reported in logs:
> > >==
> > >[2015-09-12 13:56:06.271644] I [MSGID: 100030]
> > >[glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> > >/usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs -s
> > >localhost --volfile-id gluster/nfs -p
> > >/var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
> > >-S /var/run/gluster/cb186678589f28e74c67da70fd06e736.socket)
> > >[2015-09-12 13:56:06.277921] I [MSGID: 101190]
> > >[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> > >thread with index 1
> > >[2015-09-12 13:56:07.284888] I
> > >[rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
> > >Configured rpc.outstanding-rpc-limit with value 16
> > >[2015-09-12 13:56:07.292484] W [MSGID: 112153]
> > >[mount3.c:3910:mnt3svc_init] 0-nfs-mount: Exports auth has been
> > >disabled!
> > >[2015-09-12 13:56:07.294357] E
> > >[rpcsvc.c:1370:rpcsvc_program_register_portmap] 0-rpc-service:
> > >Could not register with portmap 100005 3 38465
> >
> > Port registration failed. Could you check '/var/log/messages' and dmesg to
> > see if there are any errors logged? Is firewalld running on your system?
> > Verify if the port is open to be used.
>
> Registration of services at portmap can also fail because there is a
> service with that program number and version registered already. Could
> you check if that is the case?
>
> $ rpcinfo -p | grep 100005
>     100005    3   tcp  38465  mountd
>     100005    1   tcp  38466  mountd
>
> If you see a similar output, check if you have standard NFS services
> running. The rpc.mountd process will also register itself at the
> portmapper, but it will conflict with the NFS-services that Gluster
> provides. Make sure all NFS services (server and client) have been
> disabled and stopped. After that, check with the 'rpcinfo' command if
> any of nlockmgr, mount, status or nfs are registered. If that is the
> case, you can unregister them one-by-one with commands like this:
>
> # rpcinfo -d 100005 1
> # rpcinfo -d 100005 3
> ...
>
> After unregistering the services at the portmapper, you should be able
> to start the Gluster-NFS service by restarting glusterd.
>
> HTH,
> Niels
>
> >
> > Thanks,
> > Soumya
> > >[2015-09-12 13:56:07.294398] E [MSGID: 112088]
> > >[nfs.c:341:nfs_init_versions] 0-nfs: Required program  MOUNT3
> > >registration failed
> > >[2015-09-12 13:56:07.294413] E [MSGID: 112109] [nfs.c:1482:init]
> > >0-nfs: Failed to initialize protocols
> > >[2015-09-12 13:56:07.294426] E [MSGID: 101019]
> > >[xlator.c:428:xlator_init] 0-nfs-server: Initialization of
> > >volume 'nfs-server' failed, review your volfile again
> > >[2015-09-12 13:56:07.294438] E
> > >[graph.c:322:glusterfs_graph_init] 0-nfs-server: initializing
> > >translator failed
> > >[2015-09-12 13:56:07.294448] E
> > >[graph.c:661:glusterfs_graph_activate] 0-graph: init failed
> > >[2015-09-12 13:56:07.294781] W
> > >[glusterfsd.c:1219:cleanup_and_exit]
> > >(-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x11a) [0x7fbe9c754b7a]
> > >-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x123)
> > >[0x7fbe9c74fcb3] -->/usr/sbin/glusterfs(cleanup_and_exit+0x59)
> > >[0x7fbe9c74f329] ) 0-: received signum (0), shutting down
> > >===
> > >
> > >I've 

[Gluster-users] Hi new to Gluster

2015-09-15 Thread M.Tarkeshwar Rao
Hi all,
We have a product written in C++ on Red Hat.
In production our customers use our product with Veritas Cluster File System
for HA and with shared storage (EMC).
Initially this product ran on only a single node. In our last release we
made it scalable (more than one node).
Due to excessive locking (CFS) we are not getting the performance we need.
Can you please suggest whether Gluster will resolve our problem, as it is a
distributed file system?
Is Gluster POSIX compliant?
Can we use it in production? Please suggest.
If there is any other file system you would recommend, please suggest it.
Regards
Tarkeshwar

Re: [Gluster-users] autosnap feature?

2015-09-15 Thread Rajesh Joseph


- Original Message -
> From: "Alastair Neil" 
> To: "gluster-users" 
> Sent: Friday, September 11, 2015 2:24:32 AM
> Subject: [Gluster-users] autosnap feature?
> 
> Wondering if there were any plans for a flexible and easy-to-use snapshotting
> feature along the lines of the zfs autosnap scripts. I imagine at the least
> it would need the ability to rename snapshots.
> 

Are you looking for something like this ?
http://www.gluster.org/community/documentation/index.php/Features/Scheduling_of_Snapshot



Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-15 Thread Saravanakumar Arumugam

Hi,
You are right, this tool may not be compatible with 3.6.5.

I tried it myself with 3.6.5, but hit this error.
==
georepsetup tv1 gfvm3 tv2
Geo-replication session will be established between tv1 and gfvm3::tv2
Root password of gfvm3 is required to complete the setup. NOTE: Password 
will not be stored.


root@gfvm3's password:
[OK] gfvm3 is Reachable(Port 22)
[OK] SSH Connection established root@gfvm3
[OK] Master Volume and Slave Volume are compatible (Version: 3.6.5)
[OK] Common secret pub file present at 
/var/lib/glusterd/geo-replication/common_secret.pem.pub

[OK] common_secret.pem.pub file copied to gfvm3
[OK] Master SSH Keys copied to all Up Slave nodes
[OK] Updated Master SSH Keys to all Up Slave nodes authorized_keys file
[NOT OK] Failed to Establish Geo-replication Session
Command type not found while handling geo-replication options
[root@gfvm3 georepsetup]#
==
So, some more changes are required in this tool.


Coming back to your question:

I have set up geo-replication using the commands below on 3.6.5.
Please recheck all the commands (with the necessary changes at your end).


[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# cat /etc/redhat-release
Fedora release 21 (Twenty One)
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# rpm -qa | grep glusterfs
glusterfs-devel-3.6.5-1.fc21.x86_64
glusterfs-3.6.5-1.fc21.x86_64
glusterfs-rdma-3.6.5-1.fc21.x86_64
glusterfs-fuse-3.6.5-1.fc21.x86_64
glusterfs-server-3.6.5-1.fc21.x86_64
glusterfs-debuginfo-3.6.5-1.fc21.x86_64
glusterfs-libs-3.6.5-1.fc21.x86_64
glusterfs-extra-xlators-3.6.5-1.fc21.x86_64
glusterfs-geo-replication-3.6.5-1.fc21.x86_64
glusterfs-api-3.6.5-1.fc21.x86_64
glusterfs-api-devel-3.6.5-1.fc21.x86_64
glusterfs-cli-3.6.5-1.fc21.x86_64
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd start
Redirecting to /bin/systemctl start  glusterd.service
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled)
   Active: active (running) since Tue 2015-09-15 12:19:32 IST; 4s ago
  Process: 2778 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
(code=exited, status=0/SUCCESS)

 Main PID: 2779 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─2779 /usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# ps aux | grep glus
root  2779  0.0  0.4 448208 17288 ?Ssl  12:19   0:00 
/usr/sbin/glusterd -p /var/run/glusterd.pid

[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv1 
gfvm3:/opt/volume_test/tv_1/b1 gfvm3:/opt/volume_test/tv_1/b2 force

volume create: tv1: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv2 
gfvm3:/opt/volume_test/tv_2/b1 gfvm3:/opt/volume_test/tv_2/b2 force

volume create: tv2: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#  gluster volume start tv1
volume start: tv1: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume start tv2
volume start: tv2: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv1 /mnt/master/
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv2 /mnt/slave/
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster system:: execute gsec_create
Common secret pub file present at 
/var/lib/glusterd/geo-replication/common_secret.pem.pub

[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 
create push-pem
Creating geo-replication session between tv1 & gfvm3::tv2 has been 
successful

[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 
start
Starting geo-replication session between tv1 & gfvm3::tv2 has been 
successful

[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 
status


MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE         STATUS             CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------
gfvm3          tv1           /opt/volume_test/tv_1/b1    gfvm3::tv2    Initializing...    N/A                  N/A
gfvm3          tv1           /opt/volume_test/tv_1/b2    gfvm3::tv2    Initializing...    N/A                  N/A

[root@gfvm3 georepsetup]#

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Soumya Koduri


/* pmap_set() returns 0 for FAIL and 1 for SUCCESS */
if (!(pmap_set (newprog->prognum, newprog->progver, IPPROTO_TCP,
                port))) {
        gf_log (GF_RPCSVC, GF_LOG_ERROR, "Could not register with"
                " portmap %d %d %u", newprog->prognum,
                newprog->progver, port);
        goto out;
}


The error you got shows that portmap registration of the mountd service
failed. You could start rpcbind in debug mode to print error messages on
the console:

1) Create/edit '/etc/sysconfig/rpcbind' with the following contents.
#
# Optional arguments passed to rpcbind. See rpcbind(8)
RPCBIND_ARGS="-d"
2) Restart the rpcbind service. Instead of starting in daemon mode,
rpcbind now prints syslog messages and waits on the console.
3) On another console, either restart glusterd or run the small C program
I have written to register the mountd service with the portmapper (attached).
Then look for syslog messages like the one below printed by rpcbind.


>> PMAP_SET request for (100005, 3) : Checking caller's address (port = 832)
succeeded

That's all I could think of. CCing Niels; he may be able to provide more
information on how to debug this issue.


Thanks,
Soumya


On 09/15/2015 05:27 PM, Yaroslav Molochko wrote:

I have two identical hosts managed by configuration management; it was
working with 3.5 and stopped working with 3.7 on ONE host. Okay, I've
done what you asked, and here is the result:
==
root@PSC01SERV008:~# systemctl restart rpcbind
root@PSC01SERV008:~# /etc/init.d/glusterfs-server restart
Restarting glusterfs-server (via systemctl): glusterfs-server.service.
root@PSC01SERV008:~# iptables -nvL
Another app is currently holding the xtables lock. Perhaps you want to
use the -w option?
root@PSC01SERV008:~# iptables -nvL
Chain INPUT (policy ACCEPT 3223K packets, 1760M bytes)
  pkts bytes target prot opt in out source
destination

Chain FORWARD (policy ACCEPT 1478K packets, 1926M bytes)
  pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 4697K packets, 1354M bytes)
  pkts bytes target prot opt in out source
destination
root@PSC01SERV008:~# cat /etc/hosts.allow
# /etc/hosts.allow: list of hosts that are allowed to access the system.
#   See the manual pages hosts_access(5) and
hosts_options(5).
#
# Example:ALL: LOCAL @some_netgroup
# ALL: .foobar.edu  EXCEPT
terminalserver.foobar.edu 
#
# If you're going to protect the portmapper use the name "rpcbind" for the
# daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
#
ALL: 127.0.0.1 : ALLOW
root@PSC01SERV008:~# gluster volume status
Status of volume: discover-music-prod-music-app-logs
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.116.254.17:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs 49152 0  Y
17125
Brick 10.116.254.18:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs 49152 0  Y
24663
NFS Server on localhost N/A   N/AN
N/A
Self-heal Daemon on localhost   N/A   N/AY
17693
NFS Server on 10.116.254.17 2049  0  Y
17146
Self-heal Daemon on 10.116.254.17   N/A   N/AY
17151

Task Status of Volume discover-music-prod-music-app-logs
--
There are no active volume tasks



For the record, I've reinstalled and restarted everything I could, and I've
checked everything I could find on Google, and this doesn't work.
Please, let's move on to something more sophisticated than restarting
glusterfs... I would not have contacted you if I had not tried restarting it
a dozen times.

Do you have any debugging I can run to see what is really happening?


2015-09-15 1:55 GMT+08:00 Soumya Koduri:

Could you try
* disabling iptables (& firewalld if enabled)
* restart rpcbind service
* restart glusterd

If this doesn't work (as mentioned in one of the forums),
add the below line to the '/etc/hosts.allow' file.

 ALL: 127.0.0.1 : ALLOW

Restart rpcbind and glusterd services.

Thanks,
Soumya


On 09/14/2015 10:39 PM, Yaroslav Molochko wrote:

Could not register with portmap


#include <rpc/rpc.h>
#include <stdio.h>
#include <errno.h>

int main() {

        int ret = -1;

        ret = pmap_set (100005, 3, IPPROTO_TCP, 38465);
        if (ret) {
                printf("pmap_set failed with errno(%d)\n", errno);
        } else
                printf("pmap_set is successful\n");

        return 0;
}

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Soumya Koduri
Small correction in the file I provided earlier. pmap_set returns 0 in 
case of failure.
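In case it helps, this is roughly what the corrected check would look like as
a minimal sketch; I am assuming the standard MOUNT3 program number (100005)
and the usual headers here, it is not a copy of the attachment.

#include <rpc/rpc.h>
#include <stdio.h>
#include <errno.h>

int main(void) {
        /* pmap_set() returns 1 on success and 0 on failure, so this test is
         * the inverse of the one in the earlier attachment. */
        if (pmap_set(100005, 3, IPPROTO_TCP, 38465)) {
                printf("pmap_set is successful\n");
                /* remove the test registration again so it does not
                 * conflict with Gluster-NFS later */
                pmap_unset(100005, 3);
        } else {
                printf("pmap_set failed with errno(%d)\n", errno);
        }
        return 0;
}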


On 09/16/2015 12:08 AM, Soumya Koduri wrote:


 /* pmap_set() returns 0 for FAIL and 1 for SUCCESS */
 if (!(pmap_set (newprog->prognum, newprog->progver, IPPROTO_TCP,
                 port))) {
         gf_log (GF_RPCSVC, GF_LOG_ERROR, "Could not register with"
                 " portmap %d %d %u", newprog->prognum,
                 newprog->progver, port);
         goto out;
 }


The error you got shows that portmap registration of the mountd service
failed. You could start rpcbind in debug mode to print error messages on
the console:
1) Create/edit '/etc/sysconfig/rpcbind' with the following contents.
#
# Optional arguments passed to rpcbind. See rpcbind(8)
RPCBIND_ARGS="-d"
2) Restart the rpcbind service. Instead of starting in daemon mode,
rpcbind now prints syslog messages and waits on the console.
3) On another console, either restart glusterd or run the small C program
I have written to register the mountd service with the portmapper (attached).
Then look for syslog messages like the one below printed by rpcbind.

 >> PMAP_SET request for (100005, 3) : Checking caller's address (port = 832)
succeeded

That's all I could think of. CCing Niels; he may be able to provide more
information on how to debug this issue.

Thanks,
Soumya


On 09/15/2015 05:27 PM, Yaroslav Molochko wrote:

I have two identical hosts managed by configuration management; it was
working with 3.5 and stopped working with 3.7 on ONE host. Okay, I've
done what you asked, and here is the result:
==
root@PSC01SERV008:~# systemctl restart rpcbind
root@PSC01SERV008:~# /etc/init.d/glusterfs-server restart
Restarting glusterfs-server (via systemctl): glusterfs-server.service.
root@PSC01SERV008:~# iptables -nvL
Another app is currently holding the xtables lock. Perhaps you want to
use the -w option?
root@PSC01SERV008:~# iptables -nvL
Chain INPUT (policy ACCEPT 3223K packets, 1760M bytes)
  pkts bytes target prot opt in out source
destination

Chain FORWARD (policy ACCEPT 1478K packets, 1926M bytes)
  pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 4697K packets, 1354M bytes)
  pkts bytes target prot opt in out source
destination
root@PSC01SERV008:~# cat /etc/hosts.allow
# /etc/hosts.allow: list of hosts that are allowed to access the system.
#   See the manual pages hosts_access(5) and
hosts_options(5).
#
# Example:ALL: LOCAL @some_netgroup
# ALL: .foobar.edu  EXCEPT
terminalserver.foobar.edu 
#
# If you're going to protect the portmapper use the name "rpcbind" for
the
# daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
#
ALL: 127.0.0.1 : ALLOW
root@PSC01SERV008:~# gluster volume status
Status of volume: discover-music-prod-music-app-logs
Gluster process TCP Port  RDMA Port
Online  Pid
--

Brick 10.116.254.17:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs 49152 0  Y
17125
Brick 10.116.254.18:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs 49152 0  Y
24663
NFS Server on localhost N/A   N/AN
N/A
Self-heal Daemon on localhost   N/A   N/AY
17693
NFS Server on 10.116.254.17 2049  0  Y
17146
Self-heal Daemon on 10.116.254.17   N/A   N/AY
17151

Task Status of Volume discover-music-prod-music-app-logs
--

There are no active volume tasks



For the record, I've reinstalled and restarted everything I could, and I've
checked everything I could find on Google, and this doesn't work.
Please, let's move on to something more sophisticated than restarting
glusterfs... I would not have contacted you if I had not tried restarting it
a dozen times.

Do you have any debugging I can run to see what is really happening?


2015-09-15 1:55 GMT+08:00 Soumya Koduri:

Could you try
* disabling iptables (& firewalld if enabled)
* restart rpcbind service
* restart glusterd

If this doesn't work (as mentioned in one of the forums),
add the below line to the '/etc/hosts.allow' file.

 ALL: 127.0.0.1 : ALLOW

Restart rpcbind and glusterd services.

Thanks,
Soumya


On 09/14/2015 10:39 PM, Yaroslav Molochko wrote:

Could not register with portmap






#include <rpc/rpc.h>
#include <stdio.h>
#include <errno.h>

int main() {

        int ret = -1;

        ret = pmap_set 

Re: [Gluster-users] Hi new to Gluster

2015-09-15 Thread Nagaprasad Sathyanarayana
Hello Tarakeshwar,

Firstly, welcome to the Gluster community.  

Please visit 
http://www.gluster.org/community/documentation/index.php/GlusterFS_General_FAQ, 
which answers some of your queries about GlusterFS capabilities.
If you could share with us the nature of the I/O workload your application
generates, the performance needs of your application, and the type of client
access (NFS, CIFS, etc.) that users of your application need, we will be in a
better position to guide you.

Regards
Nagaprasad

- Original Message -
From: "M.Tarkeshwar Rao" 
To: gluster-users@gluster.org
Sent: Tuesday, 15 September, 2015 12:15:23 PM
Subject: [Gluster-users] Hi new to Gluster

Hi all, 
We have a product written in C++ on Red Hat.
In production our customers use our product with Veritas Cluster File System
for HA and with shared storage (EMC).
Initially this product ran on only a single node. In our last release we
made it scalable (more than one node).
Due to excessive locking (CFS) we are not getting the performance we need.
Can you please suggest whether Gluster will resolve our problem, as it is a
distributed file system?
Is Gluster POSIX compliant?
Can we use it in production? Please suggest.
If there is any other file system you would recommend, please suggest it.
Regards 
Tarkeshwar 



Re: [Gluster-users] Hi new to Gluster

2015-09-15 Thread M.Tarkeshwar Rao
Hi Nagaprasad,

Thanks for reply.


*Nature of I/O workload your application is generating:*
It is very high. Our product works on files. It collects data (multi-process
and multi-threaded) from remote nodes, processes it, and then sends it to
remote locations.
Our execution engine runs the processing business logic, so there is a huge
number of open, read, write, and rename calls in our code per second.

Recently we also made it scalable, so our business logic runs horizontally
across multiple nodes. We collect data in a common directory and read it from
the same directory for processing.

We are using Veritas Cluster File System, which locks at the directory level.
Since the same directory is accessed from multiple nodes, there is a delay in
processing, which reduces performance drastically.

To improve performance we made some changes in our application, splitting the
directories used for collection and processing.
This gave us a performance improvement.

We feel that if we change our file system we will get further improvement.
Please suggest.


Regards
Tarkeshwar

On Tue, Sep 15, 2015 at 2:27 PM, Nagaprasad Sathyanarayana <
nsath...@redhat.com> wrote:

> Hello Tarakeshwar,
>
> Firstly, welcome to the Gluster community.
>
> Please visit
> http://www.gluster.org/community/documentation/index.php/GlusterFS_General_FAQ,
> which answers some of your queries about GlusterFS capabilities.
> If you could share with us the nature of the I/O workload your application
> generates, the performance needs of your application, and the type of client
> access (NFS, CIFS, etc.) that users of your application need, we will be in
> a better position to guide you.
>
> Regards
> Nagaprasad
>
> - Original Message -
> From: "M.Tarkeshwar Rao" 
> To: gluster-users@gluster.org
> Sent: Tuesday, 15 September, 2015 12:15:23 PM
> Subject: [Gluster-users] Hi new to Gluster
>
> Hi all,
> We have a product written in C++ on Red Hat.
> In production our customers use our product with Veritas Cluster File
> System for HA and with shared storage (EMC).
> Initially this product ran on only a single node. In our last release we
> made it scalable (more than one node).
> Due to excessive locking (CFS) we are not getting the performance we need.
> Can you please suggest whether Gluster will resolve our problem, as it is a
> distributed file system?
> Is Gluster POSIX compliant?
> Can we use it in production? Please suggest.
> If there is any other file system you would recommend, please suggest it.
> Regards
> Tarkeshwar
>

[Gluster-users] Rebalance failures

2015-09-15 Thread Davy Croonen
Hi all

After expanding our cluster we are facing failures while rebalancing. In my
opinion this doesn't look good, so can anybody explain how these failures
could arise, how to fix them, and what the consequences can be?

$gluster volume rebalance public status
                Node   Rebalanced-files          size       scanned      failures       skipped         status   run time in secs
           ---------   ----------------   -----------   -----------   -----------   -----------   ------------   ----------------
           localhost                  0        0Bytes         49496         23464             0    in progress            3821.00
gfs01b-dcg.intnet.be                  0        0Bytes         49496             0             0    in progress            3821.00
gfs02a-dcg.intnet.be                  0        0Bytes         49497             0             0    in progress            3821.00
gfs02b-dcg.intnet.be                  0        0Bytes         49495             0             0    in progress            3821.00

Looking in public-rebalance.log, this is one paragraph that shows up; the
whole log is filled with these.

[2015-09-15 14:50:58.239554] I [dht-common.c:3309:dht_setxattr] 0-public-dht: 
fixing the layout of /ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355
[2015-09-15 14:50:58.239730] I [dht-selfheal.c:960:dht_fix_layout_of_directory] 
0-public-dht: subvolume 0 (public-replicate-0): 251980 chunks
[2015-09-15 14:50:58.239750] I [dht-selfheal.c:960:dht_fix_layout_of_directory] 
0-public-dht: subvolume 1 (public-replicate-1): 251980 chunks
[2015-09-15 14:50:58.239759] I 
[dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-public-dht: chunk 
size = 0x / 503960 = 0x214a
[2015-09-15 14:50:58.239784] I 
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-public-dht: assigning 
range size 0x7ffe51f8 to public-replicate-0
[2015-09-15 14:50:58.239791] I 
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-public-dht: assigning 
range size 0x7ffe51f8 to public-replicate-1
[2015-09-15 14:50:58.239816] I [MSGID: 109036] 
[dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-public-dht: Setting 
layout of /ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355 with 
[Subvol_name: public-replicate-0, Err: -1 , Start: 0 , Stop: 2147373559 ], 
[Subvol_name: public-replicate-1, Err: -1 , Start: 2147373560 , Stop: 
4294967295 ],
[2015-09-15 14:50:58.306701] I [dht-rebalance.c:1405:gf_defrag_migrate_data] 
0-public-dht: migrate data called on 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355
[2015-09-15 14:50:58.346531] W [client-rpc-fops.c:1090:client3_3_getxattr_cbk] 
0-public-client-2: remote operation failed: Permission denied. Path: 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355/1.1 rationale getallen.pdf 
(ba5220be-a462-4008-ac67-79abb16f4dd9). Key: trusted.glusterfs.pathinfo
[2015-09-15 14:50:58.354111] W [client-rpc-fops.c:1090:client3_3_getxattr_cbk] 
0-public-client-3: remote operation failed: Permission denied. Path: 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355/1.1 rationale getallen.pdf 
(ba5220be-a462-4008-ac67-79abb16f4dd9). Key: trusted.glusterfs.pathinfo
[2015-09-15 14:50:58.354166] E [dht-rebalance.c:1576:gf_defrag_migrate_data] 
0-public-dht: /ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355/1.1 rationale 
getallen.pdf: failed to get trusted.distribute.linkinfo key - Permission denied
[2015-09-15 14:50:58.356191] I [dht-rebalance.c:1649:gf_defrag_migrate_data] 
0-public-dht: Migration operation on dir 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355 took 0.05 secs

Now the file referenced here, 1.1 rationale getallen.pdf, exists on the hosts
referenced by 0-public-client-0 and 0-public-client-1, and not on the hosts
referenced by 0-public-client-2 and 0-public-client-3. So another question:
what is the system really trying to do here, and is this normal?
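For reference, the failing call is just an extended-attribute lookup; a
minimal sketch of the equivalent query from a client mount looks like this
(the mount point and file name below are placeholders, not my real paths, and
trusted.* xattrs need root):

#include <sys/types.h>
#include <sys/xattr.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void) {
        /* placeholder path on a FUSE mount of the "public" volume */
        const char *path = "/mnt/public/ka1hasselt/example.pdf";
        char value[4096];

        /* same key the rebalance log shows being fetched */
        ssize_t len = lgetxattr(path, "trusted.glusterfs.pathinfo",
                                value, sizeof(value) - 1);
        if (len < 0) {
                fprintf(stderr, "getxattr failed: %s\n", strerror(errno));
                return 1;
        }
        value[len] = '\0';
        printf("pathinfo: %s\n", value);
        return 0;
}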

I really hope somebody can give me a deeper understanding of what is going
on here.

Thanks in advance.

Kind regards
Davy

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Niels de Vos
On Sun, Sep 13, 2015 at 04:30:43PM +0530, Soumya Koduri wrote:
> 
> 
> On 09/13/2015 09:38 AM, Yaroslav Molochko wrote:
> >I wish this could be that simple:
> >root@PSC01SERV008:/var/lib# netstat -nap | grep 38465
> >root@PSC01SERV008:/var/lib# ss -n  | grep 38465
> >root@PSC01SERV008:/var/lib#
> >
> >2015-09-13 1:34 GMT+08:00 Atin Mukherjee:
> >
> >By any chance is your Gluster NFS server already running? Output
> >of netstat -nap | grep 38465 might give some clue?
> >
> >-Atin
> >Sent from one plus one
> >
> >On Sep 12, 2015 10:54 PM, "Yaroslav Molochko" wrote:
> >
> >Hello,
> >
> >I have a problem reported in logs:
> >==
> >[2015-09-12 13:56:06.271644] I [MSGID: 100030]
> >[glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> >/usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs -s
> >localhost --volfile-id gluster/nfs -p
> >/var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
> >-S /var/run/gluster/cb186678589f28e74c67da70fd06e736.socket)
> >[2015-09-12 13:56:06.277921] I [MSGID: 101190]
> >[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> >thread with index 1
> >[2015-09-12 13:56:07.284888] I
> >[rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
> >Configured rpc.outstanding-rpc-limit with value 16
> >[2015-09-12 13:56:07.292484] W [MSGID: 112153]
> >[mount3.c:3910:mnt3svc_init] 0-nfs-mount: Exports auth has been
> >disabled!
> >[2015-09-12 13:56:07.294357] E
> >[rpcsvc.c:1370:rpcsvc_program_register_portmap] 0-rpc-service:
> >Could not register with portmap 100005 3 38465
> 
> Port registration failed. Could you check '/var/log/messages' and dmesg to
> see if there are any errors logged? Is firewalld running on your system?
> Verify if the port is open to be used.

Registration of services at portmap can also fail because there is a
service with that program number and version registered already. Could
you check if that is the case?

$ rpcinfo -p | grep 100005
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd

If you see a similar output, check if you have standard NFS services
running. The rpc.mountd process will also register itself at the
portmapper, but it will conflict with the NFS-services that Gluster
provides. Make sure all NFS services (server and client) have been
disabled and stopped. After that, check with the 'rpcinfo' command if
any of nlockmgr, mount, status or nfs are registered. If that is the
case, you can unregister them one-by-one with commands like this:

# rpcinfo -d 100005 1
# rpcinfo -d 100005 3
...

After unregistering the services at the portmapper, you should be able
to start the Gluster-NFS service by restarting glusterd.
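If you prefer doing it programmatically, here is a minimal C sketch that does
the same as the 'rpcinfo -d' commands above; I am using the standard MOUNT
program number (100005), and you should adjust the versions to whatever
rpcinfo actually lists on your system:

#include <rpc/rpc.h>
#include <stdio.h>

int main(void) {
        /* unregister stale MOUNT registrations at the local portmapper,
         * equivalent to "rpcinfo -d 100005 1" and "rpcinfo -d 100005 3" */
        unsigned long versions[] = { 1, 3 };
        for (int i = 0; i < 2; i++) {
                if (!pmap_unset(100005, versions[i]))
                        fprintf(stderr, "pmap_unset(100005, %lu) failed\n",
                                versions[i]);
        }
        return 0;
}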

HTH,
Niels

> 
> Thanks,
> Soumya
> >[2015-09-12 13:56:07.294398] E [MSGID: 112088]
> >[nfs.c:341:nfs_init_versions] 0-nfs: Required program  MOUNT3
> >registration failed
> >[2015-09-12 13:56:07.294413] E [MSGID: 112109] [nfs.c:1482:init]
> >0-nfs: Failed to initialize protocols
> >[2015-09-12 13:56:07.294426] E [MSGID: 101019]
> >[xlator.c:428:xlator_init] 0-nfs-server: Initialization of
> >volume 'nfs-server' failed, review your volfile again
> >[2015-09-12 13:56:07.294438] E
> >[graph.c:322:glusterfs_graph_init] 0-nfs-server: initializing
> >translator failed
> >[2015-09-12 13:56:07.294448] E
> >[graph.c:661:glusterfs_graph_activate] 0-graph: init failed
> >[2015-09-12 13:56:07.294781] W
> >[glusterfsd.c:1219:cleanup_and_exit]
> >(-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x11a) [0x7fbe9c754b7a]
> >-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x123)
> >[0x7fbe9c74fcb3] -->/usr/sbin/glusterfs(cleanup_and_exit+0x59)
> >[0x7fbe9c74f329] ) 0-: received signum (0), shutting down
> >===
> >
> >I've checked the page:
> >
> > http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_NFS_Frequently_Asked_Questions
> >
> >I've found a Red Hat report that it's necessary to remove -w
> >from rpcbind because sometimes it causes problems.
> >I did all that but still no luck on one of the servers; what is
> >interesting, the other server (peered) is working fine without
> >any problems.
> >
> >root@PSC01SERV008:/var/lib/glusterd/nfs# systemctl status nfs
> >● nfs.service
> >Loaded: not-found (Reason: No such file or directory)
> >Active: inactive (dead)
> >
> >root@PSC01SERV008:/var/lib/glusterd/nfs# systemctl status rpcbind
> >● 

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Yaroslav Molochko
I have two identical hosts managed by configuration management; it was
working with 3.5 and stopped working with 3.7 on ONE host. Okay, I've done
what you asked, and here is the result:
==
root@PSC01SERV008:~# systemctl restart rpcbind
root@PSC01SERV008:~# /etc/init.d/glusterfs-server restart
Restarting glusterfs-server (via systemctl): glusterfs-server.service.
root@PSC01SERV008:~# iptables -nvL
Another app is currently holding the xtables lock. Perhaps you want to use
the -w option?
root@PSC01SERV008:~# iptables -nvL
Chain INPUT (policy ACCEPT 3223K packets, 1760M bytes)
 pkts bytes target prot opt in out source
destination

Chain FORWARD (policy ACCEPT 1478K packets, 1926M bytes)
 pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 4697K packets, 1354M bytes)
 pkts bytes target prot opt in out source
destination
root@PSC01SERV008:~# cat /etc/hosts.allow
# /etc/hosts.allow: list of hosts that are allowed to access the system.
#   See the manual pages hosts_access(5) and
hosts_options(5).
#
# Example:ALL: LOCAL @some_netgroup
# ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
#
# If you're going to protect the portmapper use the name "rpcbind" for the
# daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
#
ALL: 127.0.0.1 : ALLOW
root@PSC01SERV008:~# gluster volume status
Status of volume: discover-music-prod-music-app-logs
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.116.254.17:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs 49152 0  Y
17125
Brick 10.116.254.18:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs 49152 0  Y
24663
NFS Server on localhost N/A   N/AN
N/A
Self-heal Daemon on localhost   N/A   N/AY
17693
NFS Server on 10.116.254.17 2049  0  Y
17146
Self-heal Daemon on 10.116.254.17   N/A   N/AY
17151

Task Status of Volume discover-music-prod-music-app-logs
--
There are no active volume tasks



For the record, I've reinstalled and restarted everything I could, and I've
checked everything I could find on Google, and this doesn't work. Please,
let's move on to something more sophisticated than restarting glusterfs... I
would not have contacted you if I had not tried restarting it a dozen times.

Do you have any debugging I can run to see what is really happening?


2015-09-15 1:55 GMT+08:00 Soumya Koduri :

> Could you try
> * disabling iptables (& firewalld if enabled)
> * restart rpcbind service
> * restart glusterd
>
> If this doesn't work (as mentioned in one of the forums),
> add the below line to the '/etc/hosts.allow' file.
>
> ALL: 127.0.0.1 : ALLOW
>
> Restart rpcbind and glusterd services.
>
> Thanks,
> Soumya
>
>
> On 09/14/2015 10:39 PM, Yaroslav Molochko wrote:
>
>> Could not register with portmap
>>
>

Re: [Gluster-users] autosnap feature?

2015-09-15 Thread Alastair Neil
Not really. This is useful, as it distributes snapshot control over all the
cluster members, but I am looking for the ability to specify a snapshot
schedule like this:
frequent snapshots every 15 mins, keeping 4 snapshots
hourly snapshots every hour, keeping 24 snapshots
daily snapshots every day, keeping 31 snapshots
weekly snapshots every week, keeping 7 snapshots
monthly snapshots every month, keeping 12 snapshots.

Clearly this could be handled via the scheduling as described, but the
feature that is missing is user-friendly labeling, so that users don't have
to parse long timestamps in the snapshot name to figure out which is the
most recent snapshot.  Ideally they could have labels like "Now", "Fifteen
Minutes Ago", "Thirty Minutes Ago", "Sunday", "Last Week" etc.  The system
should handle rotating the labels automatically when necessary.  So some
sort of ability to create and manipulate labels on snapshots, and then
expose them as links in the .snaps directory, would probably be a start.
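Something along these lines is roughly what I have in mind for mapping a
snapshot's age to a friendly label; this is purely illustrative on my part,
not an existing Gluster interface, and the buckets and names are made up:

#include <stdio.h>
#include <time.h>

static const struct { time_t max_age; const char *label; } buckets[] = {
        {           15 * 60, "Now"                 },
        {           30 * 60, "Fifteen Minutes Ago" },
        {         24 * 3600, "Today"               },
        {     7 * 24 * 3600, "This Week"           },
        {    31 * 24 * 3600, "This Month"          },
};

/* pick the label for a snapshot taken at snap_time, given the current time */
static const char *label_for(time_t snap_time, time_t now) {
        time_t age = now - snap_time;
        for (size_t i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++)
                if (age <= buckets[i].max_age)
                        return buckets[i].label;
        return "Older";
}

int main(void) {
        time_t now = time(NULL);
        /* a snapshot taken ten minutes ago would get the freshest label */
        printf("%s\n", label_for(now - 10 * 60, now));
        return 0;
}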

-Alastair



On 15 September 2015 at 01:35, Rajesh Joseph  wrote:

>
>
> - Original Message -
> > From: "Alastair Neil" 
> > To: "gluster-users" 
> > Sent: Friday, September 11, 2015 2:24:32 AM
> > Subject: [Gluster-users] autosnap feature?
> >
> > Wondering if there were any plans for a flexible and easy to use
> > snapshotting feature along the lines of the zfs autosnap scripts. I
> > imagine at the least it would need the ability to rename snapshots.
> >
>
> Are you looking for something like this ?
>
> http://www.gluster.org/community/documentation/index.php/Features/Scheduling_of_Snapshot
>

[Gluster-users] Minutes from todays Gluster Community Bug Triage meeting (2015-09-15)

2015-09-15 Thread Mohammed Rafi K C
Hi All,

The minutes of the weekly community bug triage meeting held today can be
found at:

Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-15/gluster-meeting.2015-09-15-12.02.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-15/gluster-meeting.2015-09-15-12.02.txt
Log:
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-15/gluster-meeting.2015-09-15-12.02.log.html


Meeting summary
---
* roll call  (rafi, 12:03:44)
  * Agenda for today: https://public.pad.fsfe.org/p/gluster-bug-triage
(rafi, 12:04:17)

* Status of last weeks action items  (rafi, 12:07:02)

* soumya to send a reminder to the users- and devel- ML about (and
  how to find and fix) Coverity defects  (rafi, 12:08:28)

* Group triage  (rafi, 12:14:45)
  * https://public.pad.fsfe.org/p/gluster-bugs-to-triage  (rafi,
12:14:54)
  * we got 23 new interesting bugs to triage  (rafi, 12:15:32)

* open floor  (rafi, 12:35:32)
  * ACTION: ndevos to add an agenda item for the gluster meeting to discuss
    cleaning up older (no longer supported) version tags and bugs from
    Bugzilla  (rafi, 12:44:24)

Meeting ended at 13:06:24 UTC.




Action Items

* ndevos to add an agenda item for the gluster meeting to discuss cleaning
  up older (no longer supported) version tags and bugs from Bugzilla




Action Items, by person
---
* ndevos
  * ndevos to add an agenda item for the gluster meeting to discuss cleaning
    up older (no longer supported) version tags and bugs from Bugzilla
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* rafi (61)
* ndevos (41)
* kkeithley_ (24)
* skoduri (11)
* jiffin (6)
* hagarth (3)
* zodbot (2)
* amye (2)
* csim (1)


Regards
Rafi KC 
