On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:
> Hi, 
> 
> [root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log 
> [2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init] 
> 0-nfs-server: initializing translator failed 
> [2014-10-18 07:41:06.136040] E [graph.c:502:glusterfs_graph_activate] 
> 0-graph: init failed 
> pending frames: 
> frame : type(0) op(0) 
> 
> patchset: git://git.gluster.com/glusterfs.git 
> signal received: 11 
> time of crash: 2014-10-18 07:41:06 
> configuration details: 
> argp 1 
> backtrace 1 
> dlfcn 1 
> fdatasync 1 
> libpthread 1 
> llistxattr 1 
> setfsid 1 
> spinlock 1 
> epoll.h 1 
> xattr.h 1 
> st_atim.tv_nsec 1 
> package-string: glusterfs 3.5.2 

This definitely is a gluster/nfs issue. For whatever reason, the
gluster/nfs server crashes :-/ The log does not show enough details;
some more lines before the crash are needed.
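For reference, pulling that extra context out of the log could look
something like the sketch below; the log path and the 50-line window
are my assumptions, and `crash_context` is just an illustrative helper,
not a gluster tool:

```shell
# Sketch: print up to 50 log lines leading up to the first
# graph-init failure in nfs.log (where the translator-init
# errors usually sit). Override LOG if your path differs.
LOG="${LOG:-/var/log/glusterfs/nfs.log}"

crash_context() {
    # Line number of the first "glusterfs_graph_init" error, if any.
    line=$(grep -n 'glusterfs_graph_init' "$LOG" 2>/dev/null |
           head -n 1 | cut -d: -f1)
    [ -n "$line" ] || { echo "no graph_init error in $LOG"; return 0; }
    start=$(( line > 50 ? line - 50 : 1 ))
    sed -n "${start},${line}p" "$LOG"
}

crash_context
```

Attaching that whole window to the bug, rather than only the last 20
lines, should show which translator failed to initialize.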

There might be an issue where the NFS RPC-services cannot register. I
think I have seen similar crashes before, but never found the cause. You
should check with the 'rpcinfo' command to see if there are any NFS
RPC-services registered (nfs, lockd, mount, lockmgr). If there are any,
verify that no other NFS processes are running; this includes
NFS-mounts in /etc/fstab and similar.
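As a sketch, those checks could be scripted like this; the service
names follow the usual Linux rpcbind naming, and the
`check_rpc_services` helper is my own illustration, not something
gluster ships:

```shell
# Sketch: verify that the usual NFS RPC services are registered
# with rpcbind, then look for a competing kernel NFS server.
required="nfs mountd nlockmgr status"

check_rpc_services() {
    # $1 = output of `rpcinfo -p`; prints one line per missing service.
    for svc in $required; do
        echo "$1" | grep -qw "$svc" || echo "missing: $svc"
    done
}

check_rpc_services "$(rpcinfo -p localhost 2>/dev/null)"

# Stray NFS mounts or a kernel nfsd can hold the ports gluster/nfs needs:
grep -w nfs /etc/fstab /proc/mounts 2>/dev/null || true
```

If everything is listed but gluster/nfs still crashes, that points back
at the crash itself rather than a registration conflict.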

Could you file a bug and attach the full (gzipped) nfs.log? Try to
explain as many details of the setup as you can, and add a link to the
archives of this thread. Please post the URL of the bug in a response to
this thread. A crashing process is never good, even when it could be
caused by external processes.

Link to file a bug:
- https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=nfs&version=3.5.2

Thanks,
Niels


> 
> Regards, 
> 
> Demeter Tibor 
> 
> Email: tdemeter@itsmart.hu 
> Skype: candyman_78 
> Phone: +36 30 462 0500 
> Web: www.itsmart.hu 
> 
> IT SMART KFT. 
> 2120 Dunakeszi Wass Albert utca 2. I. em 9. 
> Telefon: +36 30 462-0500 Fax: +36 27 637-486 
> 
> [EN] This message and any attachments are confidential and privileged and 
> intended for the use of the addressee only. If you have received this 
> communication in error, please notify the sender by reply e-mail and delete 
> this message from your system. Please note that Internet e-mail guarantees 
> neither the confidentiality nor the proper receipt of the message sent. The 
> data deriving from our correspondence with you are included in a file of 
> ITSMART Ltd which exclusive purpose is to manage the communications of the 
> company; under the understanding that, in maintaining said correspondence, 
> you authorize the treatment of such data for the mentioned purpose. You are 
> entitled to exercise your rights of access, rectification, cancellation and 
> opposition by addressing such written application to address above. 
> 
> ----- Original message -----
> 
> > Maybe share the last 15-20 lines of your /var/log/glusterfs/nfs.log for the
> > consideration of everyone on the list? Thanks.
> 
> > From: Demeter Tibor <[email protected]>;
> > To: Anirban Ghoshal <[email protected]>;
> > Cc: gluster-users <[email protected]>;
> > Subject: Re: [Gluster-users] NFS not start on localhost
> > Sent: Sat, Oct 18, 2014 10:36:36 AM
> 
> > 
> > Hi,
> 
> > I've tried these things:
> 
> > - nfs.disable on-of
> > - iptables disable
> > - volume stop-start
> 
> > but the result is the same.
> > So, when I create a new volume, everything is fine.
> > After a reboot, NFS won't listen on localhost (only on the server that has brick0).
> 
> > CentOS 7 with the latest oVirt.
> 
> > Regards,
> 
> > Tibor
> 
> > ----- Original message -----
> 
> > > It happens with me sometimes. Try `tail -n 20 /var/log/glusterfs/nfs.log`.
> > > You will probably find something there that will help your cause. In
> > > general, if you just wish to start the thing up without going into the
> > > why of it, try `gluster volume set engine nfs.disable on` followed by
> > > `gluster volume set engine nfs.disable off`. It does the trick quite
> > > often for me because it is a polite way to ask mgmt/glusterd to try and
> > > respawn the NFS server process if need be. But keep in mind that this
> > > will cause an (albeit small) service interruption to all clients
> > > accessing volume engine over NFS.
> > 
> 
> > > Thanks,
> > 
> > > Anirban
> > 
> 
> > > On Saturday, 18 October 2014 1:03 AM, Demeter Tibor <[email protected]>
> > > wrote:
> > 
> 
> > > Hi,
> > 
> 
> > > I have set up a GlusterFS volume with NFS support.
> > 
> 
> > > I don't know why, but after a reboot NFS does not listen on localhost,
> > > only on gs01.
> > 
> 
> > > [root@node0 ~]# gluster volume info engine
> > >
> > > Volume Name: engine
> > > Type: Replicate
> > > Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
> > > Status: Started
> > > Number of Bricks: 1 x 2 = 2
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: gs00.itsmart.cloud:/gluster/engine0
> > > Brick2: gs01.itsmart.cloud:/gluster/engine1
> > > Options Reconfigured:
> > > storage.owner-uid: 36
> > > storage.owner-gid: 36
> > > performance.quick-read: off
> > > performance.read-ahead: off
> > > performance.io-cache: off
> > > performance.stat-prefetch: off
> > > cluster.eager-lock: enable
> > > network.remote-dio: enable
> > > cluster.quorum-type: auto
> > > cluster.server-quorum-type: server
> > > auth.allow: *
> > > nfs.disable: off
> > >
> > > [root@node0 ~]# gluster volume status engine
> > >
> > > Status of volume: engine
> > > Gluster process                               Port   Online  Pid
> > > ------------------------------------------------------------------------------
> > > Brick gs00.itsmart.cloud:/gluster/engine0     50158  Y       3250
> > > Brick gs01.itsmart.cloud:/gluster/engine1     50158  Y       5518
> > > NFS Server on localhost                       N/A    N       N/A
> > > Self-heal Daemon on localhost                 N/A    Y       3261
> > > NFS Server on gs01.itsmart.cloud              2049   Y       5216
> > > Self-heal Daemon on gs01.itsmart.cloud        N/A    Y       5223
> > 
> 
> > > Can anybody help me?
> > 
> 
> > > Thanks in advance.
> > 
> 
> > > Tibor
> > 
> 
> > > _______________________________________________
> > 
> > > Gluster-users mailing list
> > 
> > > [email protected]
> > 
> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> > 


