I was testing nfs-ganesha V2.4.0 using the POSIX (VFS) FSAL on a CephFS
file system mounted via the Ceph FUSE client, and I am seeing
ganesha.nfsd killed by SIGABRT.  I was using the same ganesha.conf file
that worked with V2.3.2 and CephFS.  Here is the gdb backtrace:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/opt/keeper/bin/ganesha.nfsd -F -L
Program terminated with signal SIGABRT, Aborted.
#0  0x00007f4fcf23ecc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56	../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x00007f4fcf23ecc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007f4fcf2420d8 in __GI_abort () at abort.c:89
#2  0x00007f4fcf237b86 in __assert_fail_base (fmt=0x7f4fcf388830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x575fef "refcount > 0", function=function@entry=0x5763c0 <__PRETTY_FUNCTION__.20389> "dec_state_owner_ref") at assert.c:92
#3  0x00007f4fcf237c32 in __GI___assert_fail (assertion=0x575fef "refcount > 0", file=0x575878, line=917, function=0x5763c0 <__PRETTY_FUNCTION__.20389> "dec_state_owner_ref") at assert.c:101
#4  0x00000000004c4bd2 in dec_state_owner_ref (owner=0x7f4f4085d5e8) at /home/keeper/work/ganesha/ganesha_2.4.0/nfs-ganesha/src/SAL/state_misc.c:917
#5  0x00000000004c4fac in uncache_nfs4_owner (nfs4_owner=0x7f4f4085d638) at
#6  0x000000000045693d in reap_expired_open_owners () at
#7  0x0000000000456bda in reaper_run (ctx=0x7f4fc9ffc180) at
#8  0x000000000050ab9d in fridgethr_start_routine (arg=0x7f4fc9ffc180)
#9  0x00007f4fcfa2c182 in start_thread (arg=0x7f4f43ffe700) at
#10 0x00007f4fcf30247d in clone () at

The backtraces for all threads (gdb "thread apply all bt") are at
http://pasted.co/18de09f1, and the output of "thread apply all bt full"
is at http://pasted.co/72ed25ef. I can upload the core file, binary,
and library files to our public FTP site if needed.

Other info:

# /opt/keeper/bin/ganesha.nfsd -v
ganesha.nfsd compiled on Sep 22 2016 at 18:34:16
Release = V2.4.0
Release comment = GANESHA file server is 64 bits compliant and
supports NFS v3,4.0,4.1 (pNFS) and 9P
Git HEAD = 0c209a710292e24a867a6e9b9281a89563fdb148
Git Describe = V2.4.0-0-g0c209a7

# cat ganesha.conf
LOG {
    COMPONENTS {
        ALL = INFO;
        #FSAL = DEBUG;
    }
}

# define CephFS FUSE export
EXPORT {
    Export_ID = 41;
    Path = /cephfsFUSE/top;
    Pseudo = /cephfsFUSE/top;
    SecType = none, sys;
    Protocols = 3, 4;
    Transports = TCP;
    Access_Type = RW;
    Squash = No_Root_Squash;

    FSAL {
        Name = VFS;
    }
}

OS: Ubuntu 14.04
# uname -a
Linux ede-c2-gw01 4.8.0-rc7-4.8.0rc7ksafe #1 SMP Sun Sep 18 22:23:34
EDT 2016 x86_64 x86_64 x86_64 GNU/Linux

# ceph -v
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

# cat /proc/mounts  | grep ceph
ceph-fuse /cephfsFUSE fuse.ceph-fuse
rw,noatime,user_id=0,group_id=0,allow_other 0 0

The test ran on 3 Ubuntu 14.04 NFS clients, each with 6 processes
writing 11,000 256 KB files into separate directory trees with 11
files per lowest-level node. On each client, 3 processes wrote to an
NFSv3 mount and 3 wrote to an NFSv4 mount.

The reason for not using the CEPHFS FSAL is that with Ganesha V2.3.2 I
had some issues with file overwrites. After finishing my initial 2.4.0
tests with the POSIX interface, I was going to retest 2.4.0 with the
CEPHFS FSAL.

Best regards,

Nfs-ganesha-devel mailing list
