Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-11 Thread qingwei wei
Hi Krutika,

Thanks. Looking forward to your reply.

Cw

On Mon, Dec 12, 2016 at 2:27 PM, Krutika Dhananjay  wrote:
> Hi,
>
> First of all, apologies for the late reply. Couldn't find time to look into
> this
> until now.
>
> Changing the SHARD_MAX_INODES value from 16384 to 16 is a cool trick!
> Let me try that as well and get back to you in some time.
>
> -Krutika
>
> On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei  wrote:
>>
>> Hi,
>>
>> With the help from my colleague, we made some changes to the code to
>> reduce SHARD_MAX_INODES (from 16384 to 16) and also added printing of
>> blk_num inside __shard_update_shards_inode_list. We then ran fio to
>> first do a sequential write of a 300MB file. After this run completed,
>> we then used fio to generate random writes (8k). During this
>> random-write run, we found a situation where blk_num is a negative
>> number, which triggers the following assertion.
>>
>> GF_ASSERT (lru_inode_ctx->block_num > 0);
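>>
>> For reference, the local change looks roughly like this (a paraphrase
>> of our debugging patch, not the exact diff):
>>
>> /* shard.h: shrink the LRU cap so eviction is reached quickly */
>> #define SHARD_MAX_INODES 16   /* upstream default: 16384 */
>>
>> /* shard.c, __shard_update_shards_inode_list(): log the block number
>>  * of the entry about to be evicted, just before the assertion */
>> gf_log (this->name, GF_LOG_INFO, "evicting shard inode, blk_num=%d",
>>         lru_inode_ctx->block_num);
>> GF_ASSERT (lru_inode_ctx->block_num > 0);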
>>
>> [2016-12-08 03:16:34.217582] E
>> [shard.c:468:__shard_update_shards_inode_list]
>>
>> (-->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)
>> [0x7f7300930b6d]
>>
>> -->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)
>> [0x7f7300930b1e]
>>
>> -->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x36b)
>> [0x7f730092bf5b] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0
>>
>> Also, there is a segmentation fault shortly after this assertion, and
>> after that fio exits with an error.
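>>
>> The fio jobs were along these lines (a sketch only; the 300MB
>> sequential write and the 8k random writes are from our runs, while
>> the mount path and the remaining parameters are illustrative):
>>
>> [global]
>> ; assumed mount point for the testSF1 volume
>> filename=/mnt/testSF1/fio.dat
>> ioengine=sync
>> size=300m
>>
>> [seq-write]
>> rw=write
>> bs=1m
>>
>> [rand-write]
>> ; stonewall serializes this job after the sequential write
>> stonewall
>> rw=randwrite
>> bs=8k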
>>
>> frame : type(0) op(0)
>> patchset: git://git.gluster.com/glusterfs.git
>> signal received: 11
>> time of crash:
>> 2016-12-08 03:16:34
>> configuration details:
>> argp 1
>> backtrace 1
>> dlfcn 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 3.7.17
>>
>> /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x92)[0x7f730e900332]
>> /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x2d5)[0x7f730e9250b5]
>> /lib64/libc.so.6(+0x35670)[0x7f730d1f1670]
>>
>> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x1d4)[0x7f730092bdc4]
>>
>> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)[0x7f7300930b1e]
>>
>> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)[0x7f7300930b6d]
>>
>> /usr/local/lib/glusterfs/3.7.17/xlator/cluster/distribute.so(dht_lookup_cbk+0x380)[0x7f7300b8e240]
>>
>> /usr/local/lib/glusterfs/3.7.17/xlator/protocol/client.so(client3_3_lookup_cbk+0x769)[0x7f7300df4989]
>> /usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f730e6ce010]
>> /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x1df)[0x7f730e6ce2ef]
>> /usr/local/lib/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f730e6ca483]
>>
>> /usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x6344)[0x7f73034dc344]
>>
>> /usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x8f44)[0x7f73034def44]
>> /usr/local/lib/libglusterfs.so.0(+0x925aa)[0x7f730e96c5aa]
>> /lib64/libpthread.so.0(+0x7dc5)[0x7f730d96ddc5]
>>
>> Core dump:
>>
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>> Core was generated by `/usr/local/sbin/glusterfs
>> --volfile-server=10.217.242.32 --volfile-id=/testSF1'.
>> Program terminated with signal 11, Segmentation fault.
>> #0  list_del_init (old=0x7f72f4003de0) at
>> ../../../../libglusterfs/src/list.h:87
>> 87              old->prev->next = old->next;
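>>
>> For context, list_del_init() in libglusterfs/src/list.h looks roughly
>> like this (paraphrased, not the exact upstream code):
>>
>> static inline void
>> list_del_init (struct list_head *old)
>> {
>>         old->prev->next = old->next;  /* list.h:87 - faults if
>>                                          old->prev points into
>>                                          freed memory */
>>         old->next->prev = old->prev;
>>         old->next = old;
>>         old->prev = old;
>> }
>>
>> A crash on the first statement means old->prev is already garbage,
>> i.e. the LRU node being unlinked was freed or its memory reused.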
>>
>> bt
>>
>> #0  list_del_init (old=0x7f72f4003de0) at
>> ../../../../libglusterfs/src/list.h:87
>> #1  __shard_update_shards_inode_list
>> (linked_inode=linked_inode@entry=0x7f72fa7a6e48,
>> this=this@entry=0x7f72fc0090c0, base_inode=0x7f72fa7a5108,
>> block_num=block_num@entry=10) at shard.c:469
>> #2  0x7f7300930b1e in shard_link_block_inode
>> (local=local@entry=0x7f730ec4ed00, block_num=10, inode=<optimized out>,
>> buf=buf@entry=0x7f730180c990) at shard.c:1559
>> #3  0x7f7300930b6d in shard_common_lookup_shards_cbk
>> (frame=0x7f730c611204, cookie=<optimized out>, this=0x7f72fc0090c0,
>> op_ret=0,
>> op_errno=<optimized out>, inode=<optimized out>,
>> buf=0x7f730180c990, xdata=0x7f730c029cdc, postparent=0x7f730180ca00)
>> at shard.c:1596
>> #4  0x7f7300b8e240 in dht_lookup_cbk (frame=0x7f730c61dc40,
>> cookie=<optimized out>, this=<optimized out>, op_ret=0, op_errno=22,
>> inode=0x7f72fa7a6e48, stbuf=0x7f730180c990, xattr=0x7f730c029cdc,
>> postparent=0x7f730180ca00) at dht-common.c:2362
>> #5  0x7f7300df4989 in client3_3_lookup_cbk (req=<optimized out>,
>> iov=<optimized out>, count=<optimized out>, myframe=0x7f730c616ab4)
>> at client-rpc-fops.c:2988
>> #6  0x7f730e6ce010 in rpc_clnt_handle_reply
>> (clnt=clnt@entry=0x7f72fc04c040, pollin=pollin@entry=0x7f72fc079560)
>> at rpc-clnt.c:796
>> #7  0x7f730e6ce2ef in rpc_clnt_notify (trans=<optimized out>,
>> mydata=0x7f72fc04c070, event=<optimized out>, data=0x7f72fc079560)
>> at rpc-clnt.c:967

Re: [Gluster-devel] "du" a large count of files in a directory causes mounted glusterfs filesystem coredump

2016-12-11 Thread Raghavendra Gowdappa


- Original Message -
> From: "Cynthia Zhou (Nokia - CN/Hangzhou)" 
> To: "Raghavendra Gowdappa" , "George Lian (Nokia - 
> CN/Hangzhou)" 
> Cc: Gluster-devel@gluster.org, "Carlos Chinea (Nokia - FI/Espoo)" 
> , "Kari Hautio (Nokia -
> FI/Espoo)" , linux-fsde...@vger.kernel.org, "Bingxuan 
> Zhang (Nokia - CN/Hangzhou)"
> , "Deqian Li (Nokia - CN/Hangzhou)" 
> , "Jan Zizka (Nokia - CZ/Prague)"
> , "Xiaohui Bao (Nokia - CN/Hangzhou)" 
> 
> Sent: Monday, December 12, 2016 10:59:14 AM
> Subject: RE: [Gluster-devel] "du" a large count of files in a directory
> causes mounted glusterfs filesystem coredump
> 
> Hi glusterfs expert:
> From
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/datastructure-inode/
> there is the following description:
> 
> when the lru limit of the inode table has been exceeded, an inode is
> removed from the inode table and eventually destroyed
> 
> In the glusterfs source code, the function inode_table_new contains the
> following lines, so lru_limit is not infinite:
> 
> /* In case FUSE is initing the inode table. */
> if (lru_limit == 0)
> lru_limit = DEFAULT_INODE_MEMPOOL_ENTRIES; // 32 * 1024

That's just a reuse of the variable lru_limit. Note that the value passed by
the caller is already stored in the new itable at:
https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/inode.c#L1582

Also, as can be seen here
https://github.com/gluster/glusterfs/blob/master/xlators/mount/fuse/src/fuse-bridge.c#L5205

FUSE passes 0 as the lru-limit, which is treated as infinite.

So, I don't see an issue here.
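
To make the point concrete, here is a self-contained sketch of the
pattern (paraphrased from inode_table_new(), not the exact upstream
code): the caller's value is stored in the table first, and the local
variable is reused afterwards only to size the memory pools.

#include <stdio.h>
#include <stdint.h>

#define DEFAULT_INODE_MEMPOOL_ENTRIES (32 * 1024)

struct itable {
        uint32_t lru_limit;     /* what eviction actually checks   */
        uint32_t mempool_size;  /* only an allocation-sizing hint  */
};

static void
itable_init (struct itable *t, uint32_t lru_limit)
{
        t->lru_limit = lru_limit;       /* 0 stays 0: infinite */

        if (lru_limit == 0)             /* local variable reused ... */
                lru_limit = DEFAULT_INODE_MEMPOOL_ENTRIES;

        t->mempool_size = lru_limit;    /* ... for pool sizing only */
}

int
main (void)
{
        struct itable t;
        itable_init (&t, 0);            /* what FUSE passes */
        printf ("lru_limit=%u mempool=%u\n", t.lru_limit, t.mempool_size);
        /* prints lru_limit=0 mempool=32768: eviction never triggers */
        return 0;
}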

> Is it possible that glusterfs removes inodes from the inode table because
> the lru limit has been reached?
> From the backtrace pasted by George, it seems the inode table address is
> invalid, which caused the coredump.
> 
> Best regards,
> Cynthia (周琳)
> MBB SM HETRAN SW3 MATRIX
> Storage
> Mobile: +86 (0)18657188311
> 
> -Original Message-
> From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
> Sent: Monday, December 12, 2016 12:34 PM
> To: Lian, George (Nokia - CN/Hangzhou) 
> Cc: Gluster-devel@gluster.org; Chinea, Carlos (Nokia - FI/Espoo)
> ; Hautio, Kari (Nokia - FI/Espoo)
> ; linux-fsde...@vger.kernel.org; Zhang, Bingxuan
> (Nokia - CN/Hangzhou) ; Zhou, Cynthia (Nokia -
> CN/Hangzhou) ; Li, Deqian (Nokia - CN/Hangzhou)
> ; Zizka, Jan (Nokia - CZ/Prague) ;
> Bao, Xiaohui (Nokia - CN/Hangzhou) 
> Subject: Re: [Gluster-devel] "du" a large count of files in a directory
> causes mounted glusterfs filesystem coredump
> 
> 
> 
> - Original Message -
> > From: "George Lian (Nokia - CN/Hangzhou)" 
> > To: Gluster-devel@gluster.org, "Carlos Chinea (Nokia - FI/Espoo)"
> > , "Kari Hautio (Nokia -
> > FI/Espoo)" , linux-fsde...@vger.kernel.org
> > Cc: "Bingxuan Zhang (Nokia - CN/Hangzhou)" ,
> > "Cynthia Zhou (Nokia - CN/Hangzhou)"
> > , "Deqian Li (Nokia - CN/Hangzhou)"
> > , "Jan Zizka (Nokia - CZ/Prague)"
> > , "Xiaohui Bao (Nokia - CN/Hangzhou)"
> > 
> > Sent: Friday, December 9, 2016 2:50:44 PM
> > Subject: Re: [Gluster-devel] "du" a large count of files in a directory
> > causes mounted glusterfs filesystem coredump
> > 
> > The life cycle of an inode in glusterfs is described in
> > https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/datastructure-inode/
> > It says that “An inode is removed from the inode table and eventually
> > destroyed when an unlink or rmdir operation is performed on a
> > file/directory, or the lru limit of the inode table has been exceeded.”
> > Now the default value for the inode lru limit in glusterfs is 32k.
> > When we “du” or “ls -R” a large number of files in a directory with more
> > than 32K entries, it can easily exceed the lru limit.
> 
> The glusterfs mount process has an infinite lru limit. The reason is that
> glusterfs passes the address of the inode object as the "nodeid" (aka
> identifier) representing the inode. For all future references to the
> inode, the kernel just sends back this nodeid. So, glusterfs cannot free
> up the inode as long as the kernel remembers it. In other words, the
> inode table size in the mount process is dependent on the dentry-cache or
> inode table size in the fuse kernel module. So, for an inode to be freed
> up in the mount process:
> 1. There should not be any ongoing ops referring to the inode, and
> 2. The kernel should send as many forgets as the number of lookups it
> has done.

Re: [Gluster-devel] Assertion failed: lru_inode_ctx->block_num > 0

2016-12-11 Thread Krutika Dhananjay
Hi,

First of all, apologies for the late reply. Couldn't find time to look into
this
until now.

Changing the SHARD_MAX_INODES value from 16384 to 16 is a cool trick!
Let me try that as well and get back to you in some time.

-Krutika

On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei  wrote:

> Hi,
>
> With the help from my colleague, we made some changes to the code to
> reduce SHARD_MAX_INODES (from 16384 to 16) and also added printing of
> blk_num inside __shard_update_shards_inode_list. We then ran fio to
> first do a sequential write of a 300MB file. After this run completed,
> we then used fio to generate random writes (8k). During this
> random-write run, we found a situation where blk_num is a negative
> number, which triggers the following assertion.
>
> GF_ASSERT (lru_inode_ctx->block_num > 0);
>
> [2016-12-08 03:16:34.217582] E
> [shard.c:468:__shard_update_shards_inode_list]
> (-->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)
> [0x7f7300930b6d]
> -->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)
> [0x7f7300930b1e]
> -->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x36b)
> [0x7f730092bf5b] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0
>
> Also, there is a segmentation fault shortly after this assertion, and
> after that fio exits with an error.
>
> frame : type(0) op(0)
> patchset: git://git.gluster.com/glusterfs.git
> signal received: 11
> time of crash:
> 2016-12-08 03:16:34
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.7.17
> /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x92)[0x7f730e900332]
> /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x2d5)[0x7f730e9250b5]
> /lib64/libc.so.6(+0x35670)[0x7f730d1f1670]
> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x1d4)[0x7f730092bdc4]
> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)[0x7f7300930b1e]
> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)[0x7f7300930b6d]
> /usr/local/lib/glusterfs/3.7.17/xlator/cluster/distribute.so(dht_lookup_cbk+0x380)[0x7f7300b8e240]
> /usr/local/lib/glusterfs/3.7.17/xlator/protocol/client.so(client3_3_lookup_cbk+0x769)[0x7f7300df4989]
> /usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f730e6ce010]
> /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x1df)[0x7f730e6ce2ef]
> /usr/local/lib/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f730e6ca483]
> /usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x6344)[0x7f73034dc344]
> /usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x8f44)[0x7f73034def44]
> /usr/local/lib/libglusterfs.so.0(+0x925aa)[0x7f730e96c5aa]
> /lib64/libpthread.so.0(+0x7dc5)[0x7f730d96ddc5]
>
> Core dump:
>
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `/usr/local/sbin/glusterfs
> --volfile-server=10.217.242.32 --volfile-id=/testSF1'.
> Program terminated with signal 11, Segmentation fault.
> #0  list_del_init (old=0x7f72f4003de0) at
> ../../../../libglusterfs/src/list.h:87
> 87              old->prev->next = old->next;
>
> bt
>
> #0  list_del_init (old=0x7f72f4003de0) at
> ../../../../libglusterfs/src/list.h:87
> #1  __shard_update_shards_inode_list
> (linked_inode=linked_inode@entry=0x7f72fa7a6e48,
> this=this@entry=0x7f72fc0090c0, base_inode=0x7f72fa7a5108,
> block_num=block_num@entry=10) at shard.c:469
> #2  0x7f7300930b1e in shard_link_block_inode
> (local=local@entry=0x7f730ec4ed00, block_num=10, inode=<optimized out>,
> buf=buf@entry=0x7f730180c990) at shard.c:1559
> #3  0x7f7300930b6d in shard_common_lookup_shards_cbk
> (frame=0x7f730c611204, cookie=<optimized out>, this=0x7f72fc0090c0,
> op_ret=0,
> op_errno=<optimized out>, inode=<optimized out>,
> buf=0x7f730180c990, xdata=0x7f730c029cdc, postparent=0x7f730180ca00)
> at shard.c:1596
> #4  0x7f7300b8e240 in dht_lookup_cbk (frame=0x7f730c61dc40,
> cookie=<optimized out>, this=<optimized out>, op_ret=0, op_errno=22,
> inode=0x7f72fa7a6e48, stbuf=0x7f730180c990, xattr=0x7f730c029cdc,
> postparent=0x7f730180ca00) at dht-common.c:2362
> #5  0x7f7300df4989 in client3_3_lookup_cbk (req=<optimized out>,
> iov=<optimized out>, count=<optimized out>, myframe=0x7f730c616ab4)
> at client-rpc-fops.c:2988
> #6  0x7f730e6ce010 in rpc_clnt_handle_reply
> (clnt=clnt@entry=0x7f72fc04c040, pollin=pollin@entry=0x7f72fc079560)
> at rpc-clnt.c:796
> #7  0x7f730e6ce2ef in rpc_clnt_notify (trans=<optimized out>,
> mydata=0x7f72fc04c070, event=<optimized out>, data=0x7f72fc079560)
> at rpc-clnt.c:967
> #8  0x7f730e6ca483 in rpc_transport_notify
> (this=this@entry=0x7f72fc05bd30,
> event=event@entry=RPC_TRANSPORT_MSG_RECEIVED,
> data=data@entry=0x7f72fc079560) at rpc-transport.c:546
> #9  0x7f73034dc344 in socket_event_poll_in
> 

Re: [Gluster-devel] Release 3.10 feature proposal:: Statedump for libgfapi

2016-12-11 Thread Niels de Vos
On Fri, Dec 09, 2016 at 06:20:22PM +0530, Rajesh Joseph wrote:
> Gluster should have some provision to take statedump of gfapi applications.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1169302

A part of this feature should be to find out how applications that use
libgfapi expect to trigger debugging like this. Doing a statedump from
the gluster CLI should not be the main or only option. I agree that it
helps developers who work on Gluster, but we cannot expect users to
trigger statedumps like that.

I think there would be a huge benefit in having an option to communicate
with libgfapi through some minimal form of local IPC. It would allow
taking statedumps, and maybe even setting/getting configuration options
for applications that do not expose these themselves (yet).

The communication should be as simple and stable as possible. This could
be the only working interface for getting something done inside gfapi
(worst-case scenario). There is no need for this to be a full-featured
interface; a named pipe (fifo) with libgfapi as the reader may well be
sufficient. A simple (text) command written to it could create
statedumps and eventually other files on request.
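
As a strawman, the reader side could be as small as this (all names
here, including the fifo path and do_statedump(), are illustrative and
not an existing libgfapi API):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void
do_statedump (void)
{
        fprintf (stderr, "statedump requested\n");  /* placeholder */
}

int
main (void)
{
        const char *path = "/var/run/gfapi-debug.fifo";
        char buf[64];
        ssize_t n;

        mkfifo (path, 0600);                     /* EEXIST is fine */

        for (;;) {
                int fd = open (path, O_RDONLY);  /* blocks for a writer */
                if (fd < 0)
                        return 1;
                while ((n = read (fd, buf, sizeof (buf) - 1)) > 0) {
                        buf[n] = '\0';
                        if (strncmp (buf, "statedump", 9) == 0)
                                do_statedump ();
                }
                close (fd);                      /* writer gone; wait again */
        }
}

An application (or admin) would then simply run:

    echo statedump > /var/run/gfapi-debug.fifo

In the library this loop would of course run on its own thread, started
from glfs_init() or on demand.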

Enabling/disabling or even selecting the possibilities for debugging
could be configured through new functions in libgfapi, and even
environment variables.

What do others think? Would this be useful?

Thanks,
Niels



Re: [Gluster-devel] "du" a large count of files in a directory causes mounted glusterfs filesystem coredump

2016-12-11 Thread Raghavendra Gowdappa


- Original Message -
> From: "George Lian (Nokia - CN/Hangzhou)" 
> To: Gluster-devel@gluster.org, "Carlos Chinea (Nokia - FI/Espoo)" 
> , "Kari Hautio (Nokia -
> FI/Espoo)" , linux-fsde...@vger.kernel.org
> Cc: "Bingxuan Zhang (Nokia - CN/Hangzhou)" , 
> "Cynthia Zhou (Nokia - CN/Hangzhou)"
> , "Deqian Li (Nokia - CN/Hangzhou)" 
> , "Jan Zizka (Nokia - CZ/Prague)"
> , "Xiaohui Bao (Nokia - CN/Hangzhou)" 
> 
> Sent: Friday, December 9, 2016 2:50:44 PM
> Subject: Re: [Gluster-devel] "du" a large count of files in a directory
> causes mounted glusterfs filesystem coredump
> 
> The life cycle of an inode in glusterfs is described in
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/datastructure-inode/
> It says that “An inode is removed from the inode table and eventually
> destroyed when an unlink or rmdir operation is performed on a
> file/directory, or the lru limit of the inode table has been exceeded.”
> Now the default value for the inode lru limit in glusterfs is 32k.
> When we “du” or “ls -R” a large number of files in a directory with more
> than 32K entries, it can easily exceed the lru limit.

The glusterfs mount process has an infinite lru limit. The reason is that
glusterfs passes the address of the inode object as the "nodeid" (aka
identifier) representing the inode. For all future references to the inode,
the kernel just sends back this nodeid. So, glusterfs cannot free up the
inode as long as the kernel remembers it. In other words, the inode table
size in the mount process is dependent on the dentry-cache or inode table
size in the fuse kernel module. So, for an inode to be freed up in the
mount process:
1. There should not be any ongoing ops referring to the inode, and
2. The kernel should send as many forgets as the number of lookups it has
done.
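
A simplified model of this lookup/forget accounting (illustrative only,
not the actual fuse-bridge code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
        uint64_t nlookup;  /* lookups the kernel still remembers */
} inode_t;

/* each successful LOOKUP reply increments the count ... */
static void
on_lookup (inode_t *in)
{
        in->nlookup++;
}

/* ... and FORGET says how many references the kernel drops; only
 * when the count reaches zero may the inode actually be destroyed */
static void
on_forget (inode_t **in, uint64_t nforgets)
{
        (*in)->nlookup -= nforgets;
        if ((*in)->nlookup == 0) {
                free (*in);
                *in = NULL;  /* safe: kernel holds no stale nodeid now */
        }
}

int
main (void)
{
        inode_t *in = calloc (1, sizeof (*in));
        on_lookup (in);
        on_lookup (in);
        on_forget (&in, 2);
        printf ("inode %s\n", in ? "still alive" : "freed");
        return 0;
}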


> @gluster-expert, when glusterfs destroys an inode due to the LRU limit,
> does glusterfs notify the kernel? (from my study so far, it seems not)

No, it does not. As explained above, the mount process never destroys an
inode as long as the kernel remembers it.

> @linux-fsdevel-expert, could you please clarify the inode recycling
> mechanism, or the fuse-forget FOP for inodes, for us?
> Is it possible that the kernel frees the inode (which triggers a
> fuse-forget to glusterfs) later than glusterfs destroys it due to the
> lru limit?

In the mount process an inode is never destroyed through the lru
mechanism, as the limit is infinite.

> If that is possible, then the nodeid (which is converted from the memory
> address in glusterfs) may be stale; when it is passed back to glusterfs
> userspace, glusterfs simply converts the u64 nodeid to a memory address
> and tries to access it, which finally leads to an invalid access and a
> coredump!

That's precisely the reason why we keep an infinite lru limit for the
glusterfs client process. Though please note that we do have a finite lru
limit for the brick process, the nfsv3 server, etc.
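
The nodeid mapping itself is essentially a pointer cast (a simplified
sketch of what the fuse xlator does; the real code lives under
xlators/mount/fuse/src and carries extra arguments):

#include <stdint.h>

typedef struct inode inode_t;

static uint64_t
inode_to_fuse_nodeid (inode_t *inode)
{
        return (uint64_t) (uintptr_t) inode;  /* handed to the kernel */
}

static inode_t *
fuse_ino_to_inode (uint64_t nodeid)
{
        /* no table lookup, no validation: a stale nodeid becomes a
         * dangling pointer -- hence the infinite lru limit on mounts */
        return (inode_t *) (uintptr_t) nodeid;
}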

regards,
Raghavendra

> Thanks & Best Regards,
> George
> _
> From: Lian, George (Nokia - CN/Hangzhou)
> Sent: Friday, December 09, 2016 9:49 AM
> To: 'Gluster-devel@gluster.org' 
> Cc: Zhou, Cynthia (Nokia - CN/Hangzhou) ; Bao,
> Xiaohui (Nokia - CN/Hangzhou) ; Zhang, Bingxuan
> (Nokia - CN/Hangzhou) ; Li, Deqian (Nokia -
> CN/Hangzhou) 
> Subject: "du" a large count of files in a directory causes mounted
> glusterfs filesystem coredump
> Hi, GlusterFS Expert,
> Now we have an issue when running the “du” command on a directory with a
> large number of files; in our environment there are more than 150k files
> in the directory.
> # df -i .
> Filesystem Inodes IUsed IFree IUse% Mounted on
> 169.254.0.23:/home 261888 154146 107742 59% /home
> When we run “du” in this directory, it very easily causes the glusterfs
> process to coredump, and the coredump backtrace shows the crash is always
> caused by the do_forget API, though the last call differs from time to
> time. Please see the detailed backtrace at the end of this mail.
> From my investigation, the issue may be caused by an unsafe call of the
> API “fuse_ino_to_inode”.
> I am only guessing that in some unexpected case, when “fuse_ino_to_inode”
> is called with a nodeid that came from a forget FOP, it obtains the
> address by simply mapping the uint64 to a memory address; that inode may
> just have been destroyed because “the lru limit of the inode table has
> been exceeded” in our many-files case, so the operation may not be safe.
> The coredump backtraces also show several different cases in which the
> core occurred.
> Could you please share your comments on my investigation?
> And BTW, I have some questions:
> 
> 
> 1. How does the inode number in the “stat” command map to the inode in
>