I got some logs from the client side:
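In the PARTI excerpt below, every create follows the same sequence: the fresh lookup on the hashed subvolume returns op_errno 2 (ENOENT), DHT winds a lookup to all 2 subvols, and only then does AFR take its entry/inode locks and proceed to the FOP. To gauge how often that extra lookup round trip happens across a full log, something like the following can count the "winding lookup call" entries (the class and method names are my own, not GlusterFS code):

```java
import java.util.List;
import java.util.regex.Pattern;

// Counts DHT "lookup everywhere" fallbacks in a GlusterFS client DEBUG log.
// Class and method names are illustrative, not part of GlusterFS.
public class DhtLogScan {
    private static final Pattern WINDING =
            Pattern.compile("dht_lookup_everywhere\\].*winding lookup call to \\d+ subvols");

    public static int countLookupEverywhere(List<String> lines) {
        int count = 0;
        for (String line : lines) {
            // One match per file whose fresh lookup missed on the hashed subvolume
            if (WINDING.matcher(line).find()) {
                count++;
            }
        }
        return count;
    }
}
```

Each hit corresponds to one file that paid the full fan-out lookup cost before its create could proceed.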
*PARTI*

[2015-02-10 08:23:01.613646] D [MSGID: 0] [dht-common.c:2183:dht_lookup] 0-r2dr3-dht: Calling fresh lookup for /1w_test00/721ea549-8e2b-4770-9c52-2cc0c4ef872d on r2dr3-replicate-0
[2015-02-10 08:23:01.613947] D [afr-lk-common.c:447:transaction_lk_op] 0-r2dr3-replicate-1: lk op is for a transaction
[2015-02-10 08:23:01.614417] D [MSGID: 0] [dht-common.c:1819:dht_lookup_cbk] 0-r2dr3-dht: fresh_lookup returned for /1w_test00/721ea549-8e2b-4770-9c52-2cc0c4ef872d with op_ret -1 and op_errno 2
[2015-02-10 08:23:01.614444] D [MSGID: 0] [dht-common.c:1823:dht_lookup_cbk] 0-r2dr3-dht: Entry /1w_test00/721ea549-8e2b-4770-9c52-2cc0c4ef872d missing on subvol r2dr3-replicate-0
[2015-02-10 08:23:01.614463] D [MSGID: 0] [dht-common.c:1608:dht_lookup_everywhere] 0-r2dr3-dht: winding lookup call to 2 subvols
[2015-02-10 08:23:01.615067] D [MSGID: 0] [dht-common.c:1414:dht_lookup_everywhere_cbk] 0-r2dr3-dht: returned with op_ret -1 and op_errno 2 (/1w_test00/721ea549-8e2b-4770-9c52-2cc0c4ef872d) from subvol r2dr3-replicate-1
[2015-02-10 08:23:01.615117] D [MSGID: 0] [dht-common.c:1414:dht_lookup_everywhere_cbk] 0-r2dr3-dht: returned with op_ret -1 and op_errno 2 (/1w_test00/721ea549-8e2b-4770-9c52-2cc0c4ef872d) from subvol r2dr3-replicate-0
[2015-02-10 08:23:01.615139] D [MSGID: 0] [dht-common.c:1086:dht_lookup_everywhere_done] 0-r2dr3-dht: STATUS: hashed_subvol r2dr3-replicate-0 cached_subvol null
[2015-02-10 08:23:01.615158] D [MSGID: 0] [dht-common.c:1147:dht_lookup_everywhere_done] 0-r2dr3-dht: There was no cached file and unlink on hashed is not skipped /1w_test00/721ea549-8e2b-4770-9c52-2cc0c4ef872d
[2015-02-10 08:23:01.615761] D [afr-transaction.c:1177:afr_post_nonblocking_entrylk_cbk] 0-r2dr3-replicate-0: Non blocking entrylks done. Proceeding to FOP
[2015-02-10 08:23:01.616450] D [afr-lk-common.c:447:transaction_lk_op] 0-r2dr3-replicate-0: lk op is for a transaction
[2015-02-10 08:23:01.617153] D [afr-transaction.c:1127:afr_post_nonblocking_inodelk_cbk] 0-r2dr3-replicate-0: Non blocking inodelks done. Proceeding to FOP
[2015-02-10 08:23:01.618365] D [MSGID: 0] [dht-common.c:2183:dht_lookup] 0-r2dr3-dht: Calling fresh lookup for /1w_test00/70e8e730-0442-47c3-94c6-e2d13506427f on r2dr3-replicate-0
[2015-02-10 08:23:01.618670] D [afr-lk-common.c:447:transaction_lk_op] 0-r2dr3-replicate-0: lk op is for a transaction
[2015-02-10 08:23:01.619054] D [MSGID: 0] [dht-common.c:1819:dht_lookup_cbk] 0-r2dr3-dht: fresh_lookup returned for /1w_test00/70e8e730-0442-47c3-94c6-e2d13506427f with op_ret -1 and op_errno 2
[2015-02-10 08:23:01.619077] D [MSGID: 0] [dht-common.c:1823:dht_lookup_cbk] 0-r2dr3-dht: Entry /1w_test00/70e8e730-0442-47c3-94c6-e2d13506427f missing on subvol r2dr3-replicate-0
[2015-02-10 08:23:01.619096] D [MSGID: 0] [dht-common.c:1608:dht_lookup_everywhere] 0-r2dr3-dht: winding lookup call to 2 subvols
[2015-02-10 08:23:01.619812] D [MSGID: 0] [dht-common.c:1414:dht_lookup_everywhere_cbk] 0-r2dr3-dht: returned with op_ret -1 and op_errno 2 (/1w_test00/70e8e730-0442-47c3-94c6-e2d13506427f) from subvol r2dr3-replicate-1
[2015-02-10 08:23:01.619910] D [MSGID: 0] [dht-common.c:1414:dht_lookup_everywhere_cbk] 0-r2dr3-dht: returned with op_ret -1 and op_errno 2 (/1w_test00/70e8e730-0442-47c3-94c6-e2d13506427f) from subvol r2dr3-replicate-0
[2015-02-10 08:23:01.619953] D [MSGID: 0] [dht-common.c:1086:dht_lookup_everywhere_done] 0-r2dr3-dht: STATUS: hashed_subvol r2dr3-replicate-0 cached_subvol null
[2015-02-10 08:23:01.619971] D [MSGID: 0] [dht-common.c:1147:dht_lookup_everywhere_done] 0-r2dr3-dht: There was no cached file and unlink on hashed is not skipped /1w_test00/70e8e730-0442-47c3-94c6-e2d13506427f
[2015-02-10 08:23:01.620542] D [afr-transaction.c:1177:afr_post_nonblocking_entrylk_cbk] 0-r2dr3-replicate-0: Non blocking entrylks done. Proceeding to FOP
[2015-02-10 08:23:01.621220] D [afr-lk-common.c:447:transaction_lk_op] 0-r2dr3-replicate-0: lk op is for a transaction
[2015-02-10 08:23:01.621946] D [afr-transaction.c:1127:afr_post_nonblocking_inodelk_cbk] 0-r2dr3-replicate-0: Non blocking inodelks done. Proceeding to FOP
[2015-02-10 08:23:01.623192] D [MSGID: 0] [dht-common.c:2183:dht_lookup] 0-r2dr3-dht: Calling fresh lookup for /1w_test00/65dc668b-4b42-45e8-942d-dcb283c206a0 on r2dr3-replicate-1
[2015-02-10 08:23:01.623540] D [afr-lk-common.c:447:transaction_lk_op] 0-r2dr3-replicate-0: lk op is for a transaction
[2015-02-10 08:23:01.623948] D [MSGID: 0] [dht-common.c:1819:dht_lookup_cbk] 0-r2dr3-dht: fresh_lookup returned for /1w_test00/65dc668b-4b42-45e8-942d-dcb283c206a0 with op_ret -1 and op_errno 2
[2015-02-10 08:23:01.623975] D [MSGID: 0] [dht-common.c:1823:dht_lookup_cbk] 0-r2dr3-dht: Entry /1w_test00/65dc668b-4b42-45e8-942d-dcb283c206a0 missing on subvol r2dr3-replicate-1
[2015-02-10 08:23:01.623997] D [MSGID: 0] [dht-common.c:1608:dht_lookup_everywhere] 0-r2dr3-dht: winding lookup call to 2 subvols
[2015-02-10 08:23:01.624645] D [MSGID: 0] [dht-common.c:1414:dht_lookup_everywhere_cbk] 0-r2dr3-dht: returned with op_ret -1 and op_errno 2 (/1w_test00/65dc668b-4b42-45e8-942d-dcb283c206a0) from subvol r2dr3-replicate-1
[2015-02-10 08:23:01.624746] D [MSGID: 0] [dht-common.c:1414:dht_lookup_everywhere_cbk] 0-r2dr3-dht: returned with op_ret -1 and op_errno 2 (/1w_test00/65dc668b-4b42-45e8-942d-dcb283c206a0) from subvol r2dr3-replicate-0
[2015-02-10 08:23:01.624773] D [MSGID: 0] [dht-common.c:1086:dht_lookup_everywhere_done] 0-r2dr3-dht: STATUS: hashed_subvol r2dr3-replicate-1 cached_subvol null
[2015-02-10 08:23:01.624792] D [MSGID: 0] [dht-common.c:1147:dht_lookup_everywhere_done] 0-r2dr3-dht: There was no cached file and unlink on hashed is not skipped /1w_test00/65dc668b-4b42-45e8-942d-dcb283c206a0
[2015-02-10 08:23:01.625299] D [afr-transaction.c:1177:afr_post_nonblocking_entrylk_cbk] 0-r2dr3-replicate-1: Non blocking entrylks done. Proceeding to FOP
[2015-02-10 08:23:01.625971] D [afr-lk-common.c:447:transaction_lk_op] 0-r2dr3-replicate-1: lk op is for a transaction
[2015-02-10 08:23:01.626690] D [afr-transaction.c:1127:afr_post_nonblocking_inodelk_cbk] 0-r2dr3-replicate-1: Non blocking inodelks done. Proceeding to FOP

*PARTII*

[2015-02-10 08:48:30.396145] D [rpc-clnt-ping.c:255:rpc_clnt_start_ping] 0-r2dr3-client-1: returning as transport is already disconnected OR there are no frames (0 || 0)
[2015-02-10 08:48:37.396576] D [rpc-clnt-ping.c:255:rpc_clnt_start_ping] 0-r2dr3-client-3: returning as transport is already disconnected OR there are no frames (0 || 0)
[2015-02-10 08:48:38.396684] D [rpc-clnt-ping.c:255:rpc_clnt_start_ping] 0-r2dr3-client-2: returning as transport is already disconnected OR there are no frames (0 || 0)
[2015-02-10 08:49:13.398692] D [rpc-clnt-ping.c:255:rpc_clnt_start_ping] 0-r2dr3-client-0: returning as transport is already disconnected OR there are no frames (0 || 0)
[2015-02-10 08:49:13.398751] D [rpc-clnt-ping.c:255:rpc_clnt_start_ping] 0-r2dr3-client-1: returning as transport is already disconnected OR there are no frames (0 || 0)
[2015-02-10 08:49:20.399164] D [rpc-clnt-ping.c:255:rpc_clnt_start_ping] 0-r2dr3-client-3: returning as transport is already disconnected OR there are no frames (0 || 0)

*Volume INFO*

Volume Name: r2dr3
Type: Distributed-Replicate
Volume ID: b30ffc1a-9a6c-45e7-a847-97efd9d864d2
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs_data/r2dr3
Brick2: node2:/gfs_data/r2dr3
Brick3: node3:/gfs_data/r2dr3
Brick4: node4:/gfs_data/r2dr3
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
diagnostics.brick-log-level: DEBUG
diagnostics.client-log-level: DEBUG

2015-02-06 18:32 GMT+08:00 Ason Hu <[email protected]>:
> Hi Pranith,
>
> Gluster Version:
> glusterfs
> 3.6.2 built on Jan 22 2015 12:58:10
>
> - 100,000 file test on XFS directly:
>
> [root@glusterfs-node1 gfs_data]# java FileCreationAndSearchtest 10w_test 100000
> Creation finished!! Time: *8961(ms)*
> File Search Finished!! 33223 searches, time: 124ms, misses: 0
> NIO Search Finished!! 33223 searches, time: 367ms, misses: 0
>
> - 100,000 file test from the client (Distributed Replicated):
>
> [root@glusterfs-client01 drvol]# java FileCreationAndSearchtest 10w_test01 100000
> Start!!
> Creation finished!! Time: *265713(ms)*
> File Search Finished!! 33453 searches, time: 18403(ms), misses: 0
> NIO Search Finished!! 33453 searches, time: 17827(ms), misses: 0
>
> After many small files have been created (≒200 million), it takes more than 1,000,000(ms) to finish the 100,000-file test.
>
> I've flushed all the volumes and created a new one to reproduce the condition again.
>
> 2015-02-05 19:19 GMT+08:00 Pranith Kumar Karampuri <[email protected]>:
>>
>> On 02/02/2015 02:42 PM, Ason Hu wrote:
>>
>> Hi,
>>
>> In my small-file test, 3 types of volume are created in the same GlusterFS cluster (4 nodes):
>>
>> - Distributed
>> - Distributed Replicated
>> - Distributed Striped
>>
>> 10 million small files (8 bytes each) are created and searched in each test cycle.
>>
>> Node info:
>>
>> - OS: CentOS 6.6 64-bit
>> - RAM: 64 GB
>> - HDD: 4 TB, 7200 RPM, SAS x6, RAID 0
>> - Filesystem: XFS, BS=4K
>>
>> Problem:
>>
>> It occurs when the total file count reaches ≒200 million (sum of the 3 volumes):
>> the file creation speed drops from ≒1500 to ≒100.
>>
>> If you do this test directly on the XFS partition, what is the kind of
>> performance difference you see?
>>
>> Pranith
>>
>> Though creation has become slow, search speed and throughput are still good.
>> Each type of volume seems to have the same problem; I don't know what causes it.
>>
>> Workaround:
>>
>> I tried creating a new volume (Distributed | Distributed Replicated | Distributed Striped) and mounting it: problem solved!
>> But the old volume's write speed is still sad..
>>
>> I'd also tried increasing the volume settings below, but it didn't help at all:
>>
>> - performance.cache-size
>> - performance.io-thread-count
>>
>> I didn't enable the volume quota option.
>> Does a GlusterFS volume have some kind of limitation on file creation?
>>
>> Regards,
>> Nosa Hu
>>
>> _______________________________________________
>> Gluster-users mailing list
>> [email protected]
>> http://www.gluster.org/mailman/listinfo/gluster-users
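The FileCreationAndSearchtest source isn't included in the thread; assuming it does what the output above suggests (timed creation of N 8-byte files, then an existence check over a sample), a minimal benchmark of the same shape might look like this. The class, method, and file-naming choices are my guesses, not the original program:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Rough stand-in for the (unpublished) FileCreationAndSearchtest:
// create `count` 8-byte files in `dir`, then re-check a sample of them.
public class SmallFileBench {
    static final byte[] PAYLOAD = "01234567".getBytes(StandardCharsets.US_ASCII); // 8-byte files

    // Returns elapsed creation time in milliseconds.
    public static long create(Path dir, int count) throws IOException {
        Files.createDirectories(dir);
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            Files.write(dir.resolve("f" + i), PAYLOAD);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Checks every `step`-th file for existence; returns the number of misses.
    public static int search(Path dir, int count, int step) {
        int misses = 0;
        for (int i = 0; i < count; i += step) {
            if (!Files.exists(dir.resolve("f" + i))) {
                misses++;
            }
        }
        return misses;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args[0]);
        int count = Integer.parseInt(args[1]);
        System.out.println("Creation finished!! Time: " + create(dir, count) + "(ms)");
        System.out.println("misses: " + search(dir, count, 3));
    }
}
```

Running it once against a directory on the FUSE mount and once against the brick's local XFS directory should give numbers comparable to the two result sets quoted above.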
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
