[ https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17343725#comment-17343725 ]

Renukaprasad C commented on HDFS-14703:
---------------------------------------

[~shv] Thanks for sharing the patch.
I applied the patch on trunk and tested it; the results are similar with and
without the patch. I have attached both results below. Did I miss something?

With Patch:
{code:java}
~/hadoop-3.4.0-SNAPSHOT/bin$ ./hdfs org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs hdfs://localhost:9000 -op mkdirs -threads 200 -dirs 2000000 -dirsPerDir 128
2021-05-13 01:57:41,279 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-05-13 01:57:41,976 INFO namenode.NNThroughputBenchmark: Starting benchmark: mkdirs
2021-05-13 01:57:42,065 INFO namenode.NNThroughputBenchmark: Generate 2000000 inputs for mkdirs
2021-05-13 01:57:43,385 INFO namenode.NNThroughputBenchmark: Log level = ERROR
2021-05-13 01:57:44,079 INFO namenode.NNThroughputBenchmark: Starting 2000000 mkdirs(s).
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark:
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: --- mkdirs inputs ---
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: nrDirs = 2000000
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: nrThreads = 200
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: nrDirsPerDir = 128
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: --- mkdirs stats ---
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: # operations: 2000000
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark: Elapsed Time: 1095122
2021-05-13 02:15:59,958 INFO namenode.NNThroughputBenchmark:  Ops per sec: 1826.2805422592187
2021-05-13 02:15:59,959 INFO namenode.NNThroughputBenchmark: Average Time: 108
{code}

Without Patch:
{code:java}
~/hadoop-3.4.0-SNAPSHOT/bin$ ./hdfs org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs hdfs://localhost:9000 -op mkdirs -threads 200 -dirs 2000000 -dirsPerDir 128
2021-05-13 03:25:53,243 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-05-13 03:25:54,046 INFO namenode.NNThroughputBenchmark: Starting benchmark: mkdirs
2021-05-13 03:25:54,117 INFO namenode.NNThroughputBenchmark: Generate 2000000 inputs for mkdirs
2021-05-13 03:25:55,076 INFO namenode.NNThroughputBenchmark: Log level = ERROR
2021-05-13 03:25:55,163 INFO namenode.NNThroughputBenchmark: Starting 2000000 mkdirs(s).
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark:
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: --- mkdirs inputs ---
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: nrDirs = 2000000
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: nrThreads = 200
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: nrDirsPerDir = 128
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: --- mkdirs stats ---
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: # operations: 2000000
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: Elapsed Time: 1064420
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark:  Ops per sec: 1878.9575543488472
2021-05-13 03:43:40,125 INFO namenode.NNThroughputBenchmark: Average Time: 105
{code}
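
The two runs differ by under 3%, which looks like run-to-run noise rather than a regression or an improvement. As a quick sanity check of the reported numbers (my own throwaway snippet, not part of the benchmark), Ops per sec is just # operations divided by Elapsed Time in seconds:

{code:java}
public class ThroughputCheck {
  public static void main(String[] args) {
    // Ops per sec = # operations / (Elapsed Time in ms / 1000)
    long ops = 2_000_000L;
    double withPatch = ops / (1_095_122 / 1000.0);     // ~1826.28, matches the log
    double withoutPatch = ops / (1_064_420 / 1000.0);  // ~1878.96, matches the log
    System.out.printf("with = %.2f, without = %.2f, diff = %.2f%%%n",
        withPatch, withoutPatch, 100 * (withoutPatch - withPatch) / withoutPatch);
  }
}
{code}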


I got similar results when I tried with "file" as well, but in this case the
partitions were empty.

{code:java}
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark:
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: --- mkdirs inputs ---
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: nrDirs = 2000000
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: nrThreads = 200
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: nrDirsPerDir = 128
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: --- mkdirs stats ---
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: # operations: 2000000
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: Elapsed Time: 845625
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark:  Ops per sec: 2365.1145602365114
2021-05-13 09:20:36,921 INFO namenode.NNThroughputBenchmark: Average Time: 84
2021-05-13 09:20:36,922 INFO namenode.FSEditLog: Ending log segment 1465676, 2015633
2021-05-13 09:20:36,987 INFO namenode.FSEditLog: Number of transactions: 549959 Total time for transactions(ms): 2840 Number of transactions batched in Syncs: 545346 Number of syncs: 4614 SyncTimes(ms): 240432
2021-05-13 09:20:36,996 INFO namenode.FileJournalManager: Finalizing edits file /home/renu/hadoop-3.4.0-SNAPSHOT/hdfs/namenode/current/edits_inprogress_0000000000001465676 -> /home/renu/hadoop-3.4.0-SNAPSHOT/hdfs/namenode/current/edits_0000000000001465676-0000000000002015634
2021-05-13 09:20:36,998 INFO namenode.FSEditLog: FSEditLogAsync was interrupted, exiting
2021-05-13 09:20:37,010 INFO blockmanagement.CacheReplicationMonitor: Shutting down CacheReplicationMonitor
2021-05-13 09:20:37,010 INFO ipc.Server: Stopping server on 34541
2021-05-13 09:20:37,013 INFO ipc.Server: Stopping IPC Server listener on 0
2021-05-13 09:20:37,013 INFO ipc.Server: Stopping IPC Server Responder
2021-05-13 09:20:37,066 INFO handler.ContextHandler: Stopped o.e.j.w.WebAppContext@4bf48f6{hdfs,/,null,STOPPED}{file:/home/renu/hadoop-3.4.0-SNAPSHOT/share/hadoop/hdfs/webapps/hdfs}
2021-05-13 09:20:37,069 INFO server.AbstractConnector: Stopped ServerConnector@d554c5f{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2021-05-13 09:20:37,069 INFO server.session: node0 Stopped scavenging
2021-05-13 09:20:37,069 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@268f106e{static,/static,file:///home/renu/hadoop-3.4.0-SNAPSHOT/share/hadoop/hdfs/webapps/static/,STOPPED}
2021-05-13 09:20:37,069 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@7ce026d3{logs,/logs,file:///home/renu/hadoop-3.4.0-SNAPSHOT/logs/,STOPPED}
2021-05-13 09:20:37,070 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
2021-05-13 09:20:37,070 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
2021-05-13 09:20:37,070 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2021-05-13 09:20:37,070 ERROR util.GSet: Total GSet size = -9
2021-05-13 09:20:37,070 ERROR util.GSet: Number of partitions = 256
2021-05-13 09:20:37,072 ERROR util.GSet: Partition #0    key: [0, 16385]         size: 1         first: [0, 16385]
2021-05-13 09:20:37,073 ERROR util.GSet: Partition #1    key: [1, 16385]         size: 0         first: []
...
2021-05-13 09:20:37,097 ERROR util.GSet: Partition #255  key: [255, 16385]       size: 0         first: []
2021-05-13 09:20:37,097 ERROR util.GSet: Partition sizes: min = 0, avg = 0, max = 1, sum = 2
2021-05-13 09:20:37,097 ERROR util.GSet: Number of partitions: empty = 254, full = 0
{code}
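
Reading the GSet dump: after 2,000,000 mkdirs the 256 partition sizes sum to 2, with 254 partitions empty (the negative Total GSet size = -9 also looks suspicious), so it seems the created inodes were never routed into the partitioned map. My mental model of the intended structure is sketched below; the class and names are my own illustration for discussion, not the actual code from the POC patch:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Minimal sketch of a map split into fixed partitions, each guarded by its
 * own lock. Illustration only -- not the classes from the POC patch.
 */
public class PartitionedMapSketch<K, V> {
  private final Map<K, V>[] partitions;
  private final ReentrantReadWriteLock[] locks;

  @SuppressWarnings("unchecked")
  public PartitionedMapSketch(int numPartitions) {
    partitions = new Map[numPartitions];
    locks = new ReentrantReadWriteLock[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      partitions[i] = new HashMap<>();
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  /** Every key maps to exactly one partition. */
  private int indexOf(K key) {
    return (key.hashCode() & Integer.MAX_VALUE) % partitions.length;
  }

  /** Writers touching different partitions never contend on the same lock. */
  public void put(K key, V value) {
    int i = indexOf(key);
    locks[i].writeLock().lock();
    try {
      partitions[i].put(key, value);
    } finally {
      locks[i].writeLock().unlock();
    }
  }

  /** Mirrors the shutdown dump: sum of partition sizes and empty count. */
  public void dumpSizes() {
    int sum = 0, empty = 0;
    for (Map<K, V> p : partitions) {
      sum += p.size();
      if (p.isEmpty()) empty++;
    }
    System.out.println("sum = " + sum + ", empty = " + empty);
  }

  public static void main(String[] args) {
    PartitionedMapSketch<String, String> map = new PartitionedMapSketch<>(256);
    for (int i = 0; i < 2_000_000; i++) {
      map.put("/nnt/dir" + i, "inode");  // hypothetical benchmark-like keys
    }
    map.dumpSizes();  // prints sum = 2000000, empty = 0
  }
}
{code}

Under that model each successful mkdirs adds one entry to some partition, so after the run I would expect the sizes to sum to roughly the number of created directories; sum = 2 suggests the benchmark's inodes went into the regular INode map instead, i.e. the partitioned path may not have been active in my setup.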

Could you suggest whether I missed any steps here?

> NameNode Fine-Grained Locking via Metadata Partitioning
> -------------------------------------------------------
>
>                 Key: HDFS-14703
>                 URL: https://issues.apache.org/jira/browse/HDFS-14703
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs, namenode
>            Reporter: Konstantin Shvachko
>            Priority: Major
>         Attachments: 001-partitioned-inodeMap-POC.tar.gz, 
> 002-partitioned-inodeMap-POC.tar.gz, 003-partitioned-inodeMap-POC.tar.gz, 
> NameNode Fine-Grained Locking.pdf, NameNode Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions each having a separate lock. Intended to improve 
> performance of NameNode write operations.


