[ 
https://issues.apache.org/jira/browse/HDFS-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8562:
-------------------------
    Comment: was deleted

(was: GitHub user hash-X opened a pull request:

    https://github.com/apache/hadoop/pull/42

    AltFileInputStream.java replaces FileInputStream.java in apache/hadoop HDFS

    
    
    A brief description:
    Long stop-the-world GC pauses due to FinalReference processing have been
    observed. So, where do those FinalReference objects come from?
    
    1 : `Finalizer`
    2 : `FileInputStream`
    
    How can this problem be solved?
    
    Here is the detailed description, and I give a solution for it:
    https://issues.apache.org/jira/browse/HDFS-8562
    
    FileInputStream has a finalize() method, and it can cause long GC pauses
    (in our tests, G1 was the collector). So AltFileInputStream has no
    finalize(): a new design for an input stream usable on both Windows and
    non-Windows platforms.
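The design sketched in the description might look roughly like the following. This is an illustration only, not the actual patch: the class name AltFileInputStreamSketch and the delegation to Files.newInputStream() are assumptions made for the sketch (the real AltFileInputStream in the PR also has to handle the Windows case).

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch only: a finalizer-free alternative to FileInputStream.
// Unlike java.io.FileInputStream, whose finalize() causes a FinalReference
// to be registered for every instance at allocation time, this class relies
// solely on an explicit close() and delegates to a channel-backed stream.
public class AltFileInputStreamSketch extends InputStream {
    private final InputStream in; // channel-backed, no finalize()

    public AltFileInputStreamSketch(Path path) throws IOException {
        // Files.newInputStream() returns a stream with no finalizer
        this.in = Files.newInputStream(path);
    }

    @Override
    public int read() throws IOException {
        return in.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return in.read(b, off, len);
    }

    @Override
    public void close() throws IOException {
        in.close(); // deliberately no finalize(): cleanup is the caller's job
    }
}
```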


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/hash-X/hadoop AltFileInputStream

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/hadoop/pull/42.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #42
    
----
commit 8d64ef0feb8c8d8f5d5823ccaa428a1b58f6fd04
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-19T09:50:19Z

    Add some code.

commit 3ccf4c70c40cf1ba921d76b949317b5fd6752e3c
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-19T09:56:49Z

    I cannot replace FileInputStream with NewFileInputStream carelessly,
    because the change can break other parts of HDFS. For example, when I
    tested my code on a single-node (pseudo-distributed) cluster, "Failed
    to load an FSImage file." happened when I started the HDFS daemons. At
    first I replaced many FileInputStream uses that appeared as arguments
    or in constructors with NewFileInputStream, but that seems to be wrong,
    so I have to do this in another way.

commit 4da55130586ee9803a09162f7e2482b533aa12d9
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-19T10:30:11Z

    Replacing FIS with NFIS (NewFileInputStream) everywhere is not
    recommended, I think, although Alan Bateman suggested it in
    https://bugs.openjdk.java.net/browse/JDK-8080225
    But testing shows it is not good: some problems may happen, and these
    tests take a long time. Every time I change the source code I need to
    build the whole project (maybe that is not strictly needed, but since I
    install the new Hadoop version on my computer, building the whole
    project is needed). There should be a better way to do this, I think.

commit 06b1509e0ad6dd74cf7c903e6ed6f2ec74d9b341
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-19T11:06:37Z

    Replace FIS with NFIS: if the tests succeed, just do these first. It
    is not as simple as that.

commit 2a79cd9c3b012556af7db5bdbf96663a1c30dcc4
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T02:36:55Z

    Add a LOG info in DataXceiver for test.

commit 436c998ae21b3fe843b2d5ba6506e37ff2a34ab2
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T06:01:41Z

    Rename NewFileInputStream to AltFileInputStream.

commit 14de2788ea2407c6ee252a69cfd3b4f6132c6faa
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T06:16:32Z

    Replace the license header with the Apache one.

commit 387f7624a96716abef2062986f05523199e1927e
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T07:16:25Z

    Remove open method in AltFileInputStream.java.

commit 52b029fac56bc054add1eac836e6cf71a0735304
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T10:14:09Z

    A performance comparison between AltFileInputStream and FileInputStream
    is not done in this commit. The important question, I think, is whether
    an AltFileInputStream can be converted to a FileInputStream safely. I
    have outlined a plan to do it, but I do not know whether it is correct
    for the problem. In the HDFS code, forced conversion to FileInputStream
    happens everywhere.

commit e76d5eb4bf0145a4b28c581ecec07dcee7bae4e5
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T13:11:24Z

    I think the forced conversion is safe, because AltFileInputStream is a
    subclass of InputStream. In the previous version of HDFS, the
    conversions to FileInputStream that I see are safe because those
    methods return InputStream, which is the superclass of FileInputStream.
    In my version of HDFS, InputStream is also the superclass of
    AltFileInputStream, so an AltFileInputStream is an InputStream just as
    a FileInputStream is an InputStream. So I think it is safe. Does
    everyone agree? If not, please give your opinion and tell me what is
    wrong with it. Thank you.

commit 959e91c05c11cc1445513d36ec083707f0bba4e1
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T14:08:51Z

    channel.close() is not needed, because closing the stream will in turn
    cause the channel to be closed.
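The behavior this commit relies on can be checked with a small sketch. The class name below is illustrative, not part of the patch; it only demonstrates that java.io.FileInputStream.close() also closes the channel returned by getChannel(), so an explicit channel.close() is redundant.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical demo class: shows that closing a FileInputStream also
// closes the FileChannel obtained from it, per the java.io contract.
public class ChannelCloseDemo {
    public static boolean channelOpenAfterStreamClose() throws IOException {
        Path tmp = Files.createTempFile("channel-close-demo", ".bin");
        FileInputStream in = new FileInputStream(tmp.toFile());
        FileChannel channel = in.getChannel();
        in.close();                  // closing the stream...
        Files.deleteIfExists(tmp);
        return channel.isOpen();     // ...leaves the channel closed: false
    }

    public static void main(String[] args) throws IOException {
        System.out.println(channelOpenAfterStreamClose()); // prints false
    }
}
```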

commit aa7f82efb29d6ff457dcf6e5b2a74af663682106
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-20T15:57:36Z

    Delete HDFS-8562.patch
    
    old patch.

commit 5e8e15cecb6d159706270dafd6e71dc9816abf19
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-21T02:56:34Z

    Revise AltFileInputStream.java

commit 1d0e7fb8f9cca59480d4a79d62cdf005a804912f
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-22T10:32:45Z

    Make a test.

commit 5daf8730dbc3e94fe766acf83b3ace39e60bc730
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-22T14:32:02Z

    update AltFileInputStream.java

commit f58060ce118d6991824c9c9f622fe54776c2d32d
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-23T06:22:49Z

    Test class of AltFileInputStream.java

commit 76673377a73624ff60322c0ae4cdd61701ea79fc
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-23T08:05:52Z

    new test file.

commit fea1be8a7b21d15b0b0020c12c313eff108acd56
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-23T13:48:51Z

    The latest AltFileInputStream.java

commit 3ad1b9b25f54e68d6816dfbc477a3c4221f445bd
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-23T14:03:03Z

    The latest new file of AFIS

commit 8805c8bd87985d840eb534cad6c5e891b7f6ca41
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T00:59:44Z

    Latest file of AltFileInputStream.java

commit 5cdabc49963652e7bb65aa00a448a5c06793b421
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T01:47:16Z

    update

commit 221d27dbb908cc606a495cd298a7717ad65733aa
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T02:01:17Z

    Soga ....

commit 69d147d98f83ee564df73ae7f6ac6d4bdcf3c816
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T03:02:24Z

    a patch

commit a6de350f5240554015a54e8a07e399aaf6e89b61
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T04:39:21Z

    Latest file.

commit 0860e5da144ea8bfe51d378b212921f365904497
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T04:50:38Z

    update AltFileInputStream.

commit a566a304650cc4b8b3b3e5f700c564169bfe6d5b
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-24T08:22:27Z

    Soga...

commit 02d53c9227ad124290042129cdd872b48c24dd78
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-27T08:21:10Z

    update some code.

commit 090a4c8dd1eee4e01b791e623775b2074f9e6afb
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-27T09:06:32Z

    update some code.

commit 63047775b3b528bf1e831f6c5baa2b8ccdba0f24
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-07-27T09:19:45Z

    update some code.

commit 6d3deaa377b85a3f24bd8d7e7a4d7682e8949ec1
Author: zhangminglei <minglei.l.zh...@intel.com>
Date:   2015-09-25T05:06:17Z

    Add some info

----
)

> HDFS Performance is impacted by FileInputStream Finalizer
> ---------------------------------------------------------
>
>                 Key: HDFS-8562
>                 URL: https://issues.apache.org/jira/browse/HDFS-8562
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, performance
>    Affects Versions: 2.5.0
>         Environment: Impact any application that uses HDFS
>            Reporter: Yanping Wang
>         Attachments: HDFS-8562.01.patch
>
>
> While running HBase using HDFS as datanodes, we noticed excessive high GC 
> pause spikes. For example with jdk8 update 40 and G1 collector, we saw 
> datanode GC pauses spiked toward 160 milliseconds while they should be around 
> 20 milliseconds. 
> We dug into the GC logs and found that those long GC pauses were spent 
> processing a high number of final references. 
> For example, this Young GC:
> 2715.501: [GC pause (G1 Evacuation Pause) (young) 0.1529017 secs]
> 2715.572: [SoftReference, 0 refs, 0.0001034 secs]
> 2715.572: [WeakReference, 0 refs, 0.0000123 secs]
> 2715.572: [FinalReference, 8292 refs, 0.0748194 secs]
> 2715.647: [PhantomReference, 0 refs, 160 refs, 0.0001333 secs]
> 2715.647: [JNI Weak Reference, 0.0000140 secs]
> [Ref Proc: 122.3 ms]
> [Eden: 910.0M(910.0M)->0.0B(911.0M) Survivors: 11.0M->10.0M Heap: 
> 951.1M(1536.0M)->40.2M(1536.0M)]
> [Times: user=0.47 sys=0.01, real=0.15 secs]
> This young GC incurred a 152.9-millisecond STW pause, of which 122.3 
> milliseconds were spent in Ref Proc, which processed 8292 FinalReference 
> objects in 74.8 milliseconds plus some overhead.
> We used JFR and JMAP with Memory Analyzer to track this down and found that 
> those FinalReference objects all came from FileInputStream. We checked the 
> HDFS code and saw the use of FileInputStream in the datanode:
> https://apache.googlesource.com/hadoop-common/+/refs/heads/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
> {code}
> public static MappableBlock load(long length,
>     FileInputStream blockIn, FileInputStream metaIn,
>     String blockFileName) throws IOException {
>   MappableBlock mappableBlock = null;
>   MappedByteBuffer mmap = null;
>   FileChannel blockChannel = null;
>   try {
>     blockChannel = blockIn.getChannel();
>     if (blockChannel == null) {
>       throw new IOException("Block InputStream has no FileChannel.");
>     }
>     mmap = blockChannel.map(MapMode.READ_ONLY, 0, length);
>     NativeIO.POSIX.getCacheManipulator().mlock(blockFileName, mmap, length);
>     verifyChecksum(length, metaIn, blockChannel, blockFileName);
>     mappableBlock = new MappableBlock(mmap, length);
>   } finally {
>     IOUtils.closeQuietly(blockChannel);
>     if (mappableBlock == null) {
>       if (mmap != null) {
>         NativeIO.POSIX.munmap(mmap); // unmapping also unlocks
>       }
>     }
>   }
>   return mappableBlock;
> }
> {code}
> We looked up 
> https://docs.oracle.com/javase/7/docs/api/java/io/FileInputStream.html  and
> http://hg.openjdk.java.net/jdk7/jdk7/jdk/file/23bdcede4e39/src/share/classes/java/io/FileInputStream.java
>  and noticed that FileInputStream relies on the Finalizer to release its 
> resource. When an instance of a class that has a finalizer is created, an 
> entry for that instance is put on a queue in the JVM, so the JVM knows it 
> has a finalizer that needs to be executed.
> The current issue is: even when programmers do call close() after using a 
> FileInputStream, its finalize() method will still be called. In other words, 
> we still get the side effect of the FinalReference being registered at 
> FileInputStream allocation time, and of the reference processing needed to 
> reclaim the FinalReference during GC (any GC solution has to deal with this). 
> In an industrial HDFS deployment, millions of files can be opened and 
> closed, resulting in a very large number of finalizers being registered and 
> subsequently executed. That can cause very long GC pause times.
> We tried to use Files.newInputStream() to replace FileInputStream, but it was 
> clear we could not replace FileInputStream in 
> hdfs/server/datanode/fsdataset/impl/MappableBlock.java 
> We notified the Oracle JVM team of this performance issue, which impacts all 
> Big Data applications using HDFS, and recommended that the proper fix be 
> made in Java SE FileInputStream, because (1) there is really nothing wrong 
> with using FileInputStream in the datanode code above, and (2) since an 
> object with a finalizer is registered on the JVM's internal finalizer list 
> at object allocation time, if someone makes an explicit call to close or 
> free the resources that are to be released in the finalizer, then the 
> finalizer should be pulled off that list. That would release the JVM from 
> having to treat the object as special because it has a finalizer, i.e. there 
> would be no need for GC to execute the finalizer as part of Reference 
> Processing.
> As the Java fix involves both JVM code and Java SE code, it might take time 
> for the full solution to be available in future JDK releases. We would like 
> to file this JIRA to make the Big Data and HDFS communities aware of this 
> issue when using HDFS and when writing code that uses FileInputStream.
> One alternative is to use Files.newInputStream() as a substitute for 
> FileInputStream where possible. Files.newInputStream() will give an 
> InputStream and does so in a manner that does not involve a finalizer.
> We welcome the HDFS community to discuss this issue and see whether there 
> are additional ideas to solve this problem. 
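As a quick illustration of the alternative suggested in the description above, the following sketch reads a file through Files.newInputStream(). The class and file names are illustrative assumptions, and the manual read loop keeps the sketch close to the Java 7-era APIs the report targets; the returned stream's class does not override finalize(), so no FinalReference is enqueued per open file.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: substituting Files.newInputStream() for
// FileInputStream avoids registering a finalizer per opened file.
public class NewInputStreamDemo {
    public static String readAll(Path path) throws Exception {
        try (InputStream in = Files.newInputStream(path)) {
            // Manual byte-at-a-time loop, compatible with Java 7-era APIs
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = in.read()) != -1) {
                sb.append((char) b);
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("no-finalizer", ".txt");
        Files.write(tmp, "data".getBytes());
        System.out.println(readAll(tmp)); // prints "data"
        Files.deleteIfExists(tmp);
    }
}
```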



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
