Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Wangda Tan
Hi Vinod / Arpit,

I checked the following versions:
- 2.6.5 / 2.7.5 / 2.8.3 / 2.9.0 / 3.0.1:

Jars in the Maven repo [1] are *always* different from the jars in the binary
tarball [2] (I only checked hadoop-yarn-api-<version>.jar).

(The numbers below are jar sizes in bytes)
2.6.5:
- Jar in Maven: 1896185
- Jar in tarball: 1891485

2.7.5:
- Jar in Maven: 2039371 (md5: 15e76f7c734b49315ef2bce952509ddf)
- Jar in tarball: 2039371 (md5: 0ef9f42f587401f5b49b39f27459f3ef)
(Even though the size is the same, the md5 differs)

2.8.3:
- Jar in Maven: 2451433
- Jar in tarball: 2438975

2.9.0:
- Jar in Maven: 2791477
- Jar in tarball: 289

3.0.1:
- Jar in Maven: 2852604
- Jar in tarball: 2851373

I guess the differences come from our release process.

Thanks,
Wangda

[1] Maven jars are downloaded from
https://repository.apache.org/service/local/repositories/releases/content/org/apache/hadoop/hadoop-yarn-api/
/hadoop-yarn-api-.jar
[2] Binary tarballs downloaded from http://apache.claz.org/hadoop/common/
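One way to see where the size and md5 differences above come from is to compare the two jars entry by entry instead of just their sizes. The sketch below is illustrative only (the class name and the idea of passing the two jar paths as arguments are assumptions, not part of the Hadoop release tooling); it prints entries whose CRC differs:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;
import java.util.zip.ZipFile;

/**
 * Rough sketch: list entries whose CRC differs between two jars.
 * Not part of any Hadoop release script; paths are placeholders.
 */
public class JarDiff {
    // Map each entry name in the jar to its CRC-32 from the central directory.
    static Map<String, Long> crcs(String path) throws IOException {
        Map<String, Long> m = new HashMap<>();
        try (ZipFile zf = new ZipFile(path)) {
            zf.stream().forEach(e -> m.put(e.getName(), e.getCrc()));
        }
        return m;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> a = crcs(args[0]); // e.g. the jar from the Maven repo
        Map<String, Long> b = crcs(args[1]); // e.g. the jar from the binary tarball
        TreeSet<String> names = new TreeSet<>(a.keySet());
        names.addAll(b.keySet());
        for (String n : names) {
            if (!java.util.Objects.equals(a.get(n), b.get(n))) {
                System.out.println("differs: " + n + " crc " + a.get(n) + " vs " + b.get(n));
            }
        }
    }
}
```

Running it against the Maven-repo jar and the tarball jar would show whether the difference lies in class files or only in generated metadata such as MANIFEST.MF (which a rebuild typically changes).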


On Tue, Apr 3, 2018 at 4:25 PM, Vinod Kumar Vavilapalli wrote:

> We vote on the source code. The binaries are convenience artifacts.
>
> This is what I would do - (a) Just replace both the maven jars as well as
> the binaries to be consistent and correct. And then (b) Give a couple more
> days for folks who tested on the binaries to reverify - I count one such
> clear vote as of now.
>
> Thanks
> +Vinod
>
>
> On Apr 3, 2018, at 3:30 PM, Wangda Tan  wrote:
>
> Hi Arpit,
>
> I think it won't match if we do a rebuild. It should be fine as long as
> they're signed, correct? I don't see any policy that disallows this.
>
> Thanks,
> Wangda
>
>
> On Tue, Apr 3, 2018 at 9:33 AM, Arpit Agarwal wrote:
>
>> Thanks Wangda, I see the shaded jars now.
>>
>> Are the repo jars required to be the same as the binary release? They
>> don’t match right now; they were probably rebuilt.
>>
>> +1 (binding), modulo that remaining question.
>>
>> * Verified signatures
>> * Verified checksums for source and binary artefacts
>> * Sanity checked jars on r.a.o.
>> * Built from source
>> * Deployed to 3 node secure cluster with NameNode HA
>> * Verified HDFS web UIs
>> * Tried out HDFS shell commands
>> * Ran sample MapReduce jobs
>>
>> Thanks!
>>
>>
>> --
>> From: Wangda Tan 
>> Date: Monday, April 2, 2018 at 9:25 PM
>> To: Arpit Agarwal 
>> Cc: Gera Shegalov , Sunil G , "
>> yarn-...@hadoop.apache.org" , Hdfs-dev <
>> hdfs-dev@hadoop.apache.org>, Hadoop Common ,
>> "mapreduce-...@hadoop.apache.org" ,
>> Vinod Kumar Vavilapalli 
>> Subject: Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)
>>
>> As Arpit pointed out, the previously deployed shaded jars were incorrect.
>> I just redeployed the jars and staged them. @Arpit, could you please check
>> the updated Maven repo?
>> https://repository.apache.org/content/repositories/orgapachehadoop-1092
>>
>> Since the jars inside the binary tarballs are correct (
>> http://people.apache.org/~wangda/hadoop-3.1.0-RC1/), I don't think we need
>> to roll another RC; just updating the Maven repo should be sufficient.
>>
>> Best,
>> Wangda
>>
>>
>> On Mon, Apr 2, 2018 at 2:39 PM, Wangda Tan wrote:
>> Hi Arpit,
>>
>> Thanks for pointing out this.
>>
>> I just removed all .md5 files from the artifacts. MD5 checksums still
>> exist in the .mds files; I didn't remove them there because the .mds files
>> are generated by the create-release script and the Apache guidance is
>> "should not" rather than "must not". Please let me know if you think they
>> need to be removed as well.
>>
>> - Wangda
>>
>>
>>
>> On Mon, Apr 2, 2018 at 1:37 PM, Arpit Agarwal wrote:
>> Thanks for putting together this RC, Wangda.
>>
>> The guidance from Apache is to omit MD5s, specifically:
>>   > SHOULD NOT supply a MD5 checksum file (because MD5 is too broken).
>>
>> https://www.apache.org/dev/release-distribution#sigs-and-sums
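For reference, the SHA-512 digests that the linked guidance recommends in place of MD5 can be produced with the JDK alone. This is an illustrative sketch (the class and method names are assumptions), not the create-release script's actual mechanism:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Sketch: compute a SHA-512 hex digest for a release artifact. */
public class Sha512 {
    public static String hexDigest(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        // Stream the file in chunks so large tarballs don't need to fit in memory.
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```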
>>
>>
>>
>>
>> On Apr 2, 2018, at 7:03 AM, Wangda Tan wrote:
>>
>> Hi Gera,
>>
>> It's my bad, I thought only the src/bin tarballs were enough.
>>
>> I just uploaded all other things under artifact/ to
>> http://people.apache.org/~wangda/hadoop-3.1.0-RC1/
>>
>> Please let me know if you have any other comments.
>>
>> Thanks,
>> Wangda
>>
>>
>> On Mon, Apr 2, 2018 at 12:50 AM, Gera Shegalov wrote:
>>
>>
>> Thanks, Wangda!
>>
>> There are many more artifacts in previous votes, e.g., see
>> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0/ .  Among others the
>> site tarball is missing.
>>
>> On Sun, Apr 1, 2018 at 11:54 PM Sunil G  wrote:
>>
>>
>> Thanks Wangda for initiating the release.
>>
>> I tested this RC 

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-04-03 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/426/

[Apr 2, 2018 2:26:01 PM] (wangda) YARN-7142. Support placement policy in yarn native services. (Gour Saha
[Apr 2, 2018 2:52:40 PM] (stevel) HADOOP-15146. Remove DataOutputByteBuffer. Contributed by BELUGA BEHR.
[Apr 2, 2018 3:38:13 PM] (jlowe) YARN-8082. Include LocalizedResource size information in the NM download
[Apr 2, 2018 10:22:05 PM] (wangda) YARN-8091. Revisit checkUserAccessToQueue RM REST API. (wangda)
[Apr 3, 2018 5:48:26 AM] (xiao) HADOOP-15317. Improve NetworkTopology chooseRandom's loop.
[Apr 3, 2018 6:10:08 AM] (xiao) HADOOP-15355. TestCommonConfigurationFields is broken by HADOOP-15312.
[Apr 3, 2018 7:08:40 AM] (yqlin) HDFS-13364. RBF: Support NamenodeProtocol in the Router. Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestSymlinkLocalFSFileContext 
   hadoop.fs.TestTrash 
   hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.datanode.TestStorageReport 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock 
   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-04-03 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/184/

[Apr 3, 2018 5:55:03 AM] (xiao) HADOOP-15317. Improve NetworkTopology 
chooseRandom's loop.
[Apr 3, 2018 6:10:35 AM] (xiao) HADOOP-15355. TestCommonConfigurationFields is 
broken by HADOOP-15312.
[Apr 3, 2018 4:29:20 PM] (inigoiri) HDFS-13364. RBF: Support NamenodeProtocol 
in the Router. Contributed by
[Apr 3, 2018 6:14:26 PM] (xiao) HADOOP-14987. Improve KMSClientProvider log 
around delegation token
[Apr 3, 2018 8:55:44 PM] (inigoiri) HDFS-13337. Backport HDFS-4275 to 
branch-2.9. Contributed by Xiao Liang.




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Iñigo Goiri
+1 (non-binding)

* Deployed with 4 subclusters with HDFS Router-based federation.
* Executed DistCp across subclusters through the Router
* Checked documentation and tgz


Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Vinod Kumar Vavilapalli
We vote on the source code. The binaries are convenience artifacts.

This is what I would do - (a) Just replace both the maven jars as well as the 
binaries to be consistent and correct. And then (b) Give a couple more days for 
folks who tested on the binaries to reverify - I count one such clear vote as 
of now.

Thanks
+Vinod


Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Wangda Tan
Hi Arpit,

I think it won't match if we do a rebuild. It should be fine as long as
they're signed, correct? I don't see any policy that disallows this.

Thanks,
Wangda


> On Sun, Apr 1, 2018 at 11:54 PM Sunil G  wrote:
>
>
> Thanks Wangda for initiating the release.
>
> I tested this RC built from source file.
>
>
>   - Tested MR apps (sleep, wc) and verified both new YARN UI and old RM
> UI.
>   - Below feature sanity is done
>  - Application priority
>  - Application timeout
>  - Intra Queue preemption with priority based
>  - DS based affinity tests to verify placement constraints.
>   - Tested basic NodeLabel scenarios.
>  - Added couple of labels to few of nodes and behavior is coming
>  correct.
>  - Verified old UI  and new YARN UI for labels.
>  - Submitted apps to labelled cluster and it works fine.
>  - Also performed few cli commands related to nodelabel.
>   - Test basic HA cases and seems correct.
>   - Tested new YARN UI . All pages are getting loaded correctly.
>
>
> - Sunil

Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Eric Payne
 +1 (binding)
Thanks Wangda for doing the work to produce this release.
I did the following to test the release:
- Built from source
- Installed on 6-node pseudo cluster
- Interacted with RM CLI and GUI
- Tested streaming jobs
- Tested yarn distributed shell jobs
- Tested Max AM Resource Percent

- Tested simple inter-queue preemption
- Tested priority first intra-queue preemption

- Tested userlimit first intra-queue preemption

Thanks,
Eric Payne


On Thursday, March 29, 2018, 11:15:51 PM CDT, Wangda Tan wrote:

Hi folks,

Thanks to the many who helped with this release since Dec 2017 [1]. We've
created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:

http://people.apache.org/~wangda/hadoop-3.1.0-RC1

The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1090/
This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.

3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
include first-class GPU/FPGA support on YARN, native services, support for
rich placement constraints in YARN, S3-related enhancements, and allowing
HDFS block replicas to be provided by an external storage system.

For 3.1.0 RC0 vote discussion, please see [3].

We’d like to use this as the starting release for the 3.1.x line [1];
depending on how it goes, we'll stabilize it and potentially put out a 3.1.1
in several weeks as the stable release.

We have done testing with a pseudo cluster:
- Ran distributed job.
- GPU scheduling/isolation.
- Placement constraints (intra-application anti-affinity) by using
distributed shell.

My +1 to start.

Best,
Wangda/Vinod

[1]
https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104bc9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved ORDER BY
fixVersion ASC
[3]
https://lists.apache.org/thread.html/b3a7dc075b7329fd660f65b48237d72d4061f26f83547e41d0983ea6@%3Cyarn-dev.hadoop.apache.org%3E
  

[jira] [Created] (HDFS-13393) Improve OOM logging

2018-04-03 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13393:
--

 Summary: Improve OOM logging
 Key: HDFS-13393
 URL: https://issues.apache.org/jira/browse/HDFS-13393
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, datanode
Reporter: Wei-Chiu Chuang


It is not uncommon to find the "java.lang.OutOfMemoryError: unable to create
new native thread" error in an HDFS cluster. Most often this happens when the
DataNode creates DataXceiver threads, or when the Balancer creates threads for
moving blocks around.

In most cases, this "OOM" is a symptom of the number of threads reaching a
system limit, rather than of actually running out of memory.

How about capturing the OOM and, if it is due to "unable to create new native
thread", printing a more helpful message such as "bump your ulimit" or "take a
jstack of the process"?
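The proposal can be sketched in a few lines. This is only an illustration of the idea in the description; the class and method names are made up and this is not actual HDFS code:

```java
/**
 * Illustrative sketch: distinguish "cannot create more threads" from genuine
 * heap exhaustion and log a more actionable hint. Hypothetical names.
 */
public class OomHint {
    static String adviceFor(OutOfMemoryError e) {
        String msg = e.getMessage();
        // The JVM uses this exact message when the native thread limit is hit.
        if (msg != null && msg.contains("unable to create new native thread")) {
            return "Thread limit reached: check 'ulimit -u' and take a jstack of the process";
        }
        return "Heap exhausted: consider increasing -Xmx or capturing a heap dump";
    }

    public static void main(String[] args) {
        try {
            // Stand-in for e.g. starting DataXceiver or Balancer mover threads.
            throw new OutOfMemoryError("unable to create new native thread");
        } catch (OutOfMemoryError e) {
            System.err.println(adviceFor(e));
        }
    }
}
```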



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Gera Shegalov
+1 (non-binding)

- built from source
- tested SparkPi on minicluster (modulo YARN-7747)
- tested SparkPi on pseudo-distributed cluster
- browsed HDFS doc in the site tarball



Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Hanisha Koneru
Correction: My vote is NON-BINDING. Sorry for the confusion.


Thanks,
Hanisha








On 4/3/18, 11:40 AM, "Hanisha Koneru"  wrote:

>Thanks Wangda for putting up the RC for 3.1.0.
>
>+1 (binding).
>
>Verified the following:
>- Built from source
>- Deployed binary to a 3-node docker cluster
>- Sanity checks
>   - Basic dfs operations
>   - MapReduce Wordcount & Grep
>
>
>Thanks,
>Hanisha

Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Hanisha Koneru
Thanks Wangda for putting up the RC for 3.1.0.

+1 (binding).

Verified the following:
- Built from source
- Deployed binary to a 3-node docker cluster
- Sanity checks
- Basic dfs operations
- MapReduce Wordcount & Grep


Thanks,
Hanisha

[jira] [Created] (HDFS-13392) Incorrect length in Truncate CloseEvents

2018-04-03 Thread David Tucker (JIRA)
David Tucker created HDFS-13392:
---

 Summary: Incorrect length in Truncate CloseEvents
 Key: HDFS-13392
 URL: https://issues.apache.org/jira/browse/HDFS-13392
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: David Tucker


Under stress (multiple clients simultaneously truncating separate non-empty 
files in half), the CloseEvent triggered by a Truncate RPC may contain an 
incorrect length. We can reproduce this about 20% of the time (our tests are 
somewhat randomized/fuzzy).

For example, given this Truncate request:
{noformat}
Request:
  truncate {
src: 
"/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-\357\254\200\357\272\217\357\255\217\343\203\276\324\262\342\204\200\342\213\251/chai_testbd968366-0016-4462-ac12-e48e0487bebd-\340\270\215\334\200\311\226\342\202\242\343\202\236\340\256\205\357\272\217/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-\312\254\340\272\201\343\202\242\306\220\340\244\205\342\202\242\343\204\270a\334\240\337\213\340\244\240\343\200\243\342\202\243\343\203\276\313\225\346\206\250"
newLength: 2003855
clientName: 
"\341\264\275\327\220\343\203\250\333\263\343\220\205\357\254\227\340\270\201\340\245\251\306\225\341\203\265\334\220\342\202\243\343\204\206!A\343\206\215\357\254\201\340\273\223\347\224\260"
  }
  Block Size: 1048576B
  Old length: 4007711B (3.82205104828 blocks)
  Truncation: 2003856B (1.91102600098 blocks)
  New length: 2003855B (1.9110250473 blocks)
Response:
  result: true
{noformat}
We see these INotify events:
{noformat}
TruncateEvent {
path: 
/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-ffﺏﭏヾԲ℀⋩/chai_testbd968366-0016-4462-ac12-e48e0487bebd-ญ܀ɖ₢ゞஅﺏ/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-ʬກアƐअ₢ㄸaܠߋठ〣₣ヾ˕憨
length: 2003855
timestamp: 1522716573143
}
{noformat}
{noformat}
CloseEvent {
path: 
/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-ffﺏﭏヾԲ℀⋩/chai_testbd968366-0016-4462-ac12-e48e0487bebd-ญ܀ɖ₢ゞஅﺏ/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-ʬກアƐअ₢ㄸaܠߋठ〣₣ヾ˕憨
length: -2
timestamp: 1522716575723
}
{noformat}
{{-2}} is not the only number that shows up as the length; 
{{9223372036854775807}} is common too. These values are observed by Python 2 
tests, and the latter equals {{sys.maxint}}.
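The arithmetic in the report and the two sentinel values can be checked with a few lines of Python. This is an illustrative consistency check (not Hadoop code), using the block size and lengths quoted above:

```python
BLOCK_SIZE = 1048576  # block size from the report, in bytes

def blocks(length, block_size=BLOCK_SIZE):
    """Express a byte length as a fractional block count, as in the report."""
    return length / float(block_size)

def is_bogus_length(length, expected):
    """Flag CloseEvent lengths that cannot be right: negative values,
    the Python 2 sys.maxint sentinel (2**63 - 1), or a mismatch with
    the length requested by the preceding truncate."""
    return length < 0 or length == 2**63 - 1 or length != expected

# Values from the report: old length 4007711 B, truncated to 2003855 B.
old_length, new_length = 4007711, 2003855
assert abs(blocks(old_length) - 3.82205104828) < 1e-9
assert abs(blocks(new_length) - 1.9110250473) < 1e-9

# The TruncateEvent carried the right length; the CloseEvents did not.
assert not is_bogus_length(2003855, expected=new_length)
assert is_bogus_length(-2, expected=new_length)
assert is_bogus_length(9223372036854775807, expected=new_length)
```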



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Arpit Agarwal
Thanks Wangda, I see the shaded jars now.

Are the repo jars required to be the same as the binary release? They don’t 
match right now; probably they were rebuilt.

+1 (binding), modulo that remaining question.

* Verified signatures
* Verified checksums for source and binary artefacts
* Sanity checked jars on r.a.o. 
* Built from source
* Deployed to 3 node secure cluster with NameNode HA
* Verified HDFS web UIs
* Tried out HDFS shell commands
* Ran sample MapReduce jobs

Thanks!


--
From: Wangda Tan 
Date: Monday, April 2, 2018 at 9:25 PM
To: Arpit Agarwal 
Cc: Gera Shegalov , Sunil G , 
"yarn-...@hadoop.apache.org" , Hdfs-dev 
, Hadoop Common , 
"mapreduce-...@hadoop.apache.org" , Vinod 
Kumar Vavilapalli 
Subject: Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

As pointed out by Arpit, the previously deployed shaded jars were incorrect. I 
just redeployed and staged the jars. @Arpit, could you please check the updated 
Maven repo? https://repository.apache.org/content/repositories/orgapachehadoop-1092 

Since the jars inside the binary tarballs are correct 
(http://people.apache.org/~wangda/hadoop-3.1.0-RC1/), I think we don't need to 
roll another RC; just updating the Maven repo should be sufficient. 

Best,
Wangda


On Mon, Apr 2, 2018 at 2:39 PM, Wangda Tan  wrote:
Hi Arpit, 

Thanks for pointing out this.

I just removed all .md5 files from the artifacts. MD5 checksums still exist in 
the .mds files; I didn't remove them from the .mds files because those are 
generated by the create-release script and the Apache guidance is "should not" 
rather than "must not". Please let me know if you think they need to be removed 
as well. 

- Wangda



On Mon, Apr 2, 2018 at 1:37 PM, Arpit Agarwal  
wrote:
Thanks for putting together this RC, Wangda. 

The guidance from Apache is to omit MD5s, specifically:
  > SHOULD NOT supply a MD5 checksum file (because MD5 is too broken).

https://www.apache.org/dev/release-distribution#sigs-and-sums

 


On Apr 2, 2018, at 7:03 AM, Wangda Tan  wrote:

Hi Gera,

It's my bad, I thought only src/bin tarball is enough.

I just uploaded all other things under artifact/ to
http://people.apache.org/~wangda/hadoop-3.1.0-RC1/

Please let me know if you have any other comments.

Thanks,
Wangda


On Mon, Apr 2, 2018 at 12:50 AM, Gera Shegalov  wrote:


Thanks, Wangda!

There are many more artifacts in previous votes, e.g., see
http://home.apache.org/~junping_du/hadoop-2.8.3-RC0/ .  Among others the
site tarball is missing.

On Sun, Apr 1, 2018 at 11:54 PM Sunil G  wrote:


Thanks Wangda for initiating the release.

I tested this RC built from source file.


  - Tested MR apps (sleep, wc) and verified both new YARN UI and old RM
UI.
  - Below feature sanity is done
 - Application priority
 - Application timeout
 - Intra Queue preemption with priority based
 - DS based affinity tests to verify placement constraints.
  - Tested basic NodeLabel scenarios.
 - Added couple of labels to few of nodes and behavior is coming
 correct.
 - Verified old UI  and new YARN UI for labels.
 - Submitted apps to labelled cluster and it works fine.
 - Also performed few cli commands related to nodelabel.
  - Test basic HA cases and seems correct.
  - Tested new YARN UI . All pages are getting loaded correctly.


- Sunil

On Fri, Mar 30, 2018 at 9:45 AM Wangda Tan  wrote:


Hi folks,

Thanks to the many who helped with this release since Dec 2017 [1].
We've

created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:

http://people.apache.org/~wangda/hadoop-3.1.0-RC1

The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d

The maven artifacts are available via http://repository.apache.org at
https://repository.apache.org/content/repositories/
orgapachehadoop-1090/

This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.

3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
include the first class GPU/FPGA support on YARN, Native services,
Support

rich placement constraints in YARN, S3-related enhancements, allow HDFS
block replicas to be provided by an external storage system, etc.

For 3.1.0 RC0 vote discussion, please see [3].

We’d like to use this as a starting release for 3.1.x [1], depending on
how

it goes, get it stabilized and potentially use a 3.1.1 in several weeks
as

the stable release.

We have done testing with a pseudo cluster:
- Ran distributed job.
- GPU scheduling/isolation.
- Placement constraints (intra-application anti-affinity) by using
distributed 

[jira] [Created] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-13391:
--

 Summary: Ozone: Make dependency of internal sub-module scope as 
provided in maven.
 Key: HDFS-13391
 URL: https://issues.apache.org/jira/browse/HDFS-13391
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


Whenever an internal sub-module is added as a dependency, its scope has to be 
set to {{provided}}.
If the scope is not specified, it falls back to the default scope, which is 
{{compile}}; this causes the dependency jar (the sub-module jar) to be copied 
to the {{share//lib}} directory during packaging. Since we use 
{{copyifnotexists}} logic, the binary jar of the actual sub-module will not be 
copied, and the jar ends up in the wrong location inside the distribution.
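A sketch of what the fix looks like in a sub-module pom; the artifact name below is only an example, not taken from the patch:

```xml
<!-- Hypothetical sketch: declaring an internal sub-module with scope
     "provided" so its jar is not copied into the packaged lib directory. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <!-- example sub-module name, for illustration only -->
  <artifactId>hadoop-hdds-common</artifactId>
  <version>${project.version}</version>
  <scope>provided</scope>  <!-- without this, the default scope is "compile" -->
</dependency>
```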






[jira] [Created] (HDFS-13390) AccessControlException for overwrite but not for delete

2018-04-03 Thread Nasir Ali (JIRA)
Nasir Ali created HDFS-13390:


 Summary: AccessControlException for overwrite but not for delete
 Key: HDFS-13390
 URL: https://issues.apache.org/jira/browse/HDFS-13390
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.9.0
 Environment: *Environment:*
OS: Centos
PyArrow Version: 0.8.0
Python version: 3.6
HDFS: 2.9
Reporter: Nasir Ali


 

*Problem:*
I have a file (F-1) saved in HDFS with permissions "-rw-r--r--", owned by user 
"cnali". User "nndugudi" cannot overwrite F-1 (and vice versa); hdfs.write 
generates the following exception:

org.apache.hadoop.security.AccessControlException: Permission denied: 
user=nndugudi, access=WRITE, 
inode="/cerebralcortex/data/-f81c-44d2-9db8-fea69f468d58/-5087-3d56-ad0e-0b27c3c83182/20171105.gz":cnali:supergroup:-rw-r--r--

However, user "nndugudi" can delete the file without any problem. Why does 
overwriting the file produce an AccessControlException when deleting it does not?
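A likely explanation (assuming HDFS's POSIX-style permission model) is that overwriting writes to the file itself, while deleting modifies the parent directory's entries, so delete checks the parent directory's WRITE bit rather than the file's. A toy model of that distinction, not actual NameNode code:

```python
# Toy model of POSIX-style permission checks; mode strings are 9 characters,
# e.g. "rw-r--r--" (owner / group / other). Sticky-bit handling is omitted.

def can_overwrite(file_mode, is_owner):
    """Overwriting writes to the file itself, so it needs the 'w' bit
    on the file for the acting user class."""
    bits = file_mode[0:3] if is_owner else file_mode[6:9]
    return 'w' in bits

def can_delete(parent_mode, is_parent_owner):
    """Deleting removes an entry from the parent directory, so it checks
    the 'w' bit on the parent directory -- not on the file."""
    bits = parent_mode[0:3] if is_parent_owner else parent_mode[6:9]
    return 'w' in bits

# The file is rw-r--r-- and owned by "cnali"; "nndugudi" is not the owner,
# so overwrite is denied...
assert not can_overwrite("rw-r--r--", is_owner=False)
# ...but if the parent directory is writable by "nndugudi" (e.g. rwxrwxrwx),
# delete succeeds regardless of the file's own permission bits.
assert can_delete("rwxrwxrwx", is_parent_owner=False)
```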

*Sample Code*:
File: 
[https://github.com/MD2Korg/CerebralCortex/blob/master/cerebralcortex/core/data_manager/raw/stream_handler.py]

LOC: 659-705 (write_hdfs_day_file)

 

*HDFS Configurations*:

All configurations are set to default. Security is also disabled as of now.

 






[jira] [Created] (HDFS-13389) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-04-03 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-13389:


 Summary: Ozone: Compile Ozone/HDFS/Cblock protobuf files with 
proto3 compiler using maven protoc plugin
 Key: HDFS-13389
 URL: https://issues.apache.org/jira/browse/HDFS-13389
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


Currently all the Ozone/HDFS/Cblock proto files are compiled with the proto 
2.5 compiler; this can be changed to use the proto3 compiler.

This change will also help performance, because currently in the client path 
the Ratis xceiver client converts proto2 classes to proto3 using byte-string 
manipulation.

Please note that for the rest of Hadoop (everything except Ozone/Cblock/HDSL), 
the protoc version will remain 2.5; the proto3 compilation will be done through 
the following plugin:
https://www.xolstice.org/protobuf-maven-plugin/
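For reference, a hedged sketch of how that plugin is typically configured; the version numbers are illustrative, not taken from the patch, and `${os.detected.classifier}` assumes the os-maven-plugin build extension is enabled:

```xml
<!-- Illustrative configuration of the xolstice protobuf-maven-plugin -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <!-- proto3 compiler for the Ozone/HDSL modules; the rest of Hadoop
         stays on protoc 2.5 via the existing build path -->
    <protocArtifact>com.google.protobuf:protoc:3.5.1-1:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
        <goal>test-compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```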








Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-04-03 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/

[Apr 2, 2018 2:26:01 PM] (wangda) YARN-7142. Support placement policy in yarn 
native services. (Gour Saha
[Apr 2, 2018 2:52:40 PM] (stevel) HADOOP-15146. Remove DataOutputByteBuffer. 
Contributed by BELUGA BEHR.
[Apr 2, 2018 3:38:13 PM] (jlowe) YARN-8082. Include LocalizedResource size 
information in the NM download
[Apr 2, 2018 10:22:05 PM] (wangda) YARN-8091. Revisit checkUserAccessToQueue RM 
REST API. (wangda)




-1 overall


The following subsystems voted -1:
findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.conf.TestCommonConfigurationFields 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-compile-javac-root.txt
  [288K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [300K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/740/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


RE: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Takanobu Asanuma
Thanks for your efforts, Wangda. Sorry for participating late.

+1 (non-binding)
   - Verified the checksum of the tarball
   - Succeeded "mvn clean package -Pdist,native -Dtar -DskipTests"
   - Started docker hadoop cluster with 1 master and 5 slaves
   - Verified TeraGen/TeraSort
   - Verified some erasure coding operations
   - The shaded jars (hadoop-client-runtime, hadoop-client-api, 
hadoop-client-minicluster) appear to have the correct sizes in orgapachehadoop-1092.
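Checksum verification like the first step above can be scripted; a minimal sketch using only the standard library (the file paths and digest source are hypothetical):

```python
import hashlib

def sha512_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-512 in 1 MiB chunks (tarballs are large)."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(tarball_path, published_digest):
    """Compare a locally computed digest with the published hex digest,
    ignoring case and surrounding whitespace."""
    return sha512_of(tarball_path) == published_digest.strip().lower()
```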

-Takanobu

[jira] [Created] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)
Jinglun created HDFS-13388:
--

 Summary: RequestHedgingProxyProvider calls multiple configured NNs 
all the time
 Key: HDFS-13388
 URL: https://issues.apache.org/jira/browse/HDFS-13388
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Jinglun


In HDFS-7858, RequestHedgingProxyProvider was designed to "first simultaneously 
call multiple configured NNs to decide which is the active Namenode and then 
for subsequent calls it will invoke the previously successful NN." But the 
current code calls multiple configured NNs every time, even after we have 
already found the successful NN.
That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
proxyInfo is assigned only when it is constructed or when failover occurs. 
RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only 
proxy we can get is always a dynamic proxy handled by 
RequestHedgingInvocationHandler.class, which handles method invocations by 
calling multiple configured NNs.
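The intended behavior can be sketched in a few lines (illustrative Python, not the Hadoop implementation): hedge across all configured NNs only until one succeeds, remember that one (the role currentUsedProxy was meant to play), and reset the cache on failover:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

class HedgingProxy:
    """Sketch of hedge-then-pin behavior: fan out to all configured
    namenodes only while no known-good one exists, then route subsequent
    calls to the cached namenode until a failover resets the cache."""

    def __init__(self, namenodes):
        self.namenodes = list(namenodes)
        self.current = None  # analogous to currentUsedProxy

    def invoke(self, call):
        if self.current is not None:
            return call(self.current)  # subsequent calls hit one NN only
        with ThreadPoolExecutor(len(self.namenodes)) as pool:
            futures = {pool.submit(call, nn): nn for nn in self.namenodes}
            error = None
            for fut in as_completed(futures):
                try:
                    result = fut.result()
                    self.current = futures[fut]  # remember the successful NN
                    return result
                except Exception as e:  # e.g. standby rejected the call
                    error = e
        raise RuntimeError("no namenode answered") from error

    def perform_failover(self):
        self.current = None  # re-hedge on the next call
```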






[jira] [Created] (HDFS-13387) Make multi-thread access class thread safe

2018-04-03 Thread maobaolong (JIRA)
maobaolong created HDFS-13387:
-

 Summary: Make multi-thread access class thread safe
 Key: HDFS-13387
 URL: https://issues.apache.org/jira/browse/HDFS-13387
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.2.0
Reporter: maobaolong
Assignee: maobaolong


This JIRA will make multi-thread-accessed classes such as BlockInfoContiguous 
thread-safe, so that we no longer need the NameSystemLock to lock the full 
flow. This is just a first step toward the plan of HDFS-8966.
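As an illustration of the direction (not HDFS code), a per-object lock makes individual block mutations thread-safe without taking a single global lock around the whole flow:

```python
import threading

class BlockInfo:
    """Toy sketch: each block object carries its own lock, so concurrent
    mutations of different blocks no longer contend on one global
    namesystem lock, and mutations of the same block stay safe."""

    def __init__(self, block_id):
        self.block_id = block_id
        self.replicas = []
        self._lock = threading.Lock()  # fine-grained, per-block

    def add_replica(self, datanode):
        with self._lock:  # thread-safe without any global lock
            if datanode not in self.replicas:
                self.replicas.append(datanode)

    def replica_count(self):
        with self._lock:
            return len(self.replicas)
```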






Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Konstantinos Karanasos
+1 (binding)

Thanks for putting this together, Wangda.

I deployed a 7-node hadoop cluster and:
- ran various MR jobs
- ran the same jobs with a mix of opportunistic containers via centralized
scheduling
- ran the jobs with opportunistic containers and distributed scheduling
- ran some jobs with various placement constraints.

All worked as expected.

Thanks,
Konstantinos

On Thu, Mar 29, 2018 at 9:15 PM, Wangda Tan  wrote:

> Hi folks,
>
> Thanks to the many who helped with this release since Dec 2017 [1]. We've
> created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:
>
> http://people.apache.org/~wangda/hadoop-3.1.0-RC1
>
> The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
> 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1090/
> This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.
>
> 3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
> include the first class GPU/FPGA support on YARN, Native services, Support
> rich placement constraints in YARN, S3-related enhancements, allow HDFS
> block replicas to be provided by an external storage system, etc.
>
> For 3.1.0 RC0 vote discussion, please see [3].
>
> We’d like to use this as a starting release for 3.1.x [1], depending on how
> it goes, get it stabilized and potentially use a 3.1.1 in several weeks as
> the stable release.
>
> We have done testing with a pseudo cluster:
> - Ran distributed job.
> - GPU scheduling/isolation.
> - Placement constraints (intra-application anti-affinity) by using
> distributed shell.
>
> My +1 to start.
>
> Best,
> Wangda/Vinod
>
> [1]
> https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104b
> c9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
> AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved ORDER BY
> fixVersion ASC
> [3]
> https://lists.apache.org/thread.html/b3a7dc075b7329fd660f65b48237d7
> 2d4061f26f83547e41d0983ea6@%3Cyarn-dev.hadoop.apache.org%3E
>