Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-04-13 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/436/

[Apr 12, 2018 1:38:30 PM] (haibochen) YARN-7931. [atsv2 read acls] Include 
domain table creation as part of
[Apr 12, 2018 2:42:31 PM] (msingh) HDFS-13426. Fix javadoc in 
FsDatasetAsyncDiskService#removeVolume.
[Apr 12, 2018 3:42:19 PM] (inigoiri) Revert "HDFS-13386. RBF: Wrong date 
information in list file(-ls)
[Apr 12, 2018 4:04:23 PM] (ericp) YARN-8120. JVM can crash with SIGSEGV when 
exiting due to custom leveldb
[Apr 12, 2018 4:12:46 PM] (jlowe) MAPREDUCE-7069. Add ability to specify user 
environment variables
[Apr 12, 2018 4:28:23 PM] (inigoiri) Revert "HDFS-13388. 
RequestHedgingProxyProvider calls multiple
[Apr 12, 2018 4:30:11 PM] (inigoiri) HDFS-13386. RBF: Wrong date information in 
list file(-ls) result.
[Apr 12, 2018 5:53:57 PM] (ericp) YARN-8147. 
TestClientRMService#testGetApplications sporadically fails.
[Apr 12, 2018 7:38:00 PM] (billie) YARN-7936. Add default service AM Xmx. 
Contributed by Jian He
[Apr 13, 2018 4:23:51 AM] (aajisaka) HDFS-13436. Fix javadoc of 
package-info.java
[Apr 13, 2018 4:51:20 AM] (Bharat) HADOOP-15379. Make IrqHandler.bind() public. 
Contributed by Ajay Kumar
[Apr 13, 2018 5:06:47 AM] (wwei) YARN-8154. Fix missing titles in 
PlacementConstraints document.
[Apr 13, 2018 5:17:37 AM] (wwei) YARN-8153. Guaranteed containers always stay 
in SCHEDULED on NM after
[Apr 13, 2018 6:27:51 AM] (shv) HADOOP-14970. MiniHadoopClusterManager doesn't 
respect lack of format
[Apr 13, 2018 9:55:45 AM] (yqlin) HDFS-13418. NetworkTopology should be 
configurable when enable




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestTrash 
   hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancer 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   

RE: [VOTE] Release Apache Hadoop 2.7.6 (RC0)

2018-04-13 Thread Brahma Reddy Battula
Konstantin, thanks for driving this.

+1 (binding)


- Built from the source
- Installed HA cluster
- Verified the basic shell commands
- Ran sample jobs like pi, wordcount
- Browsed the UIs



-Original Message-
From: Konstantin Shvachko [mailto:shv.had...@gmail.com] 
Sent: 10 April 2018 07:14
To: Hadoop Common ; hdfs-dev 
; mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.7.6 (RC0)

Hi everybody,

This is the next dot release of Apache Hadoop 2.7 line. The previous one 2.7.5 
was released on December 14, 2017.
Release 2.7.6 includes critical bug fixes and optimizations. See more details 
in Release Note:
http://home.apache.org/~shv/hadoop-2.7.6-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.6-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days 
ending 04/16/2018.

My up to date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Thanks,
--Konstantin


Re: [VOTE] Release Apache Hadoop 2.7.6 (RC0)

2018-04-13 Thread Zhe Zhang
+1 (binding)

- Downloaded source
- Verified checksum
- Built and started local cluster
- Checked YARN UI
- Performed basic HDFS operations

Zhe Zhang
Apache Hadoop Committer
http://zhe-thoughts.github.io/about/ | @oldcap


Re: [VOTE] Release Apache Hadoop 2.7.6 (RC0)

2018-04-13 Thread Erik Krogen
Thanks for putting this together, Konstantin.

+1 (non-binding)

* Verified signatures, MD5, and SHA-* checksums for bin and src tarball
* Started 2-node HDFS and 1-node YARN clusters
* Tested basic file operations, copy(To/From)Local, mkdir, ls, etc. via hdfs 
and webhdfs protocol
* Verified Web UIs
* Ran DistCp, MR job pi, tera(gen/sort/validate) on YARN cluster

Erik


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.2 (RC0)

2018-04-13 Thread Konstantin Shvachko
Hi Lei,

Did you have any luck with deploy?
Could you please post your findings on
https://issues.apache.org/jira/browse/HADOOP-15205

Thanks,
--Konstantin

On Tue, Apr 10, 2018 at 12:10 PM, Lei Xu  wrote:

> Ajay, thanks for spotting this.
>
> I am working on fixing the deploy.
>
> On Tue, Apr 10, 2018 at 8:32 AM, Ajay Kumar 
> wrote:
> > Thanks Lei for working on this.
> >
> > - Downloaded src tarball and verified checksums
> > - Built from src on mac with java 1.8.0_111
> > - Built a pseudo distributed hdfs cluster
> > - Ran test MR jobs (pi, dfsio, wordcount)
> > - Verified basic hdfs operations
> > - Basic validation for webui
> >
> > ** I checked maven artifacts and it seems source jars are not there
> > (checked hadoop-hdfs, hadoop-client). Not sure if they are required for
> > release.
> >
> >
> > On 4/9/18, 4:19 PM, "Xiao Chen"  wrote:
> >
> > Thanks Eddy for the effort!
> >
> > +1 (binding)
> >
> >- Downloaded src tarball and verified checksums
> >- Built from src
> >- Started a pseudo distributed hdfs cluster
> >- Verified basic hdfs operations work
> >- Sanity checked logs / webui
> >
> > Best,
> > -Xiao
> >
> >
> > On Mon, Apr 9, 2018 at 11:28 AM, Eric Payne  invalid>
> > wrote:
> >
> > > Thanks a lot for working to produce this release.
> > >
> > > +1 (binding)
> > > Tested the following:
> > > - built from source and installed on 6-node pseudo-cluster
> > > - tested Capacity Scheduler FairOrderingPolicy and FifoOrderingPolicy to
> > > determine that capacity was assigned as expected in each case
> > > - tested user weights with FifoOrderingPolicy to ensure that weights were
> > > assigned to users as expected.
> > >
> > > Eric Payne
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Friday, April 6, 2018, 1:17:10 PM CDT, Lei Xu 
> wrote:
> > >
> > >
> > >
> > >
> > >
> > > Hi, All
> > >
> > > I've created release candidate RC-0 for Apache Hadoop 3.0.2.
> > >
> > > Please note: this is an amendment for Apache Hadoop 3.0.1 release
> to
> > > fix shaded jars in apache maven repository. The codebase of 3.0.2
> > > release is the same as 3.0.1.  New bug fixes will be included in
> > > Apache Hadoop 3.0.3 instead.
> > >
> > > The release page is:
> > > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release
> > >
> > > New RC is available at: http://home.apache.org/~lei/hadoop-3.0.2-RC0/
> > >
> > > The git tag is release-3.0.2-RC0, and the latest commit is
> > > 5c141f7c0f24c12cb8704a6ccc1ff8ec991f41ee
> > >
> > > The maven artifacts are available at
> > > https://repository.apache.org/content/repositories/orgapachehadoop-1096/
> > >
> > > Please try the release, especially, *verify the maven artifacts*,
> and vote.
> > >
> > > The vote will run 5 days, ending 4/11/2018.
> > >
> > > Thanks for everyone who helped to spot the error and proposed
> fixes!
> > >
> > > 
> > > -
> > > To unsubscribe, e-mail: mapreduce-dev-unsubscribe@hadoop.apache.org
> > > For additional commands, e-mail: mapreduce-dev-help@hadoop.apache.org
> > >
> > >
> > > 
> -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> >
> >
>
>
>
> --
> Lei (Eddy) Xu
> Software Engineer, Cloudera
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.7.6 (RC0)

2018-04-13 Thread Chen Liang
Thanks for working on this Konstantin!

+1 (non-binding)

- verified checksum
- built from source
- started a single node HDFS cluster
- performed basic operations of ls/mkdir/put/get
- checked web UI
- ran the MR job pi

Regards,
Chen




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13448) HDFS Block Placement - Ignore Locality

2018-04-13 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13448:
--

 Summary: HDFS Block Placement - Ignore Locality
 Key: HDFS-13448
 URL: https://issues.apache.org/jira/browse/HDFS-13448
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: block placement, hdfs-client
Affects Versions: 3.0.1, 2.9.0
Reporter: BELUGA BEHR


According to the HDFS Block Placement Rules:

{quote}
/**
 * The replica placement strategy is that if the writer is on a datanode,
 * the 1st replica is placed on the local machine, 
 * otherwise a random datanode. The 2nd replica is placed on a datanode
 * that is on a different rack. The 3rd replica is placed on a datanode
 * which is on a different node of the rack as the second replica.
 */
{quote}

However, there is a hint for the hdfs-client that allows the block placement 
request to not put a block replica on the local datanode _where 'local' means 
the same host as the client is being run on._

{quote}
  /**
   * Advise that a block replica NOT be written to the local DataNode where
   * 'local' means the same host as the client is being run on.
   *
   * @see CreateFlag#NO_LOCAL_WRITE
   */
{quote}

I propose that we add a new flag that allows the hdfs-client to request that 
the first block replica be placed on a random DataNode in the cluster.  The 
subsequent block replicas should follow the normal block placement rules.

The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
replica is not placed on the local node, but it is still placed on the local 
rack.  This comes into play when you have, for example, a Flume agent that is 
loading data into HDFS.

If the Flume agent is running on a DataNode, then by default, the DataNode 
local to the Flume agent will always get the first block replica, and this 
leads to uneven block placement, with the local node always filling up faster 
than any other node in the cluster.

Modifying this example: if the DataNode is removed from the host where the 
Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then the 
default block placement policy will still prefer the local rack.  This 
remedies the situation only insofar as the first block replica will now be 
distributed to a DataNode on the local rack.

This new flag would allow a single Flume agent to distribute the blocks 
randomly, evenly, over the entire cluster instead of hot-spotting the local 
node or the local rack.
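To make the proposal concrete, here is a small, self-contained Java sketch of the first-replica choice under the three behaviors discussed above. This only simulates the selection logic for illustration; it is not Hadoop's actual BlockPlacementPolicy code, and the {{IGNORE_LOCALITY}} name is a hypothetical stand-in for the proposed flag.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy model of first-replica selection. Illustration only; not Hadoop's
// BlockPlacementPolicyDefault. IGNORE_LOCALITY is a hypothetical name
// for the flag proposed in this JIRA.
public class FirstReplicaSketch {
  enum Policy { DEFAULT, NO_LOCAL_WRITE, IGNORE_LOCALITY }

  static class Node {
    final String host;
    final String rack;
    Node(String host, String rack) { this.host = host; this.rack = rack; }
  }

  static Node chooseFirstReplica(List<Node> cluster, Node local,
                                 Policy policy, Random rng) {
    switch (policy) {
      case DEFAULT:
        // Writer runs on a datanode: first replica goes to the local machine.
        return local;
      case NO_LOCAL_WRITE: {
        // Skip the local node, but the policy still prefers the local rack,
        // so the local rack fills up faster than the rest of the cluster.
        List<Node> sameRack = new ArrayList<>();
        for (Node n : cluster) {
          if (n.rack.equals(local.rack) && !n.host.equals(local.host)) {
            sameRack.add(n);
          }
        }
        return sameRack.get(rng.nextInt(sameRack.size()));
      }
      default:
        // Proposed behavior: any node in the cluster, chosen uniformly.
        return cluster.get(rng.nextInt(cluster.size()));
    }
  }
}
```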



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (HDFS-13447) Fix Typos - Node Not Chosen

2018-04-13 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-13447:
--

 Summary: Fix Typos - Node Not Chosen
 Key: HDFS-13447
 URL: https://issues.apache.org/jira/browse/HDFS-13447
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.1, 2.2.0
Reporter: BELUGA BEHR
 Attachments: HDFS-13447.1.patch

Fix typo and improve:

 
{code:java}
private enum NodeNotChosenReason {
  NOT_IN_SERVICE("the node isn't in service"),
  NODE_STALE("the node is stale"),
  NODE_TOO_BUSY("the node is too busy"),
  TOO_MANY_NODES_ON_RACK("the rack has too many chosen nodes"),
  NOT_ENOUGH_STORAGE_SPACE("no enough storage space to place the block");{code}
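For context, one plausible correction of the flagged message ("no enough" to "not enough") is sketched below as a self-contained stand-in enum. The actual change is in the attached HDFS-13447.1.patch; this is only an illustrative guess, not the patch contents.

```java
// Illustrative guess at the typo fix; the real change is in the attached
// patch. Self-contained stand-in for the NameNode's private enum.
enum NodeNotChosenReason {
  NOT_IN_SERVICE("the node isn't in service"),
  NODE_STALE("the node is stale"),
  NODE_TOO_BUSY("the node is too busy"),
  TOO_MANY_NODES_ON_RACK("the rack has too many chosen nodes"),
  NOT_ENOUGH_STORAGE_SPACE("not enough storage space to place the block");

  private final String text;

  NodeNotChosenReason(String text) { this.text = text; }

  String getText() { return text; }
}
```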






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-04-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/

[Apr 12, 2018 8:19:35 AM] (aajisaka) HADOOP-15350. [JDK10] Update maven plugin 
tools to fix compile error in
[Apr 12, 2018 8:47:37 AM] (aajisaka) HDFS-7101. Potential null dereference in 
DFSck#doWork(). Contributed by
[Apr 12, 2018 1:38:30 PM] (haibochen) YARN-7931. [atsv2 read acls] Include 
domain table creation as part of
[Apr 12, 2018 2:42:31 PM] (msingh) HDFS-13426. Fix javadoc in 
FsDatasetAsyncDiskService#removeVolume.
[Apr 12, 2018 3:42:19 PM] (inigoiri) Revert "HDFS-13386. RBF: Wrong date 
information in list file(-ls)
[Apr 12, 2018 4:04:23 PM] (ericp) YARN-8120. JVM can crash with SIGSEGV when 
exiting due to custom leveldb
[Apr 12, 2018 4:12:46 PM] (jlowe) MAPREDUCE-7069. Add ability to specify user 
environment variables
[Apr 12, 2018 4:28:23 PM] (inigoiri) Revert "HDFS-13388. 
RequestHedgingProxyProvider calls multiple
[Apr 12, 2018 4:30:11 PM] (inigoiri) HDFS-13386. RBF: Wrong date information in 
list file(-ls) result.
[Apr 12, 2018 5:53:57 PM] (ericp) YARN-8147. 
TestClientRMService#testGetApplications sporadically fails.
[Apr 12, 2018 7:38:00 PM] (billie) YARN-7936. Add default service AM Xmx. 
Contributed by Jian He




-1 overall


The following subsystems voted -1:
unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestEncryptionZonesWithKMS 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-compile-javac-root.txt
  [288K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [432K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/750/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [12K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (HDFS-13446) Ozone: Fix OzoneFileSystem contract test failures

2018-04-13 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-13446:


 Summary: Ozone: Fix OzoneFileSystem contract test failures
 Key: HDFS-13446
 URL: https://issues.apache.org/jira/browse/HDFS-13446
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240
 Attachments: HDFS-13446-HDFS-7240.001.patch

This jira moves the contract tests to the src/test directory and also fixes 
the ozone filesystem contract tests.






[jira] [Created] (HDFS-13445) Web HDFS call is failing with spnego, when URL contain IP

2018-04-13 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-13445:


 Summary: Web HDFS call is failing with spnego, when URL contain IP
 Key: HDFS-13445
 URL: https://issues.apache.org/jira/browse/HDFS-13445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, webhdfs
Affects Versions: 2.8.3
Reporter: Ranith Sardar
Assignee: Ranith Sardar









[jira] [Created] (HDFS-13444) Ozone: Fix checkstyle issues in HDFS-7240

2018-04-13 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDFS-13444:
--

 Summary: Ozone: Fix checkstyle issues in HDFS-7240
 Key: HDFS-13444
 URL: https://issues.apache.org/jira/browse/HDFS-13444
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Attachments: HDFS-7240.007.patch








[jira] [Created] (HDFS-13443) Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-04-13 Thread Mohammad Arshad (JIRA)
Mohammad Arshad created HDFS-13443:
--

 Summary: Update mount table cache immediately after changing 
(add/update/remove) mount table entries.
 Key: HDFS-13443
 URL: https://issues.apache.org/jira/browse/HDFS-13443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Reporter: Mohammad Arshad


Currently the mount table cache is updated periodically; by default the cache 
is updated every minute. After a change in the mount table, user operations 
may still use the old mount table. This is a bit wrong.

To update the mount table cache immediately, maybe we can do the following:
 * *Add a refresh API in MountTableManager which will update the mount table 
cache.*
 * *When there is a change in the mount table entries, the router admin server 
can update its cache and ask other routers to update their caches*. For 
example, if there are three routers R1, R2, R3 in a cluster, then the add 
mount table entry API, at the admin server side, will perform the following 
sequence of actions:
 ## user submits an add mount table entry request on R1
 ## R1 adds the mount table entry in the state store
 ## R1 calls the refresh API on R2
 ## R1 calls the refresh API on R3
 ## R1 directly refreshes its own cache
 ## the add mount table entry response is sent back to the user.

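A minimal Java sketch of the fan-out described in the steps above. All names here are hypothetical stand-ins, not the actual RBF API; the proposal puts the real refresh API in MountTableManager.

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model of the proposed flow: the admin server persists the entry,
// then asks every router (itself included) to refresh its local cache.
// All names are hypothetical; the real API would live in MountTableManager.
public class MountTableRefreshSketch {
  interface Router {
    // Proposed refresh API: reload the mount table cache from the state store.
    void refreshMountTableCache();
  }

  static class StateStore {
    final ConcurrentMap<String, String> entries = new ConcurrentHashMap<>();
    void addEntry(String src, String dest) { entries.put(src, dest); }
  }

  static class AdminServer {
    private final StateStore store;
    private final List<Router> routers;  // R1, R2, R3, ...

    AdminServer(StateStore store, List<Router> routers) {
      this.store = store;
      this.routers = routers;
    }

    // Steps 1-6 above: persist the entry, then fan out refresh calls.
    void addMountTableEntry(String src, String dest) {
      store.addEntry(src, dest);        // 2. write to the state store
      for (Router r : routers) {
        r.refreshMountTableCache();     // 3-5. refresh every router's cache
      }
      // 6. the response returns to the user once all caches are refreshed
    }
  }
}
```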

