[jira] [Created] (HDFS-13357) Improve AclException message "Invalid ACL: only directories may have a default ACL."

2018-03-27 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13357:
--

 Summary: Improve AclException message "Invalid ACL: only 
directories may have a default ACL."
 Key: HDFS-13357
 URL: https://issues.apache.org/jira/browse/HDFS-13357
 Project: Hadoop HDFS
  Issue Type: Improvement
 Environment: CDH 5.10.1, Kerberos, KMS, encryption at rest, Sentry, 
Hive
Reporter: Wei-Chiu Chuang


I found this warning message in an HDFS cluster:
{noformat}
2018-03-27 19:15:28,841 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
90 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setAcl from 
10.0.0.1:39508 Call#79376996
Retry#0: org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
directories may have a default ACL.
2018-03-27 19:15:28,841 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hive/host1.example@example.com (auth:KERBE
ROS) cause:org.apache.hadoop.hdfs.protocol.AclException: Invalid ACL: only 
directories may have a default ACL.
{noformat}
However, it doesn't tell me which file had the invalid ACL.

This cluster has Sentry enabled, so it is possible this invalid ACL doesn't 
come from HDFS, but from Sentry.

Filing this Jira to improve the message by including the file name in it.
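
For illustration, a minimal sketch of the intended change, assuming the check lives where the ACL spec is validated against the target inode (the class location, the {{containsDefaultEntries}} helper and the {{src}} variable are illustrative, not the actual Hadoop code path):

{code:java}
// Hypothetical sketch only: append the target path to the existing message so
// the operator can tell which file the rejected setAcl call was aimed at.
if (!inode.isDirectory() && containsDefaultEntries(aclSpec)) {
  throw new AclException(
      "Invalid ACL: only directories may have a default ACL. Path: " + src);
}
{code}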






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-03-27 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/419/

[Mar 26, 2018 4:42:51 PM] (inigoiri) HDFS-13204. RBF: Optimize name service 
safe mode icon. Contributed by
[Mar 26, 2018 5:21:35 PM] (eyang) YARN-8043.  Added the exception message for 
failed launches running
[Mar 26, 2018 5:45:29 PM] (xyao) HADOOP-15339. Support additional key/value 
propereties in JMX bean
[Mar 26, 2018 6:16:06 PM] (wangda) YARN-8062. yarn rmadmin -getGroups returns 
group from which the user has
[Mar 26, 2018 6:19:15 PM] (wangda) YARN-8068. Application Priority field causes 
NPE in app timeline publish
[Mar 26, 2018 6:20:16 PM] (wangda) YARN-8072. RM log is getting flooded with
[Mar 26, 2018 8:05:15 PM] (mackrorysd) HADOOP-15299. Bump Jackson 2 version to 
Jackson 2.9.x.
[Mar 26, 2018 9:30:11 PM] (haibochen) YARN-7794. SLSRunner is not loading 
timeline service jars, causing
[Mar 26, 2018 9:55:53 PM] (rkanter) MAPREDUCE-6441. Improve temporary directory 
name generation in
[Mar 26, 2018 10:46:31 PM] (eyang) YARN-8018.  Added support for initiating 
yarn service upgrade.  
[Mar 26, 2018 10:59:32 PM] (xiao) HADOOP-15313. TestKMS should close providers.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestDNFencing 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   

[jira] [Created] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-03-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13356:
-

 Summary: Balancer:Set default value of minBlockSize to 10mb 
 Key: HDFS-13356
 URL: https://issues.apache.org/jira/browse/HDFS-13356
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


 It seems we can run into a problem during a rolling upgrade with this.
The Balancer is upgraded after the NameNodes, so once the NN is upgraded it will 
expect the {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot send 
it yet, so the NN will use the default, which you set to 0, and will unexpectedly 
start sending small blocks to the Balancer. So we should
 # either change the default in protobuf to 10 MB,
 # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use the 
configuration variable 
{{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}} (a rough sketch of 
this option follows below).
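
A rough sketch of option 2, assuming the {{getBlocks}} handler in {{NameNodeRpcServer}} can fall back to the configured value; the {{DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_DEFAULT}} constant and the surrounding call are assumptions for illustration, not the actual patch:

{code:java}
// Hypothetical sketch: a minBlockSize of 0 from an old Balancer means
// "not sent", so fall back to the configured minimum instead of using 0.
long effectiveMinBlockSize = minBlockSize > 0 ? minBlockSize
    : getConf().getLongBytes(
        DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY,
        DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_DEFAULT);
return namesystem.getBlocks(datanode, size, effectiveMinBlockSize);
{code}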

If you agree, we should create a follow-up jira. I wanted to backport this down 
the chain of branches, but this upgrade scenario is stopping me.

[~barnaul] made this comment in the HDFS-13222 jira:

https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855






[jira] [Created] (HDFS-13355) Create IO provider abstraction for hdsl

2018-03-27 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-13355:
-

 Summary: Create IO provider abstraction for hdsl
 Key: HDFS-13355
 URL: https://issues.apache.org/jira/browse/HDFS-13355
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: HDFS-7240
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: HDFS-7240


Create an abstraction like {{FileIoProvider}} for HDSL to handle disk failures and 
other I/O issues.
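
A minimal interface sketch of what such an abstraction could look like; the name {{HdslIoProvider}} and its methods are illustrative only, not a proposed API:

{code:java}
import java.io.File;
import java.io.FileDescriptor;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/**
 * Hypothetical sketch: one place to wrap container disk I/O so failures can be
 * counted, surfaced, and fault-injected in tests. Names are illustrative.
 */
public interface HdslIoProvider {
  OutputStream openForWrite(File chunkFile) throws IOException;
  InputStream openForRead(File chunkFile) throws IOException;
  void sync(FileDescriptor fd) throws IOException;
  boolean delete(File path);
}
{code}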






[jira] [Created] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM

2018-03-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13354:
-

 Summary: Add config for min number of data nodes to come out of 
chill mode in SCM
 Key: HDFS-13354
 URL: https://issues.apache.org/jira/browse/HDFS-13354
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Currently SCM comes out of chill mode as soon as a single datanode reports in. We 
need to support requiring a configurable number of known datanodes to report 
before SCM comes out of chill mode.
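
A minimal sketch of how the check could look once such a config is added; the key name, default, and surrounding method are assumptions for illustration, not the actual SCM code:

{code:java}
// Hypothetical sketch: exit chill mode only after a configurable number of
// datanodes have reported in, defaulting to 1 to keep today's behaviour.
int minDatanodes = conf.getInt("ozone.scm.chillmode.min.datanodes", 1);
if (reportedDatanodeCount >= minDatanodes) {
  exitChillMode();
}
{code}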






Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to trunk

2018-03-27 Thread Anu Engineer
Would it be possible to add a maven flag like -skipShade that helps reduce the 
compile time for people who do not need to build libhdfs++?

Thanks
Anu


From: Jim Clampffer 
Date: Tuesday, March 27, 2018 at 11:09 AM
To: Eric Badger 
Cc: Deepak Majeti , Jitendra Pandey 
, Anu Engineer , Mukul 
Kumar Singh , Owen O'Malley , 
Chris Douglas , Hdfs-dev 
Subject: Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to trunk

Hi Eric,
There isn't a way to completely skip compiling libhdfs++ as part of the native 
build.  You could pass -Dnative_cmake_args="-DHDFSPP_LIBRARY_ONLY=TRUE" to 
maven to avoid building all of the libhdfs++ tests, examples, and tools though. 
 That cut the native client build time from 4:10 to 2:20 for me.
-Jim

On Tue, Mar 27, 2018 at 12:25 PM, Eric Badger 
> wrote:
Is there a way to skip the libhdfs++ compilation during a native build? I just 
went to build native trunk to test out some container-executor changes and it 
spent 7:49 minutes out of 14:31 minutes in Apache Hadoop HDFS Native Client. 
For me it basically doubled the compilation time.

Eric

On Fri, Mar 16, 2018 at 4:01 PM, Deepak Majeti 
> wrote:
Thanks for all your hard work on getting this feature (with > 200
sub-tasks) in James!

On Fri, Mar 16, 2018 at 12:05 PM, Jim Clampffer 
>
wrote:

> With 6 +1s, 0 0s, and 0 -1s the vote passes.  I'll be merging this into
> trunk shortly.
>
> Thanks everyone who participated in the discussion and vote!  And many
> thanks to everyone who contributed code and feedback throughout the
> development process! Particularly Bob, Anatoli, Xiaowei and Deepak who
> provided lots of large pieces of code as well as folks like Owen, Chris D,
> Allen, and Stephen W who provided various support and guidance with the
> Apache process and project design.
>
> On Wed, Mar 14, 2018 at 1:32 PM, Jitendra Pandey 
> 
> >
> wrote:
>
> > +1 (binding)
> >
> > On 3/14/18, 9:57 AM, "Anu Engineer" 
> > > wrote:
> >
> > +1 (binding). Thanks for all the hard work and getting this client
> > ready.
> > It is nice to have an official and supported native client for HDFS.
> >
> > Thanks
> > Anu
> >
> > On 3/13/18, 8:16 PM, "Mukul Kumar Singh" 
> > >
> > wrote:
> >
> > +1 (binding)
> >
> > Thanks,
> > Mukul
> >
> > On 14/03/18, 2:06 AM, "Owen O'Malley" 
> > >
> > wrote:
> >
> > +1 (binding)
> >
> > .. Owen
> >
> > On Sun, Mar 11, 2018 at 6:20 PM, Chris Douglas <
> > cdoug...@apache.org> wrote:
> >
> > > +1 (binding) -C
> > >
> > > On Thu, Mar 8, 2018 at 9:31 AM, Jim Clampffer <
> > james.clampf...@gmail.com>
> > > wrote:
> > > > Hi Everyone,
> > > >
> > > > The feedback was generally positive on the discussion
> > thread [1] so I'd
> > > > like to start a formal vote for merging HDFS-8707
> > (libhdfs++) into trunk.
> > > > The vote will be open for 7 days and end 6PM EST on
> > 3/15/18.
> > > >
> > > > This branch includes a C++ implementation of an HDFS
> > client for use in
> > > > applications that don't run an in-process JVM.  Right now
> > the branch only
> > > > supports reads and metadata calls.
> > > >
> > > > Features (paraphrasing the list from the discussion
> > thread):
> > > > -Avoiding the JVM means applications that use libhdfs++
> > can explicitly
> > > > control resources (memory, FDs, threads).  The driving
> > goal for this
> > > > project was to let C/C++ applications access HDFS while
> > maintaining a
> > > > single heap.
> > > > -Includes support for Kerberos authentication.
> > > > -Includes a libhdfs/libhdfs3 compatible C API as well as
> a
> > C++ API that
> > > > supports asynchronous operations.  Applications that only
> > do reads may be
> > > > able to use this as a drop in replacement for libhdfs.
> > > > -Asynchronous IO is built on top of boost::asio which in
> > turn uses
> > > > select/epoll so many sockets can be monitored from a
> > single thread (or
> > > > thread pool) rather than spawning a thread to sleep on a
> > 

Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to trunk

2018-03-27 Thread Jim Clampffer
Hi Eric,

There isn't a way to completely skip compiling libhdfs++ as part of the
native build.  You could pass
-Dnative_cmake_args="-DHDFSPP_LIBRARY_ONLY=TRUE" to maven to avoid building
all of the libhdfs++ tests, examples, and tools though.  That cut the
native client build time from 4:10 to 2:20 for me.
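
For reference, a full invocation along those lines might look something like the
following; the -Pnative and -DskipTests flags are just the usual native-build
options and your exact command may differ:

    mvn package -Pnative -DskipTests -Dnative_cmake_args="-DHDFSPP_LIBRARY_ONLY=TRUE"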

-Jim

On Tue, Mar 27, 2018 at 12:25 PM, Eric Badger  wrote:

> Is there a way to skip the libhdfs++ compilation during a native build? I
> just went to build native trunk to test out some container-executor changes
> and it spent 7:49 minutes out of 14:31 minutes in Apache Hadoop HDFS Native
> Client. For me it basically doubled the compilation time.
>
> Eric
>
> On Fri, Mar 16, 2018 at 4:01 PM, Deepak Majeti 
> wrote:
>
>> Thanks for all your hard work on getting this feature (with > 200
>> sub-tasks) in James!
>>
>> On Fri, Mar 16, 2018 at 12:05 PM, Jim Clampffer <
>> james.clampf...@gmail.com>
>> wrote:
>>
>> > With 6 +1s, 0 0s, and 0 -1s the vote passes.  I'll be merging this into
>> > trunk shortly.
>> >
>> > Thanks everyone who participated in the discussion and vote!  And many
>> > thanks to everyone who contributed code and feedback throughout the
>> > development process! Particularly Bob, Anatoli, Xiaowei and Deepak who
>> > provided lots of large pieces of code as well as folks like Owen, Chris
>> D,
>> > Allen, and Stephen W who provided various support and guidance with the
>> > Apache process and project design.
>> >
>> > On Wed, Mar 14, 2018 at 1:32 PM, Jitendra Pandey <
>> jiten...@hortonworks.com
>> > >
>> > wrote:
>> >
>> > > +1 (binding)
>> > >
>> > > On 3/14/18, 9:57 AM, "Anu Engineer" 
>> wrote:
>> > >
>> > > +1 (binding). Thanks for all the hard work and getting this client
>> > > ready.
>> > > It is nice to have an official and supported native client for
>> HDFS.
>> > >
>> > > Thanks
>> > > Anu
>> > >
>> > > On 3/13/18, 8:16 PM, "Mukul Kumar Singh" 
>> > > wrote:
>> > >
>> > > +1 (binding)
>> > >
>> > > Thanks,
>> > > Mukul
>> > >
>> > > On 14/03/18, 2:06 AM, "Owen O'Malley" > >
>> > > wrote:
>> > >
>> > > +1 (binding)
>> > >
>> > > .. Owen
>> > >
>> > > On Sun, Mar 11, 2018 at 6:20 PM, Chris Douglas <
>> > > cdoug...@apache.org> wrote:
>> > >
>> > > > +1 (binding) -C
>> > > >
>> > > > On Thu, Mar 8, 2018 at 9:31 AM, Jim Clampffer <
>> > > james.clampf...@gmail.com>
>> > > > wrote:
>> > > > > Hi Everyone,
>> > > > >
>> > > > > The feedback was generally positive on the discussion
>> > > thread [1] so I'd
>> > > > > like to start a formal vote for merging HDFS-8707
>> > > (libhdfs++) into trunk.
>> > > > > The vote will be open for 7 days and end 6PM EST on
>> > > 3/15/18.
>> > > > >
>> > > > > This branch includes a C++ implementation of an HDFS
>> > > client for use in
>> > > > > applications that don't run an in-process JVM.  Right
>> now
>> > > the branch only
>> > > > > supports reads and metadata calls.
>> > > > >
>> > > > > Features (paraphrasing the list from the discussion
>> > > thread):
>> > > > > -Avoiding the JVM means applications that use
>> libhdfs++
>> > > can explicitly
>> > > > > control resources (memory, FDs, threads).  The driving
>> > > goal for this
>> > > > > project was to let C/C++ applications access HDFS
>> while
>> > > maintaining a
>> > > > > single heap.
>> > > > > -Includes support for Kerberos authentication.
>> > > > > -Includes a libhdfs/libhdfs3 compatible C API as well
>> as
>> > a
>> > > C++ API that
>> > > > > supports asynchronous operations.  Applications that
>> only
>> > > do reads may be
>> > > > > able to use this as a drop in replacement for libhdfs.
>> > > > > -Asynchronous IO is built on top of boost::asio which
>> in
>> > > turn uses
>> > > > > select/epoll so many sockets can be monitored from a
>> > > single thread (or
>> > > > > thread pool) rather than spawning a thread to sleep
>> on a
>> > > blocked socket.
>> > > > > -Includes a set of utilities written in C++ that
>> mirror
>> > > the CLI tools
>> > > > (e.g.
>> > > > > ./hdfs dfs -ls).  These have a 3 order of magnitude
>> lower
>> > > startup time
>> > > > than
>> > > > > java client which is useful for scripts that need to
>> work
>> > > with many
>> > > > files.
>> > > > > -Support for cancelable reads that release associated
>> > > resources
>> > > > > immediately.  Useful for applications that need to be
>> > > responsive to
>> > > > > interactive 

Re: [VOTE] Adopt HDSL as a new Hadoop subproject

2018-03-27 Thread Vinod Kumar Vavilapalli
Glad to see consensus on this proposal.

This new subproject will hopefully continue Hadoop's evolution forward (dare I 
say the biggest one since YARN) and also intends to accomplish this with 
minimal project overhead.

+1 binding.

Thanks
+Vinod

> On Mar 20, 2018, at 11:20 AM, Owen O'Malley  wrote:
> 
> All,
> 
> Following our discussions on the previous thread (Merging branch HDFS-7240
> to trunk), I'd like to propose the following:
> 
> * HDSL become a subproject of Hadoop.
> * HDSL will release separately from Hadoop. Hadoop releases will not
> contain HDSL and vice versa.
> * HDSL will get its own jira instance so that the release tags stay
> separate.
> * On trunk (as opposed to release branches) HDSL will be a separate module
> in Hadoop's source tree. This will enable the HDSL to work on their trunk
> and the Hadoop trunk without making releases for every change.
> * Hadoop's trunk will only build HDSL if a non-default profile is enabled.
> * When Hadoop creates a release branch, the RM will delete the HDSL module
> from the branch.
> * HDSL will have their own Yetus checks and won't cause failures in the
> Hadoop patch check.
> 
> I think this accomplishes most of the goals of encouraging HDSL development
> while minimizing the potential for disruption of HDFS development.
> 
> The vote will run the standard 7 days and requires a lazy 2/3 vote. PMC
> votes are binding, but everyone is encouraged to vote.
> 
> +1 (binding)
> 
> .. Owen





Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to trunk

2018-03-27 Thread Eric Badger
Is there a way to skip the libhdfs++ compilation during a native build? I
just went to build native trunk to test out some container-executor changes
and it spent 7:49 minutes out of 14:31 minutes in Apache Hadoop HDFS Native
Client. For me it basically doubled the compilation time.

Eric

On Fri, Mar 16, 2018 at 4:01 PM, Deepak Majeti 
wrote:

> Thanks for all your hard work on getting this feature (with > 200
> sub-tasks) in James!
>
> On Fri, Mar 16, 2018 at 12:05 PM, Jim Clampffer  >
> wrote:
>
> > With 6 +1s, 0 0s, and 0 -1s the vote passes.  I'll be merging this into
> > trunk shortly.
> >
> > Thanks everyone who participated in the discussion and vote!  And many
> > thanks to everyone who contributed code and feedback throughout the
> > development process! Particularly Bob, Anatoli, Xiaowei and Deepak who
> > provided lots of large pieces of code as well as folks like Owen, Chris
> D,
> > Allen, and Stephen W who provided various support and guidance with the
> > Apache process and project design.
> >
> > On Wed, Mar 14, 2018 at 1:32 PM, Jitendra Pandey <
> jiten...@hortonworks.com
> > >
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On 3/14/18, 9:57 AM, "Anu Engineer"  wrote:
> > >
> > > +1 (binding). Thanks for all the hard work and getting this client
> > > ready.
> > > It is nice to have an official and supported native client for
> HDFS.
> > >
> > > Thanks
> > > Anu
> > >
> > > On 3/13/18, 8:16 PM, "Mukul Kumar Singh" 
> > > wrote:
> > >
> > > +1 (binding)
> > >
> > > Thanks,
> > > Mukul
> > >
> > > On 14/03/18, 2:06 AM, "Owen O'Malley" 
> > > wrote:
> > >
> > > +1 (binding)
> > >
> > > .. Owen
> > >
> > > On Sun, Mar 11, 2018 at 6:20 PM, Chris Douglas <
> > > cdoug...@apache.org> wrote:
> > >
> > > > +1 (binding) -C
> > > >
> > > > On Thu, Mar 8, 2018 at 9:31 AM, Jim Clampffer <
> > > james.clampf...@gmail.com>
> > > > wrote:
> > > > > Hi Everyone,
> > > > >
> > > > > The feedback was generally positive on the discussion
> > > thread [1] so I'd
> > > > > like to start a formal vote for merging HDFS-8707
> > > (libhdfs++) into trunk.
> > > > > The vote will be open for 7 days and end 6PM EST on
> > > 3/15/18.
> > > > >
> > > > > This branch includes a C++ implementation of an HDFS
> > > client for use in
> > > > > applications that don't run an in-process JVM.  Right
> now
> > > the branch only
> > > > > supports reads and metadata calls.
> > > > >
> > > > > Features (paraphrasing the list from the discussion
> > > thread):
> > > > > -Avoiding the JVM means applications that use libhdfs++
> > > can explicitly
> > > > > control resources (memory, FDs, threads).  The driving
> > > goal for this
> > > > > project was to let C/C++ applications access HDFS while
> > > maintaining a
> > > > > single heap.
> > > > > -Includes support for Kerberos authentication.
> > > > > -Includes a libhdfs/libhdfs3 compatible C API as well
> as
> > a
> > > C++ API that
> > > > > supports asynchronous operations.  Applications that
> only
> > > do reads may be
> > > > > able to use this as a drop in replacement for libhdfs.
> > > > > -Asynchronous IO is built on top of boost::asio which
> in
> > > turn uses
> > > > > select/epoll so many sockets can be monitored from a
> > > single thread (or
> > > > > thread pool) rather than spawning a thread to sleep on
> a
> > > blocked socket.
> > > > > -Includes a set of utilities written in C++ that mirror
> > > the CLI tools
> > > > (e.g.
> > > > > ./hdfs dfs -ls).  These have a 3 order of magnitude
> lower
> > > startup time
> > > > than
> > > > > java client which is useful for scripts that need to
> work
> > > with many
> > > > files.
> > > > > -Support for cancelable reads that release associated
> > > resources
> > > > > immediately.  Useful for applications that need to be
> > > responsive to
> > > > > interactive users.
> > > > >
> > > > > Other points:
> > > > > -This is almost all new code in a new subdirectory.  No
> > > Java source for
> > > > the
> > > > > rest of hadoop was changed so there's no risk of
> > > regressions there.  The
> > > > > only changes outside of that subdirectory were
> > integrating
> > > the build in
> > > > > some of the pom files and adding a couple dependencies
> to
> > > the DockerFile.
> > > > > -The library has had 

[RESULT][VOTE] Adopt HDSL as a new Hadoop subproject

2018-03-27 Thread Owen O'Malley
Ok, with a lot of +1's, one +0, and no -1's the vote passes.

We have a new subproject!

We should resolve the final name and then create a jira instance for it.

Thanks everyone,
   Owen


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-03-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/

[Mar 26, 2018 10:33:07 AM] (yqlin) HDFS-13291. RBF: Implement available space 
based OrderResolver.
[Mar 26, 2018 4:42:51 PM] (inigoiri) HDFS-13204. RBF: Optimize name service 
safe mode icon. Contributed by
[Mar 26, 2018 5:21:35 PM] (eyang) YARN-8043.  Added the exception message for 
failed launches running
[Mar 26, 2018 5:45:29 PM] (xyao) HADOOP-15339. Support additional key/value 
propereties in JMX bean
[Mar 26, 2018 6:16:06 PM] (wangda) YARN-8062. yarn rmadmin -getGroups returns 
group from which the user has
[Mar 26, 2018 6:19:15 PM] (wangda) YARN-8068. Application Priority field causes 
NPE in app timeline publish
[Mar 26, 2018 6:20:16 PM] (wangda) YARN-8072. RM log is getting flooded with
[Mar 26, 2018 8:05:15 PM] (mackrorysd) HADOOP-15299. Bump Jackson 2 version to 
Jackson 2.9.x.
[Mar 26, 2018 9:30:11 PM] (haibochen) YARN-7794. SLSRunner is not loading 
timeline service jars, causing
[Mar 26, 2018 9:55:53 PM] (rkanter) MAPREDUCE-6441. Improve temporary directory 
name generation in
[Mar 26, 2018 10:46:31 PM] (eyang) YARN-8018.  Added support for initiating 
yarn service upgrade.  
[Mar 26, 2018 10:59:32 PM] (xiao) HADOOP-15313. TestKMS should close providers.




-1 overall


The following subsystems voted -1:
findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
  

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-compile-javac-root.txt  [288K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-checkstyle-root.txt  [17M]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/whitespace-eol.txt  [9.2M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/whitespace-tabs.txt  [1.1M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/xml.txt  [4.0K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html  [8.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/diff-javadoc-javadoc-root.txt  [760K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [416K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [48K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/733/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt  [84K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


Re: [VOTE] Adopt HDSL as a new Hadoop subproject

2018-03-27 Thread Rakesh Radhakrishnan
+1 for the sub-project idea. Thanks to everyone who contributed!

Regards,
Rakesh

On Tue, Mar 27, 2018 at 4:46 PM, Jack Liu  wrote:

>  +1 (non-binding)
>
>
> On Tue, Mar 27, 2018 at 2:16 AM, Tsuyoshi Ozawa  wrote:
>
> > +1(binding),
> >
> > - Tsuyoshi
> >
> > On Tue, Mar 20, 2018 at 14:21 Owen O'Malley 
> > wrote:
> >
> > > All,
> > >
> > > Following our discussions on the previous thread (Merging branch
> > HDFS-7240
> > > to trunk), I'd like to propose the following:
> > >
> > > * HDSL become a subproject of Hadoop.
> > > * HDSL will release separately from Hadoop. Hadoop releases will not
> > > contain HDSL and vice versa.
> > > * HDSL will get its own jira instance so that the release tags stay
> > > separate.
> > > * On trunk (as opposed to release branches) HDSL will be a separate
> > module
> > > in Hadoop's source tree. This will enable the HDSL to work on their
> trunk
> > > and the Hadoop trunk without making releases for every change.
> > > * Hadoop's trunk will only build HDSL if a non-default profile is
> > enabled.
> > > * When Hadoop creates a release branch, the RM will delete the HDSL
> > module
> > > from the branch.
> > > * HDSL will have their own Yetus checks and won't cause failures in the
> > > Hadoop patch check.
> > >
> > > I think this accomplishes most of the goals of encouraging HDSL
> > development
> > > while minimizing the potential for disruption of HDFS development.
> > >
> > > The vote will run the standard 7 days and requires a lazy 2/3 vote. PMC
> > > votes are binding, but everyone is encouraged to vote.
> > >
> > > +1 (binding)
> > >
> > > .. Owen
> > >
> >
>
>
>
> --
>


[jira] [Created] (HDFS-13353) TestRouterWebHDFSContractCreate failed

2018-03-27 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13353:
---

 Summary: TestRouterWebHDFSContractCreate failed
 Key: HDFS-13353
 URL: https://issues.apache.org/jira/browse/HDFS-13353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


{noformat}
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.685 s <<< FAILURE! - in org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
[ERROR] testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)  Time elapsed: 0.147 s  <<< ERROR!
java.io.FileNotFoundException: expected path to be visible before file closed: not found webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in webhdfs://0.0.0.0:43796/test
    at org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
    at org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
    at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
    at org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.io.FileNotFoundException: File does not exist: /test/testCreatedFileIsVisibleOnFlush
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
    at org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
    ... 15 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /test/testCreatedFileIsVisibleOnFlush
    at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:83)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:510)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:739)
    at 

Re: [VOTE] Adopt HDSL as a new Hadoop subproject

2018-03-27 Thread Jack Liu
 +1 (non-binding)


On Tue, Mar 27, 2018 at 2:16 AM, Tsuyoshi Ozawa  wrote:

> +1(binding),
>
> - Tsuyoshi
>
> On Tue, Mar 20, 2018 at 14:21 Owen O'Malley 
> wrote:
>
> > All,
> >
> > Following our discussions on the previous thread (Merging branch
> HDFS-7240
> > to trunk), I'd like to propose the following:
> >
> > * HDSL become a subproject of Hadoop.
> > * HDSL will release separately from Hadoop. Hadoop releases will not
> > contain HDSL and vice versa.
> > * HDSL will get its own jira instance so that the release tags stay
> > separate.
> > * On trunk (as opposed to release branches) HDSL will be a separate
> module
> > in Hadoop's source tree. This will enable the HDSL to work on their trunk
> > and the Hadoop trunk without making releases for every change.
> > * Hadoop's trunk will only build HDSL if a non-default profile is
> enabled.
> > * When Hadoop creates a release branch, the RM will delete the HDSL
> module
> > from the branch.
> > * HDSL will have their own Yetus checks and won't cause failures in the
> > Hadoop patch check.
> >
> > I think this accomplishes most of the goals of encouraging HDSL
> development
> > while minimizing the potential for disruption of HDFS development.
> >
> > The vote will run the standard 7 days and requires a lazy 2/3 vote. PMC
> > votes are binding, but everyone is encouraged to vote.
> >
> > +1 (binding)
> >
> > .. Owen
> >
>



--


[jira] [Created] (HDFS-13352) RBF: add xsl stylesheet for hdfs-rbf-default.xml

2018-03-27 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13352:
---

 Summary: RBF: add xsl stylesheet for hdfs-rbf-default.xml
 Key: HDFS-13352
 URL: https://issues.apache.org/jira/browse/HDFS-13352
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma





