Re: adding contributor roles timing out again

2016-09-07 Thread Akira Ajisaka
The Infra team created a "Contributors 1" role. If you cannot add people to 
the "Contributors" role, use "Contributors 1" instead.


https://issues.apache.org/jira/browse/INFRA-12487

> I've created the "Contributors 1" role which you should be able to 
add people to. Same permissions as "Contributors." This applies to all 
of the projects using Hadoop Permissions (includes common and HDFS.)


-Akira

On 8/24/16 14:16, Akira Ajisaka wrote:

How about we try to do some grouping, where we just have sub groups,
hadoop-contributors-1, hadoop-contributors-2, ... and add them as
contributors; we then edit group membership, adding a new group when the
current one gets above some chosen size limit?


Agreed. Filed INFRA-12487.

-Akira

On 8/24/16 02:00, Chris Trezzo wrote:

Thanks Chris!

On Mon, Aug 22, 2016 at 11:05 PM, Chris Nauroth

wrote:


Chris, I have taken care of adding you in the Contributors role on the
HADOOP project.



--Chris Nauroth



*From: *Chris Trezzo 
*Date: *Monday, August 22, 2016 at 3:20 PM
*To: *Weiqing Yang 
*Cc: *Chris Nauroth , Steve Loughran <
ste...@hortonworks.com>, "common-dev@hadoop.apache.org" <
common-dev@hadoop.apache.org>
*Subject: *Re: adding contributor roles timing out again



Would it be possible for someone to add me (username: ctrezzo)? It
looks like I am not on the list and cannot edit JIRAs in the HADOOP
project. Thank you!



On Mon, Aug 22, 2016 at 1:20 AM, Weiqing Yang 
wrote:

Thanks a lot, Chris and Steve!





On 8/21/16, 6:38 AM, "Chris Nauroth"  wrote:


I just took care of adding WeiqingYang.

--Chris Nauroth

On 8/21/16, 2:56 AM, "Steve Loughran"  wrote:


   > On 18 Aug 2016, at 16:39, Chris Nauroth 

wrote:

   >
   > It’s odd that Firefox didn’t work for you.  My standard workaround

is to use Firefox, and that’s what I just did successfully for
shenyinjie.

   >
   > It’s quite mysterious to me that this problem would be

browser-specific at all though.

   >

   Could you add WeiqingYang?

   It's not working for me in Chrome, FF, or Safari from an OS X box.

   It's clear that there are too many people in that contributor group.
We have hit a scale limit in the Hadoop coding process.


   How about we try to do some grouping, where we just have sub groups,

hadoop-contributors-1, hadoop-contributors-2, ... and add them as
contributors; we then edit group membership, adding a new group when the
current one gets above some chosen size limit?











-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org







[jira] [Reopened] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine

2016-09-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reopened HADOOP-13218:


Reopening it as discussed; will revert the work soon.

> Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
> -
>
> Key: HADOOP-13218
> URL: https://issues.apache.org/jira/browse/HADOOP-13218
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13218-v01.patch, HADOOP-13218-v02.patch
>
>
> Patch for HADOOP-12579 contains lots of work to migrate the remaining 
> Hadoop-side tests to the new RPC engine, plus nice cleanups. HADOOP-12579 
> will be reverted to allow some time for YARN/MapReduce side related changes; 
> this issue is opened to recommit most of the test-related work in 
> HADOOP-12579 for easier tracking and maintenance, as other sub-tasks did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13587) distcp.map.bandwidth.mb is overwritten even when -bandwidth flag isn't set

2016-09-07 Thread Zoran Dimitrijevic (JIRA)
Zoran Dimitrijevic created HADOOP-13587:
---

 Summary: distcp.map.bandwidth.mb is overwritten even when 
-bandwidth flag isn't set
 Key: HADOOP-13587
 URL: https://issues.apache.org/jira/browse/HADOOP-13587
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 3.0.0-alpha1
Reporter: Zoran Dimitrijevic
Priority: Minor


distcp.map.bandwidth.mb exists in the distcp-defaults.xml config file, but it 
is not honored even when it is set. The current code always overwrites it with 
either the default value (a Java constant) or with the -bandwidth command line 
option.

The expected behavior (at least as I would expect it) is to honor the value 
set in distcp-defaults.xml unless the user explicitly specifies the -bandwidth 
command line flag. If there is no value set in the .xml file or as a command 
line flag, then the constant from the Java code should be used.

Additionally, I would expect that we also try to read values from 
distcp-site.xml, similar to other Hadoop systems.
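The precedence described above can be sketched as follows. This is a minimal, 
self-contained illustration, not DistCp's actual code; the method name and the 
constant value are hypothetical stand-ins:

```java
/**
 * Sketch of the configuration precedence the report asks for:
 * explicit -bandwidth flag > value from distcp-defaults.xml /
 * distcp-site.xml > hard-coded Java constant.
 */
public class BandwidthPrecedence {
    // Stand-in for the compiled-in default; the real constant may differ.
    static final int DEFAULT_BANDWIDTH_MB = 100;

    static int resolveBandwidth(Integer cliFlag, Integer xmlValue) {
        if (cliFlag != null) {
            return cliFlag;          // -bandwidth was given explicitly: it wins
        }
        if (xmlValue != null) {
            return xmlValue;         // otherwise honor the .xml config value
        }
        return DEFAULT_BANDWIDTH_MB; // only then fall back to the constant
    }

    public static void main(String[] args) {
        System.out.println(resolveBandwidth(null, 50));   // xml value honored
        System.out.println(resolveBandwidth(25, 50));     // flag wins
        System.out.println(resolveBandwidth(null, null)); // constant fallback
    }
}
```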







[jira] [Resolved] (HADOOP-13191) FileSystem#listStatus should not return null

2016-09-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-13191.
-
Resolution: Duplicate

> FileSystem#listStatus should not return null
> 
>
> Key: HADOOP-13191
> URL: https://issues.apache.org/jira/browse/HADOOP-13191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13191.001.patch, HADOOP-13191.002.patch, 
> HADOOP-13191.003.patch, HADOOP-13191.004.patch
>
>
> This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} 
> contract does not indicate {{null}} is a valid return and some callers do not 
> test {{null}} before use:
> AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:
> {code}
> assertEquals("ls on an empty directory not of length 0", 0,
> fs.listStatus(subfolder).length);
> {code}
> ChecksumFileSystem#copyToLocalFile:
> {code}
>   FileStatus[] srcs = listStatus(src);
>   for (FileStatus srcFile : srcs) {
> {code}
> SimpleCopyListing#getFileStatus:
> {code}
>   FileStatus[] fileStatuses = fileSystem.listStatus(path);
>   if (excludeList != null && excludeList.size() > 0) {
> ArrayList fileStatusList = new ArrayList<>();
> for(FileStatus status : fileStatuses) {
> {code}
> IMHO, there is no good reason for {{listStatus}} to return {{null}}. It 
> should throw an IOException upon errors or return an empty list.
> To enforce the contract that null is an invalid return, update the javadoc 
> and leverage @Nullable/@NotNull/@Nonnull annotations.
> So far, I am only aware of the following functions that can return null:
> * RawLocalFileSystem#listStatus
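The never-return-null contract proposed above can be sketched as follows. 
This is an illustrative, self-contained example, not the actual patch; the 
class and method names are hypothetical, and the string array stands in for 
{{FileStatus[]}}:

```java
import java.util.Arrays;

/**
 * Sketch of the proposed contract: a listing call never hands callers
 * null, so for-each loops over the result cannot NPE.
 */
public class ListStatusContract {
    // Stand-in for a raw listing that may return null (as
    // RawLocalFileSystem#listStatus can today).
    static String[] rawListStatus(boolean simulateNull) {
        return simulateNull ? null : new String[] {"part-0000"};
    }

    /** Wrap a possibly-null listing into an always-iterable array. */
    static String[] safeListStatus(boolean simulateNull) {
        String[] result = rawListStatus(simulateNull);
        return result == null ? new String[0] : result;
    }

    public static void main(String[] args) {
        // Iterating the safe variant is always valid, even for the
        // null-returning case.
        System.out.println(Arrays.toString(safeListStatus(true)));
        System.out.println(Arrays.toString(safeListStatus(false)));
    }
}
```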







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-09-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/

[Sep 6, 2016 8:39:45 AM] (rohithsharmaks) YARN-5608. TestAMRMClient.setup() 
fails with ArrayOutOfBoundsException.
[Sep 7, 2016 9:05:33 AM] (kai.zheng) HADOOP-13218. Migrate other Hadoop side 
tests to prepare for removing
[Sep 6, 2016 2:31:45 PM] (vvasudev) YARN-5576. Allow resource localization 
while container is running.
[Sep 6, 2016 3:02:42 PM] (kihwal) HADOOP-13549. Eliminate intermediate buffer 
for server-side PB encoding.
[Sep 6, 2016 4:36:21 PM] (cnauroth) HADOOP-13447. Refactor S3AFileSystem to 
support introduction of separate
[Sep 6, 2016 4:51:55 PM] (wang) Add CHANGES and release notes for 3.0.0-alpha1 
to site
[Sep 6, 2016 5:38:04 PM] (cdouglas) HDFS-9847. HDFS configuration should accept 
time units. Contributed by
[Sep 6, 2016 6:02:39 PM] (cnauroth) HDFS-6962. ACL inheritance conflicts with 
umaskmode. Contributed by
[Sep 6, 2016 6:44:26 PM] (xiao) HDFS-10835. Fix typos in httpfs.sh. Contributed 
by John Zhuge.
[Sep 6, 2016 6:48:35 PM] (xiao) HDFS-10841. Remove duplicate or unused variable 
in appendFile().
[Sep 6, 2016 8:37:21 PM] (arp) HDFS-9038. DFS reserved space is erroneously 
counted towards non-DFS
[Sep 7, 2016 3:54:17 AM] (xiao) HADOOP-13558. UserGroupInformation created from 
a Subject incorrectly
[Sep 7, 2016 5:40:20 AM] (kasha) YARN-5616. Clean up WeightAdjuster. (Yufei Gu 
via kasha)




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 

Failed junit tests :

   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.mapred.TestMROpportunisticMaps 
   hadoop.mapred.TestReduceFetch 
   hadoop.mapred.TestMerge 
   hadoop.mapreduce.TestMapReduceLazyOutput 
   hadoop.mapred.TestMRIntermediateDataEncryption 
   hadoop.mapred.TestLazyOutput 
   hadoop.mapreduce.TestLargeSort 
   hadoop.mapred.TestReduceFetchFromPartialMem 
   hadoop.mapreduce.v2.TestMRJobsWithProfiler 
   hadoop.mapreduce.lib.output.TestJobOutputCommitter 
   hadoop.mapreduce.security.ssl.TestEncryptedShuffle 
   hadoop.mapreduce.v2.TestMROldApiJobs 
   hadoop.mapred.TestJobCleanup 
   hadoop.mapreduce.v2.TestSpeculativeExecution 
   hadoop.mapred.TestClusterMRNotification 
   hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken 
   hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities 
   hadoop.mapreduce.v2.TestMRJobs 
   hadoop.mapred.TestJobName 
   hadoop.mapreduce.TestMRJobClient 
   hadoop.mapred.TestClusterMapReduceTestCase 
   hadoop.mapred.TestAuditLogger 
   hadoop.mapreduce.security.TestMRCredentials 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.mapreduce.v2.TestMiniMRProxyUser 
   hadoop.mapreduce.v2.TestMRJobsWithHistoryService 
   hadoop.mapred.TestMiniMRClientCluster 
   hadoop.mapred.TestMiniMRChildTask 
   hadoop.mapreduce.TestChild 
   hadoop.mapreduce.security.TestBinaryTokenFile 
   hadoop.mapred.TestJobCounters 
   hadoop.streaming.TestMultipleCachefiles 
   hadoop.streaming.TestFileArgs 
   hadoop.streaming.TestSymLink 
   hadoop.streaming.TestMultipleArchiveFiles 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.mapred.gridmix.TestLoadJob 
   hadoop.mapred.gridmix.TestSleepJob 
   hadoop.mapred.gridmix.TestDistCacheEmulation 
   hadoop.tools.TestDistCh 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
   org.apache.hadoop.mapred.TestMiniMRClasspath 
   org.apache.hadoop.mapred.TestJobSysDirWithDFS 
   org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/157/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

  

[VOTE] Merge HADOOP-13341

2016-09-07 Thread Allen Wittenauer

I’d like to call for a vote, running for 5 days (ending Monday, September 12, 
2016 at 7AM PT), to merge the HADOOP-13341 feature branch into trunk. This 
branch was developed exclusively by me.  As usual with large shell script 
changes, it's been broken up into several smaller commits to make it easier to 
read.  The core of the functionality is almost entirely in hadoop-functions.sh, 
with the majority of the rest of the new additions being either documentation 
or test code. In addition, large swaths of code are removed from the hadoop, 
hdfs, mapred, and yarn executables.

Here's a quick summary:

* makes the rules around _OPTS consistent across all the projects
* makes it possible to provide custom _OPTS for every hadoop, hdfs, mapred, and 
yarn subcommand
* with the exception of deprecations, removes all of the custom daemon _OPTS 
handling sprinkled around the hadoop, hdfs, mapred, and yarn subcommands
* removes the custom handling of HADOOP_CLIENT_OPTS and makes it 
consistent for non-daemon subcommands
* makes the _USER blocker consistent with _OPTS as well as providing better 
documentation around this feature's existence.  Note that this is an 
incompatible change against -alpha1.
* by consolidating all of this code, makes it possible to finally fix a good 
chunk of the "directory name containing spaces blows up the bash code" problems 
that have been around since the beginning of the project

Thanks!





Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-09-07 Thread Steve Loughran

> On 6 Sep 2016, at 17:30, Andrew Wang  wrote:
> 
> Thanks everyone for voting and helping to validate alpha1. The VOTE closes
> with 16 +1s, 6 of them binding PMC votes, and no -1s.
> 
> I'll go ahead and wrap up the release, will send an announcement out likely
> tomorrow once the mirrors have caught up.
> 
> Best,
> Andrew
> 


Sorry I'm late with this; I was running all the tests last night and had to 
redo some today. Bandwidth issues.


If I hadn't missed the vote, I'd have gone

+0.5

I couldn't do enough due diligence to be confident all was well. But those bits 
I did do (S3, Azure, OpenStack) are all good.

- Checked out the source, rebuilt locally, and ran the hadoop-aws s3a test 
suite against S3 Ireland, OpenStack against Rackspace, and Azure against 
Azure. All well there.
- Built Slider. Compilation failed there because Container has added things 
that Slider's mock containers don't implement. This is well within the 
compatibility scopes, albeit inconvenient.

- I haven't done a full Spark test run as I don't have the time right now, and 
haven't been running it locally enough to be confident that all works. Sorry.

- I did do the SPARK-7481 cloud tests. These were *really* slow, but I was 
running the Hadoop trunk s3, azure, and swift tests in different windows; I 
suspect I was just using up too much CPU, RAM, and bandwidth.


The Windows build failed irrespective of whether I had -Pnative set or not. I 
think there's a script there that needs Windows support.
Were it a real release I'd veto it for that, as it meant I wouldn't be able to 
build the Windows native libraries for the good of everyone.

Filed: HADOOP-13586


Anyway, good to see the first 3.0 alpha out the door; let's see what surfaces 
in the field.

-Steve




[jira] [Created] (HADOOP-13586) Hadoop 3.0 doesn't build on windows

2016-09-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13586:
---

 Summary: Hadoop 3.0 doesn't build on windows
 Key: HADOOP-13586
 URL: https://issues.apache.org/jira/browse/HADOOP-13586
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha1
 Environment: Windows Server
Reporter: Steve Loughran


Builds on Windows fail, even before getting to the native bits.

It looks like dev-support/bin/dist-copynativelibs isn't Windows-ready.







[jira] [Created] (HADOOP-13585) shell rm command to not rename to ~/.Trash in object stores

2016-09-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13585:
---

 Summary: shell rm command to not rename to ~/.Trash in object 
stores
 Key: HADOOP-13585
 URL: https://issues.apache.org/jira/browse/HADOOP-13585
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Affects Versions: 2.8.0
Reporter: Steve Loughran


When you do a {{hadoop fs -rm s3a://bucket/large-file}} there's a long delay 
and then you are told that it's been moved to 
{{s3a://Users/stevel/.Trash/current/large-file}}, where it still incurs costs. 
You then need to delete that file using {{-skipTrash}}, because the {{fs 
-expunge}} command only works on the local fs: you can't point it at an object 
store unless that is the default FS.

I'd like an option to tell the shell that it should bypass the rename into 
trash on an FS-by-FS basis, and for {{fs -expunge}} to take a filesystem as an 
optional argument.



