Re: Yes/No newbie question on contributing

2016-08-31 Thread Andrew Wang
Hello Martin,

Sorry for the slow follow-up here. If you'd like, someone can give you edit
permissions on the wiki so you can make changes yourself. If so, please
provide your wiki username.

I also took a look at your document, and am having a hard time determining
how it differs from what is on the wiki now. Your doc still says
"Please make sure that all unit tests succeed before constructing your
patch and that no new javac compiler warnings are introduced by your
patch", which I believe is inaccurate based on our discussion above.

Best,
Andrew

On Thu, Jul 28, 2016 at 4:59 PM, Martin Rosse  wrote:

> I don't have permissions to edit the Wiki, but I've included a link below
> to my proposed revisions to the How To Contribute page. As a reminder,
> these changes are meant to make it clear that one does not need to run/pass
> *all* project unit tests before starting to write code or submit a patch.
> Knowing this as a newbie would have saved me a lot of time.
>
> I'm not sure whether my edits cover the suggestion of instructing folks to
> run the same checks as are done in the automated precommit builds...I don't
> know what those checks are. And we had not concluded whether to instruct
> folks as such or not...thoughts?
>
> https://docs.google.com/document/d/1wvGFQ9SgELwCnPanmZ4FmN_
> uNLW5iyMkVIr3A4CNrTk/edit
>
> Best,
> Martin
>
>
> On Tue, Jul 26, 2016 at 2:58 PM, Martin Rosse  wrote:
>
> > Thanks everyone...that helped. I'll go ahead and edit the Wiki to clarify
> > the expectation.
> >
> > I got a successful build using:
> >
> > ~/code/hadoop$  mvn install -DskipTests
> >
> > To respond to Vinod's questions:
> >
> > 
> >
> > I think the answer is trunk. I obtained the source code using:
> >
> > git clone git://git.apache.org/hadoop.git
> >
> > ...and the pom.xml in my source says version 3.0.0-alpha1-SNAPSHOT, and I
> > haven't tried to do anything with branches yet.
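A quick way to confirm which branch and version a fresh checkout is on (a sketch; assumes a standard git clone of the Hadoop source as above):

```shell
# Inside the hadoop source checkout: confirm branch and Maven version.
cd hadoop

# Current branch name (a fresh clone defaults to trunk).
git rev-parse --abbrev-ref HEAD

# Version declared in the top-level pom, e.g. 3.0.0-alpha1-SNAPSHOT.
grep -m1 '<version>' pom.xml
```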
> >
> > 
> >
> > You were right--without knowing any better I was running all the unit
> > tests...so I came across several errors. One error that I was able to fix
> > was apparently due to a newline in the etc/hosts file, as I learned from
> > https://issues.apache.org/jira/browse/HADOOP-10888. After my fix, a
> > subsequent build passed that unit test. But then a build subsequent to that
> > one caused the same error again, even though the newline was fixed.
> >
> > Another error I got when running mvn install without -DskipTests is
> > described in https://issues.apache.org/jira/browse/HADOOP-12611. This is
> > the type of error I thought would be worthy of ignoring.
> >
> > Thanks again for your time--much appreciated!
> >
> > -Martin
> >
> >
> >
> >
> > On Tue, Jul 26, 2016 at 1:27 PM, Sean Busbey 
> wrote:
> >
> >> The current HowToContribute guide expressly tells folks that they
> >> should ensure all the tests run and pass before and after their
> >> change.
> >>
> >> Sounds like we're due for an update if the expectation is now that
> >> folks should be using -DskipTests and runs on particular modules.
> >> Maybe we could instruct folks on running the same checks we'll do in
> >> the automated precommit builds?
> >>
> >> On Tue, Jul 26, 2016 at 1:47 PM, Vinod Kumar Vavilapalli
> >>  wrote:
> >> > The short answer is that it is expected to pass without any errors.
> >> >
> >> > On branch-2.x, that command passes cleanly without any errors, though
> >> > it takes north of 10 minutes. Note that I run it with -DskipTests - you
> >> > don't want to wait for all the unit tests to run; that'll take too much
> >> > time. I expect trunk to be the same too.
> >> >
> >> > Which branch are you running this against? What errors are you seeing?
> >> > If it is unit tests you are talking about, you can instead run with
> >> > -DskipTests, run only specific tests or all tests in the module you are
> >> > touching, make sure they pass, and then let the Jenkins infrastructure
> >> > run the remaining tests when you submit the patch.
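As a sketch of the workflow Vinod describes (the module path and test class name below are hypothetical examples, not from this thread):

```shell
# Build everything, skipping the unit tests.
mvn install -DskipTests

# Run only the tests in the module you are touching
# (hadoop-common-project/hadoop-common is an example module path).
mvn test -pl hadoop-common-project/hadoop-common

# Or run a single test class within that module
# (TestConfiguration is a hypothetical example).
mvn test -Dtest=TestConfiguration -pl hadoop-common-project/hadoop-common
```

The precommit Jenkins build then runs the full suite against the submitted patch.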
> >> >
> >> > +Vinod
> >> >
> >> >> On Jul 26, 2016, at 11:41 AM, Martin Rosse 
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> In the How To Contribute doc, it says:
> >> >>
> >> >> "Try getting the project to build and test locally before writing
> >> >> code"
> >> >>
> >> >> So, just to be 100% certain before I keep troubleshooting things, this
> >> >> means I should be able to run
> >> >>
> >> >> mvn clean install -Pdist -Dtar
> >> >>
> >> >> without getting any failures or errors at all...none...zero, right?
> >> >>
> >> >> I am surprised at how long this is taking as errors keep cropping up.
> >> >> Should I just expect it to really take many hours (already at 10+) to
> >> >> work through these issues? I am setting up a dev environment on an
> >> >> Ubuntu 14.04 64-bit desktop from the AWS marketplace running on EC2.
> >> >>
> >> >> It would seem it's an obvious YES answer, but given the time
> 

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-31 Thread Rakesh Radhakrishnan
Thanks for getting this out.

+1 (non-binding)

- downloaded and built tarball from source
- deployed an HDFS-HA cluster and tested a few EC file operations
- executed a few hdfs commands, including EC commands
- viewed the basic UI
- ran some of the sample jobs
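The EC checks above can be reproduced with commands along these lines (a sketch only: the directory and policy name are illustrative, and the exact `hdfs ec` subcommand names evolved across the 3.0 alpha releases):

```shell
# List the erasure coding policies known to the cluster.
hdfs ec -listPolicies

# Set an EC policy on a directory, then write a file into it
# (/ecdir and the RS-6-3-64k policy name are illustrative).
hdfs dfs -mkdir /ecdir
hdfs ec -setPolicy -path /ecdir -policy RS-6-3-64k
hdfs dfs -put localfile /ecdir/

# Confirm which policy the directory carries.
hdfs ec -getPolicy -path /ecdir
```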


Best Regards,
Rakesh
Intel

On Thu, Sep 1, 2016 at 6:19 AM, John Zhuge  wrote:

> +1 (non-binding)
>
> - Build source with Java 1.8.0_101 on Centos 6.6 without native
> - Verify license and notice using the shell script in HADOOP-13374
> - Deploy a pseudo cluster
> - Run basic dfs, distcp, ACL, webhdfs commands
> - Run MapReduce wordcount and pi examples
> - Run balancer
>
> Thanks,
> John
>
> John Zhuge
> Software Engineer, Cloudera
>
> On Wed, Aug 31, 2016 at 11:46 AM, Gangumalla, Uma <
> uma.ganguma...@intel.com>
> wrote:
>
> > +1 (binding).
> >
> > Overall it's a great effort, Andrew. Thank you for putting in all the
> > energy.
> >
> > Downloaded and built.
> > Ran some sample jobs.
> >
> > I would love to see all these efforts lead to a GA release of Hadoop
> > 3.x soon.
> >
> > Regards,
> > Uma
> >
> >
> > On 8/30/16, 8:51 AM, "Andrew Wang"  wrote:
> >
> > >Hi all,
> > >
> > >Thanks to the combined work of many, many contributors, here's an RC0 for
> > >3.0.0-alpha1:
> > >
> > >http://home.apache.org/~wang/3.0.0-alpha1-RC0/
> > >
> > >alpha1 is the first in a series of planned alpha releases leading up to GA.
> > >The objective is to get an artifact out to downstreams for testing and to
> > >iterate quickly based on their feedback. So, please keep that in mind when
> > >voting; hopefully most issues can be addressed by future alphas rather than
> > >future RCs.
> > >
> > >Sorry for getting this out on a Tuesday, but I'd still like this vote to
> > >run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> > >if we lack the votes.
> > >
> > >Please try it out and let me know what you think.
> > >
> > >Best,
> > >Andrew
> >
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


[jira] [Created] (HADOOP-13570) Hadoop swift Driver should use new Apache httpclient

2016-08-31 Thread Chen He (JIRA)
Chen He created HADOOP-13570:


 Summary: Hadoop swift Driver should use new Apache httpclient
 Key: HADOOP-13570
 URL: https://issues.apache.org/jira/browse/HADOOP-13570
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 2.6.4, 2.7.3
Reporter: Chen He


The current Hadoop openstack module still uses Apache HttpClient v1.x, which is 
quite old. We should update it to a newer version to catch up on performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-31 Thread Aaron T. Myers
+1 (binding) from me. Downloaded the source, built from source, set up a
pseudo cluster, and ran a few of the sample jobs.

Thanks a lot for doing all this release work, Andrew.

--
Aaron T. Myers
Software Engineer, Cloudera

On Tue, Aug 30, 2016 at 8:51 AM, Andrew Wang 
wrote:

> Hi all,
>
> Thanks to the combined work of many, many contributors, here's an RC0 for
> 3.0.0-alpha1:
>
> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>
> alpha1 is the first in a series of planned alpha releases leading up to GA.
> The objective is to get an artifact out to downstreams for testing and to
> iterate quickly based on their feedback. So, please keep that in mind when
> voting; hopefully most issues can be addressed by future alphas rather than
> future RCs.
>
> Sorry for getting this out on a Tuesday, but I'd still like this vote to
> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> if we lack the votes.
>
> Please try it out and let me know what you think.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-31 Thread Gangumalla, Uma
+1 (binding).

Overall it's a great effort, Andrew. Thank you for putting in all the energy.

Downloaded and built.
Ran some sample jobs.

I would love to see all these efforts lead to a GA release of Hadoop
3.x soon.

Regards,
Uma


On 8/30/16, 8:51 AM, "Andrew Wang"  wrote:

>Hi all,
>
>Thanks to the combined work of many, many contributors, here's an RC0 for
>3.0.0-alpha1:
>
>http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>
>alpha1 is the first in a series of planned alpha releases leading up to
>GA.
>The objective is to get an artifact out to downstreams for testing and to
>iterate quickly based on their feedback. So, please keep that in mind when
>voting; hopefully most issues can be addressed by future alphas rather
>than
>future RCs.
>
>Sorry for getting this out on a Tuesday, but I'd still like this vote to
>run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
>if we lack the votes.
>
>Please try it out and let me know what you think.
>
>Best,
>Andrew


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13566) NPE in S3AFastOutputStream.write

2016-08-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13566:
---

 Summary: NPE in S3AFastOutputStream.write
 Key: HADOOP-13566
 URL: https://issues.apache.org/jira/browse/HADOOP-13566
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.3
Reporter: Steve Loughran
Assignee: Steve Loughran


During scale tests, I managed to create an NPE
{code}
test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate)  
Time elapsed: 2.258 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132)
{code}

trace implies that {{buffer == null}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13567) S3AFileSystem to override getStoragetStatistics() and so serve up its statistics

2016-08-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13567:
---

 Summary: S3AFileSystem to override getStoragetStatistics() and so 
serve up its statistics
 Key: HADOOP-13567
 URL: https://issues.apache.org/jira/browse/HADOOP-13567
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


Although S3AFileSystem collects lots of statistics, these aren't available 
programmatically, as {{getStorageStatistics()}} isn't overridden.

It must be overridden to serve up the local FS stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-31 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/

[Aug 30, 2016 1:59:57 PM] (jlowe) MAPREDUCE-4784. TestRecovery occasionally 
fails. Contributed by Haibo
[Aug 30, 2016 5:43:20 PM] (weichiu) HDFS-10760. DataXceiver#run() should not 
log InvalidToken exception as
[Aug 30, 2016 9:00:13 PM] (mingma) HDFS-9392. Admins support for maintenance 
state. Contributed by Ming Ma.
[Aug 30, 2016 10:52:29 PM] (Arun Suresh) YARN-5221. Expose 
UpdateResourceRequest API to allow AM to request for
[Aug 31, 2016 1:42:55 AM] (aengineer) HDFS-10813. DiskBalancer: Add the 
getNodeList method in Command.




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 

Failed junit tests :

   hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 
   hadoop.hdfs.TestMissingBlocksAlert 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [24K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [8.0K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/150/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-13568) S3AFastOutputStream to implement flush()

2016-08-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13568:
---

 Summary: S3AFastOutputStream to implement flush()
 Key: HADOOP-13568
 URL: https://issues.apache.org/jira/browse/HADOOP-13568
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran
Priority: Minor


{{S3AFastOutputStream}} doesn't implement {{flush()}}, so it's a no-op.

Really it should trigger a multipart upload of the current buffer.

Note that simply calling {{uploadBuffer()}} isn't enough; do that and things 
fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13569) S3AFastOutputStream to take ProgressListener in file create()

2016-08-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13569:
---

 Summary: S3AFastOutputStream to take ProgressListener in file 
create()
 Key: HADOOP-13569
 URL: https://issues.apache.org/jira/browse/HADOOP-13569
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


For scale testing I'd like more meaningful progress than the Hadoop 
{{Progressable}} offers. 

Proposed: have {{S3AFastOutputStream}} check whether the progressable passed 
in is also an instance of {{com.amazonaws.event.ProgressListener}}, and if so, 
wire it up directly.

This allows tests to directly track the state of the upload, log it, and 
perhaps even assert on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-31 Thread Sean Busbey
It's also the key Andrew has in the project's KEYS file:

http://www.apache.org/dist/hadoop/common/KEYS
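A minimal sketch of checking a release tarball against that KEYS file (the artifact filenames below are illustrative for this RC):

```shell
# Import the release managers' public keys from the project KEYS file.
curl -O http://www.apache.org/dist/hadoop/common/KEYS
gpg --import KEYS

# Verify the detached signature shipped alongside the tarball
# (filenames are illustrative).
gpg --verify hadoop-3.0.0-alpha1.tar.gz.asc hadoop-3.0.0-alpha1.tar.gz
```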



On Tue, Aug 30, 2016 at 4:12 PM, Andrew Wang  wrote:
> Hi Eric, thanks for trying this out,
>
> I tried this gpg command to get my key, seemed to work:
>
> # gpg --keyserver pgp.mit.edu --recv-keys 7501105C
> gpg: requesting key 7501105C from hkp server pgp.mit.edu
> gpg: /root/.gnupg/trustdb.gpg: trustdb created
> gpg: key 7501105C: public key "Andrew Wang (CODE SIGNING KEY) <
> andrew.w...@cloudera.com>" imported
> gpg: no ultimately trusted keys found
> gpg: Total number processed: 1
> gpg:   imported: 1  (RSA: 1)
>
> Also found via search:
> http://pgp.mit.edu/pks/lookup?search=wang%40apache.org&op=index
>
>
> On Tue, Aug 30, 2016 at 2:06 PM, Eric Badger  wrote:
>
>> I don't know why my email client keeps getting rid of all of my spacing.
>> Resending the same email so that it is actually legible...
>>
>> All on OSX 10.11.6:
>> - Verified the hashes. However, Andrew, I don't know where to find your
>> public key, so I wasn't able to verify that they were signed by you.
>> - Built from source
>> - Deployed a pseudo-distributed clusterRan a few sample jobs
>> - Poked around the RM UI
>> - Poked around the attached website locally via the tarball
>>
>>
>> I did find one odd thing, though. It could be a misconfiguration on my
>> system, but I've never had this problem before with other releases (though
>> I deal almost exclusively in 2.x and so I imagine things might be
>> different). When I run a sleep job, I do not see any
>> diagnostics/logs/counters printed out by the client. Initially I ran the
>> job like I would on 2.7 and it failed (because I had not set
>> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
>> anything until I looked at the RM UI. There I was able to see all of the
>> logs for the failed job and diagnose the issue. Then, once I fixed my
>> parameters and ran the job again, I still didn't see any
>> diagnostics/logs/counters.
>>
>>
> >> ebadger@foo: env | grep HADOOP
> >> HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
> >> HADOOP_CONF_DIR=/Users/ebadger/conf
> >> ebadger@foo: $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 -r 1
> >> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> >> ebadger@foo:
>>
>>
>> After running the above command, the RM UI showed a successful job, but as
>> you can see, I did not have anything printed onto the command line.
>> Hopefully this is just a misconfiguration on my part, but I figured that I
>> would point it out just in case.
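One thing worth checking, given that WARNING: a custom HADOOP_CONF_DIR needs its own log4j.properties, or client-side console output can be silently dropped. A sketch, assuming the paths from Eric's setup above (this is a guess at the cause, not a confirmed fix):

```shell
# The client logs via log4j; with a custom HADOOP_CONF_DIR the bundled
# log4j.properties is not picked up unless copied into that directory.
cp "$HADOOP_HOME/etc/hadoop/log4j.properties" "$HADOOP_CONF_DIR/"
```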
>>
>>
>> Thanks,
>>
>>
>> Eric
>>
>>
>>
>> On Tuesday, August 30, 2016 4:00 PM, Eric Badger
>>  wrote:
>>
>>
>>
>> All on OSX 10.11.6:
>> Verified the hashes. However, Andrew, I don't know where to find your
>> public key, so I wasn't able to verify that they were signed by you.Built
>> from sourceDeployed a pseudo-distributed clusterRan a few sample jobsPoked
>> around the RM UIPoked around the attached website locally via the tarball
>> I did find one odd thing, though. It could be a misconfiguration on my
>> system, but I've never had this problem before with other releases (though
>> I deal almost exclusively in 2.x and so I imagine things might be
>> different). When I run a sleep job, I do not see any
>> diagnostics/logs/counters printed out by the client. Initially I ran the
>> job like I would on 2.7 and it failed (because I had not set
>> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
>> anything until I looked at the RM UI. There I was able to see all of the
>> logs for the failed job and diagnose the issue. Then, once I fixed my
>> parameters and ran the job again, I still didn't see any
>> diagnostics/logs/counters.
>> ebadger@foo: env | grep HADOOPHADOOP_HOME=/Users/
>> ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/
>> target/hadoop-3.0.0-alpha1/HADOOP_CONF_DIR=/Users/ebadger/confebadger@foo:
>> $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/
>> mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar sleep
>> -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
>> -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1
>> -m 1 -r 1WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be
>> incomplete.ebadger@foo:
>> After running the above command, the RM UI showed a successful job, but as
>> you can see, I did not have anything printed onto the command line.
>> Hopefully this is just a misconfiguration on my part, but I figured that I
>> would point it out just in case.
>> Thanks,
>> Eric
>>
>>
>>
>> On 

[DISCUSS] HADOOP-13341 Merge Request

2016-08-31 Thread Allen Wittenauer

Before requesting a merge vote, I'd like folks to take a look at 
HADOOP-13341.  This branch changes how the vast majority of the _OPTS variables 
work in various ways, making things easier for devs and users by helping to 
make the rules consistent.  It also clarifies/cleans up how the _USER variables 
work.  Probably worthwhile pointing out that this work is also required if we 
ever want to make spaces in file paths work properly (see HADOOP-13365, where 
I'm attempting to fix that too... ugh.).
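For context, the _OPTS variables in question are the per-daemon option settings in hadoop-env.sh, along these lines (variable values here are illustrative, not recommendations):

```shell
# hadoop-env.sh fragment: the per-daemon _OPTS variables whose handling
# HADOOP-13341 makes consistent (values illustrative).
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
export HDFS_NAMENODE_OPTS="-Xmx4g"
export HDFS_DATANODE_OPTS="-Xmx1g"
```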

Also, most of the patch is test code, comments, and documentation... so while 
it's a large patch, there's not much actual code. :)

Thanks.
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org