Hi Eric, thanks for trying this out,

I tried this gpg command to fetch my key, and it seemed to work:

# gpg --keyserver pgp.mit.edu --recv-keys 7501105C
gpg: requesting key 7501105C from hkp server pgp.mit.edu
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 7501105C: public key "Andrew Wang (CODE SIGNING KEY) <andrew.w...@cloudera.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

Also found via search:
http://pgp.mit.edu/pks/lookup?search=wang%40apache.org&op=index
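
Once the key is imported, verifying the signatures should just be a matter of
running gpg --verify against the .asc files. A quick sketch (the artifact names
here are assumed from the RC0 directory layout, so adjust as needed):

gpg --verify hadoop-3.0.0-alpha1-src.tar.gz.asc hadoop-3.0.0-alpha1-src.tar.gz
gpg --verify hadoop-3.0.0-alpha1.tar.gz.asc hadoop-3.0.0-alpha1.tar.gz

gpg will still warn that the key is not certified by a trusted signature unless
you sign it locally, but the "Good signature" line is the part that matters.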


On Tue, Aug 30, 2016 at 2:06 PM, Eric Badger <ebad...@yahoo-inc.com> wrote:

> I don't know why my email client keeps getting rid of all of my spacing.
> Resending the same email so that it is actually legible...
>
> All on OSX 10.11.6:
> - Verified the hashes. However, Andrew, I don't know where to find your
> public key, so I wasn't able to verify that they were signed by you.
> - Built from source
> - Deployed a pseudo-distributed cluster
> - Ran a few sample jobs
> - Poked around the RM UI
> - Poked around the attached website locally via the tarball
>
>
> I did find one odd thing, though. It could be a misconfiguration on my
> system, but I've never had this problem before with other releases (though
> I deal almost exclusively in 2.x and so I imagine things might be
> different). When I run a sleep job, I do not see any
> diagnostics/logs/counters printed out by the client. Initially I ran the
> job like I would on 2.7 and it failed (because I had not set
> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
> anything until I looked at the RM UI. There I was able to see all of the
> logs for the failed job and diagnose the issue. Then, once I fixed my
> parameters and ran the job again, I still didn't see any
> diagnostics/logs/counters.
>
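> For what it's worth, the same two properties could presumably also be set
> persistently in mapred-site.xml instead of being passed with -D on every run.
> A rough sketch, with the value written as a literal placeholder path rather
> than anything I have verified against this RC:
>
> <configuration>
>   <property>
>     <name>yarn.app.mapreduce.am.env</name>
>     <value>HADOOP_MAPRED_HOME=/path/to/hadoop-3.0.0-alpha1</value>
>   </property>
>   <property>
>     <name>mapreduce.admin.user.env</name>
>     <value>HADOOP_MAPRED_HOME=/path/to/hadoop-3.0.0-alpha1</value>
>   </property>
> </configuration>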
>
> ebadger@foo: env | grep HADOOP
> HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
> HADOOP_CONF_DIR=/Users/ebadger/conf
> ebadger@foo: $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 -r 1
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> ebadger@foo:
>
>
> After running the above command, the RM UI showed a successful job, but as
> you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
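>
> One possible explanation for the silent client is the "log4j.properties is not
> found" warning above: the MapReduce job client prints its progress,
> diagnostics, and counters through log4j, so with no log4j.properties on the
> classpath that output just gets dropped. A minimal client-side log4j.properties
> sketch, assuming the stock log4j 1.2 that ships with Hadoop:
>
> log4j.rootLogger=INFO,console
> log4j.appender.console=org.apache.log4j.ConsoleAppender
> log4j.appender.console.target=System.err
> log4j.appender.console.layout=org.apache.log4j.PatternLayout
> log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
>
> Dropping something like that into $HADOOP_CONF_DIR and re-running the sleep job
> should show whether the missing output is really just this or something deeper.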
>
>
> Thanks,
>
>
> Eric
>
>
>
>     On Tuesday, August 30, 2016 12:58 PM, Andrew Wang <
> andrew.w...@cloudera.com> wrote:
>
>
> I'll put my own +1 on it:
>
> * Built from source
> * Started pseudo cluster and ran Pi job successfully
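>
> For reference, a Pi job invocation along these lines should work (a sketch; the
> examples jar path is assumed from the layout of the dist tarball, run from the
> extracted hadoop-3.0.0-alpha1 directory):
>
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha1.jar pi 10 100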
>
> On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang <z...@apache.org> wrote:
>
> >
> > Thanks Andrew for the great work! It's really exciting to finally see a
> > Hadoop 3 RC.
> >
> > I noticed the CHANGES and RELEASENOTES markdown files, which were not in
> > previous RCs like 2.7.3. What are good tools to verify them? I tried reading
> > them in IntelliJ, but the formatting looks odd.
> >
> > I'm still testing the RC:
> > - Downloaded and verified checksum
> > - Built from source
> > - Will start small cluster and test simple programs, focusing on EC
> > functionalities
> >
> > -- Zhe
> >
> > On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang <andrew.w...@cloudera.com>
> > wrote:
> >
> >> Hi all,
> >>
> >> Thanks to the combined work of many, many contributors, here's an RC0 for
> >> 3.0.0-alpha1:
> >>
> >> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
> >>
> >> alpha1 is the first in a series of planned alpha releases leading up to GA.
> >> The objective is to get an artifact out to downstreams for testing and to
> >> iterate quickly based on their feedback. So, please keep that in mind when
> >> voting; hopefully most issues can be addressed by future alphas rather than
> >> future RCs.
> >>
> >> Sorry for getting this out on a Tuesday, but I'd still like this vote to
> >> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> >> if we lack the votes.
> >>
> >> Please try it out and let me know what you think.
> >>
> >> Best,
> >> Andrew
> >>
> >
>
