As a heads up,
autoconf is dead, long live cmake
HADOOP-8368 is in both trunk and branch-2 (thanks Colin)
If you are building native Hadoop (-Pnative) you may need to install CMake
on your system.
Thx
--
Alejandro
This is me,
https://issues.apache.org/jira/browse/HADOOP-8699
On it
Thx
On Tue, Aug 14, 2012 at 10:33 AM, Eli Collins e...@cloudera.com wrote:
Trevor,
Forgot to ask, since you can reproduce this can you confirm and see
why S3Conf.get is returning null for test.fs.s3.name?
On Mon, Aug 13,
I'm in the process of setting up test-patch for Oozie.
Got the initial test-patch, got the jenkins job (parameterized and
triggered remotely via URL)
My last piece of the puzzle is getting JIRA to call the job URL with
the JIRA ID when a patch becomes available.
How is that done in Hadoop?
-- Forwarded message --
From: *Alejandro Abdelnur* t...@cloudera.com
Date: Thu, Aug 16, 2012 at 11:07 PM
Subject: hadoop qa bot, how does it work?
To: common-dev@hadoop.apache.org
I'm in the process
Makes sense, though the Jenkins runs should continue to run w/ native, right?
On Thu, Sep 6, 2012 at 12:49 AM, Hemanth Yamijala yhema...@gmail.com wrote:
Hi,
The test-patch script in Hadoop source runs a native compile with the
patch. On platforms like MAC, there are issues with the native
If I recall correctly many of the tests were ported over and only the
ones depending on JT/TT stuff were not, no?
Thx
On Thu, Sep 27, 2012 at 7:44 AM, Thomas Graves tgra...@yahoo-inc.com wrote:
So is someone signing up to do this port soon? Otherwise I don't see the
point in us spending time
+1
Alejandro
On Oct 15, 2012, at 10:52 AM, Robert Evans ev...@yahoo-inc.com wrote:
Eli,
By packaging I assume that you mean the RPM/Deb packages and not the tar.gz.
If that is the case I have no problem with them being removed because as you
said in the JIRA BigTop is already providing
site? let me check that, I may have missed that in my second attempt, oops!
On Thu, Oct 18, 2012 at 9:08 PM, Andrew Purtell apurt...@apache.org wrote:
The build would fail in the site phase for me for some reason because of
a missing dependency management section. Adding dev-support as parent
Hey Matt,
We already require java/mvn/protoc/cmake/forrest (forrest is hopefully on
its way out with the move of docs to APT)
Why not do a maven-plugin to do that?
Colin already has something to simplify all the cmake calls from the builds
using a maven-plugin
, the need for cross-platform scripting
remains.
Thanks,
--Matt
On Wed, Nov 21, 2012 at 11:25 AM, Alejandro Abdelnur t...@cloudera.com
wrote:
Hey Matt,
We already require java/mvn/protoc/cmake/forrest (forrest is hopefully on
its way out with the move of docs to APT)
Why not do a maven
29, 2012 at 2:39 PM, Matt Foley mfo...@hortonworks.com wrote:
Hi Alejandro,
Please see in-line below.
On Mon, Nov 26, 2012 at 1:52 PM, Alejandro Abdelnur t...@cloudera.com
wrote:
Matt,
The scope of this vote seems different from what was discussed in the
PROPOSAL thread
i've been playing around writing a couple of maven plugins, one to replace
saveVersion.sh and the other to invoke protoc. they both work in windows
standard cmd (no cygwin required). together with HADOOP-8887 they would remove
most of the scripting done in the poms.
(they also work in linux and
Radim,
you can do mvn install in the plugins project and then you'll be able to
use it from the project where you are using the plugin.
if the plugin is avail in a maven repo, then you don't need to do that.
Thx
On Tue, Dec 11, 2012 at 7:16 PM, Radim Kolar h...@filez.com wrote:
what is proposed
+1. Downloaded SRC, verified MD5 and signature, did a full build,
configured, started up HDFS, YARN, HTTPS, run a couple of example MR
jobs, tested HTTPS access to HDFS.
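The "verified MD5" step in checks like the above can be sketched as a small shell helper (a hedged sketch; `verify_md5` and the file names are illustrative, not an official release script):

```shell
# Hypothetical helper for the "verified MD5" step of release checking:
# compare a tarball's computed MD5 digest against the published .md5 file.
verify_md5() {
  tarball="$1"   # e.g. the release source tarball
  md5file="$2"   # file whose first field is the expected hex digest
  [ "$(md5sum "$tarball" | awk '{print $1}')" = "$(awk '{print $1}' "$md5file")" ]
}
```

The signature check is a separate step, typically `gpg --verify <tarball>.asc <tarball>` against the release manager's public key.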
Thanks for driving this release Arun.
On Thu, Feb 7, 2013 at 7:33 AM, Robert Evans ev...@yahoo-inc.com wrote:
I
AFAIK, official release artifacts MUST be source artifacts, binary
artifacts are convenience ones. The vote should be on source artifacts.
Thx
On Fri, Feb 8, 2013 at 8:52 AM, Chris Douglas cdoug...@apache.org wrote:
+1 (binding)
Looks like your key isn't in the file:
)
includes the src/ directory, which contains the full Apache sources.
Thanks,
--Matt
On Fri, Feb 8, 2013 at 9:37 AM, Alejandro Abdelnur t...@cloudera.com
wrote:
AFAIK, official release artifacts MUST be source artifacts, binary
artifacts are convenience ones. The vote should be on source
This seems to be used only in tests in common and in a standalone class in
streaming tests.
What is the purpose of these classes as they don't seem to be used in
any of the source that ends up in Hadoop?
hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
while trying to commit MAPREDUCE-5113 to branch-2 I've noticed that the
CHANGES.txt files are out of sync. Commit messages are under the wrong releases.
I've spent some time trying to fix it, but I did not find it
straightforward to do so.
I assume the same may be true for common, hdfs and yarn.
I
I've committed HADOOP-9471 to trunk and branch-2 and closed the JIRA with
fixedVersion 2.0.5.
If this JIRA makes it to 2.0.4 we need to update CHANGES.txt in
trunk/branch-2 and the fixedVersion in the JIRA.
Thx.
On Tue, Apr 9, 2013 at 8:39 PM, Arun C Murthy a...@hortonworks.com wrote:
Folks,
, Alejandro Abdelnur wrote:
while trying to commit MAPREDUCE-5113 to branch-2 I've noticed that the
CHANGES.txt files are out of sync. Commit messages are under the wrong releases.
I've spent some time trying to fix it, but I did not find it
straightforward to do so.
I assume
Andrew,
Or with a twist, why not break/consolidate things as follows?
common API
common IMPL
hdfs CLIENT IMPL
hdfs SERVER IMPL
hdfs TOOLS
other filesystems CLIENT
yarn API
yarn CLIENT IMPL
yarn SERVER IMPL
yarn TOOLS
mapred API
mapred IMPL
mapred TOOLS
IMO, this would help significantly to
Do we need to add YARN-397?
Thanks.
On Wed, May 15, 2013 at 11:23 AM, Karthik Kambatla ka...@cloudera.com wrote:
Hi Arun,
Can we add HADOOP-9517 to the list - having compatibility guidelines should
help us support users and downstream projects better?
Thanks
Karthik
On Wed, May 15,
+1
On Tue, May 21, 2013 at 2:48 PM, Suresh Srinivas sur...@hortonworks.com wrote:
+1
On Tue, May 21, 2013 at 2:10 PM, Matt Foley ma...@apache.org wrote:
Hi all,
This has been a side topic in several email threads recently. Currently
we
have an ambiguity. We have a tradition in the
+1, verified MD5 and signature. Did a full build, started pseudo cluster,
run a few MR jobs, verified httpfs works.
Thanks.
On Sat, May 25, 2013 at 10:01 AM, Sangjin Lee sj...@apache.org wrote:
+1 (non-binding)
Thanks,
Sangjin
On Fri, May 24, 2013 at 8:48 PM, Konstantin Boudnik
+1, verified MD5 and signature. Did a full build, started pseudo cluster,
run a few MR jobs, verified httpfs works.
Thanks.
On Tue, May 28, 2013 at 9:00 AM, Thomas Graves tgra...@yahoo-inc.com wrote:
I've created a release candidate (RC0) for hadoop-0.23.8 that I would like
to release.
On the version number we use, if it is greater than 2.0.4, I really don't
care. Though I think Konstantin's argument that branch-2 is publishing as
2.0.5-SNAPSHOT has some ground (still, it could be argued that they are DEV
JARs so they can be in flux).
On the changes that went into this RC, they
Konstantin, Cos,
As we change from 2.0.4.1 to 2.0.5 you'll need to do the following
housekeeping as you work the new RC.
* rename the svn branch
* update the versions in the POMs
* update the CHANGES.txt in trunk, branch-2 and the release branch
* change the current 2.0.5 version in JIRA to
Cos, just to be clear, this is happening SAT JUN01 1PM-2PM PST, not now
(FRI MAY31 1PM PST). Correct?
Thx
On Fri, May 31, 2013 at 12:45 PM, Konstantin Boudnik c...@apache.org wrote:
Guys,
I will be performing some changes wrt moving the 2.0.4.1 release candidate
to
2.0.5 space. As outline
Verified MD5 signature, built, configured pseudo cluster, run a couple of
sample jobs, tested HTTPFS.
Still, something seems odd.
The HDFS CHANGES.txt has the following entry under 2.0.5-alpha:
HDFS-4646. createNNProxyWithClientProtocol ignores configured timeout
value (Jagane Sundar via
:27PM, Alejandro Abdelnur wrote:
Verified MD5 signature, built, configured pseudo cluster, run a
couple of
sample jobs, tested HTTPFS.
Still, something seems odd.
The HDFS CHANGES.txt has the following entry under 2.0.5-alpha:
HDFS-4646
would be happy to fix it if this seems to be a problem.
Thanks,
Cos
On Sat, Jun 01, 2013 at 08:04PM, Alejandro Abdelnur wrote:
On RC1, verified MD5 signature, built, configured pseudo cluster, run a
couple of sample jobs, tested HTTPFS.
CHANGES.txt files contents are correct now. Still
+1 RC2. Verified MD5 signature, checked CHANGES.txt files, built,
configured pseudo cluster, run a couple of sample jobs, tested HTTPFS.
On Mon, Jun 3, 2013 at 12:51 PM, Konstantin Boudnik c...@apache.org wrote:
I have rolled out release candidate (rc2) for hadoop-2.0.5-alpha.
The
Arun,
This sounds great. Following is the list of JIRAs I'd like to get in. Note
that they are ready or almost ready; my estimate is that they can be taken
care of in a couple of days.
Thanks.
* YARN-752: In AMRMClient, automatically add corresponding rack requests
for requested nodes
impact:
, 2013, at 8:19 AM, Alejandro Abdelnur wrote:
Of the JIRAs in my laundry list for 2.1 the ones I would really want in
are
YARN-752, MAPREDUCE-5171 YARN-787.
I agree YARN-787 needs to go in ASAP as it is a blocker - I'm looking at it
right now.
I committed YARN-787. Thanks.
Arun
? This is related to YARN-787.
thanks,
Arun
On Jun 16, 2013, at 8:56 AM, Alejandro Abdelnur wrote:
Thanks Arun, I'll take care of committing YARN-752 and MAPREDUCE-5171
around noon today (SUN noon PST).
What is your take on YARN-791 MAPREDUCE-5130?
On Sun, Jun 16, 2013 at 7:02 AM, Arun C
Arun,
It seems there are still a few things to iron out before getting 2.1 out of
the door.
As RM for the release, would you mind sharing the current state of things
and your estimate on when it could happen?
Thanks.
On Wed, Jun 19, 2013 at 4:29 PM, Roman Shaposhnik r...@apache.org wrote:
This sounds great,
Is this restricted to the Hadoop project itself or is the intention to
cover the whole Hadoop ecosystem? If the latter, how are you planning to
engage and sync up with the different projects?
Thanks.
On Thu, Jun 20, 2013 at 9:45 AM, Larry McCay lmc...@hortonworks.com wrote:
When Arun created branch-2.1-beta he stated:
The expectation is that 2.2.0 will be limited to content in
branch-2.1-beta
and we stick to stabilizing it henceforth (I've deliberately not created
2.2.0
fix-version on jira yet).
I am working on/committing some JIRAs that I'm putting in branch-2
Thanks Suresh, didn't know that, will do.
On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas sur...@hortonworks.com wrote:
I have added it to the HDFS, HADOOP, MAPREDUCE projects. Can someone add it for
YARN?
On Fri, Jun 21, 2013 at 9:35 AM, Alejandro Abdelnur t...@cloudera.com
wrote:
When
PM, Arun C Murthy a...@hortonworks.com wrote:
I think I've shared this before, but here you go again…
http://s.apache.org/hadoop-2.1.0-beta-blockers
At this point, HADOOP-9421 seems like the most important.
thanks,
Arun
On Jun 20, 2013, at 8:31 AM, Alejandro Abdelnur t
why not just add a precommit hook in svn to reject commits with CRLF?
On Mon, Jul 1, 2013 at 10:51 AM, Raja Aluri r...@cmbasics.com wrote:
I added a couple of links that discusses 'line endings' when I added
.gitattributes in this JIRA.
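The detection core of such a pre-commit hook could look like this (a sketch only; `has_crlf` is a hypothetical helper name, and the `svnlook` wiring that feeds it each changed file in the transaction is left out):

```shell
# Sketch of the CRLF check a pre-commit hook would run per changed file
# (e.g. on output captured from `svnlook cat`). Succeeds (exit 0) when
# the file contains a carriage-return character.
has_crlf() {
  grep -q "$(printf '\r')" "$1"
}
```

In the actual hook, the commit would be rejected (non-zero exit) whenever `has_crlf` succeeds for any text file in the transaction.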
Hi Lulynn,
I've commented in the JIRA, now that I see your email that gives me a bit
more of context on what you are trying to do.
If I understand correctly, you are trying to use this outside of Hadoop. If
that is the case you should set the PREFIX.kerberos.name.rules=DEFAULT
(or a custom
, Alejandro Abdelnur wrote:
Thanks Suresh, didn't know that, will do.
On Fri, Jun 21, 2013 at 9:48 AM, Suresh Srinivas sur...@hortonworks.com
wrote:
I have added it to the HDFS, HADOOP, MAPREDUCE projects. Can someone add it
for
YARN?
On Fri, Jun 21, 2013 at 9:35 AM, Alejandro Abdelnur t
Leaving JIRAs and design docs aside, my recollection from the f2f lounge
discussion could be summarized as:
--
1* Decouple users-services authentication from (intra) services-services
authentication.
The main motivation for this is to get pluggable authentication and
integrated SSO
[moving bigtop to bcc]
Tim,
Except for HADOOP-9680, which has significant code changes and some false
changes (which I did not go thru), all other changes seem OK.
* Have you had a chance to run ALL Hadoop testcases with them applied to
make sure there are no regressions?
* Have you looked at
webhdfs under load?
that would definitely help
thx
Alejandro
(phone typing)
On Jul 8, 2013, at 17:43, Suresh Srinivas sur...@hortonworks.com wrote:
Isn't Jetty used by WebHDFS? Given that, Jetty performance is still
important.
On Mon, Jul 8, 2013 at 2:06 PM, Alejandro Abdelnur t
Larry, all,
It is still not clear to me what end state we are aiming for, or that
we even agree on that.
IMO, instead of trying to agree on what to do, we should first agree on the
final state, then we see what should be changed to get there, then we see
how we change things to get there.
The
requests with non-normalized capabilities. (ywskycn via tucu)
On Tue, Jul 9, 2013 at 10:54 AM, Arun C Murthy a...@hortonworks.com wrote:
On Jul 2, 2013, at 3:54 PM, Alejandro Abdelnur t...@cloudera.com wrote:
We need clarification on this then.
I was under
metrics. (sandyr via tucu)
On Wed, Jul 10, 2013 at 12:58 PM, Arun C Murthy a...@hortonworks.com wrote:
Sounds good. I'll re-create branch-2.1.0-beta from branch-2.1-beta when
the last 2 blockers are in.
thanks,
Arun
On Jul 10, 2013, at 10:56 AM, Alejandro Abdelnur t...@cloudera.com
wrote
Thanks Arun,
+1
* verified MD5 signature of source tarball.
* built from source tarball
* run apache-rat:check on source
* installed pseudo cluster (unsecure)
* test httpfs
* run pi example
* run unmanaged AM application
Minor NITs (in case we do a new RC):
* remove 2.1.1-beta section from
done.
thx
On Tue, Jul 30, 2013 at 11:02 AM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
Done for HADOOP, YARN and MAPREDUCE.
Can somebody on HDFS do it too?
Thanks,
+Vinod
On Jul 30, 2013, at 10:31 AM, Chris Nauroth wrote:
I just documented a patch in the 2.1.1-beta
...@yahoo-inc.com; Kai Zheng; Alejandro Abdelnur
Subject: Re: [DISCUSS] Hadoop SSO/Token Server Components
It seems to me that we can have the best of both worlds here...it's all
about the scoping.
If we were to reframe the immediate scope to the lowest common
denominator of what is needed
branch.
http://mail-archives.apache.org/mod_mbox/hadoop-general/201308.mbox/%3CCACO5Y4we4d8knB_xU3a=hr2gbeqo5m3vau+inba0li1i9e2...@mail.gmail.com%3E
Chris Nauroth
Hortonworks
http://hortonworks.com/
On Tue, Aug 6, 2013 at 1:04 PM, Alejandro Abdelnur t...@cloudera.com
wrote:
Larry
, Alejandro Abdelnur t...@cloudera.com
wrote:
I'd like to upgrade to protobuf 2.5.0 for the 2.1.0 release.
As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits to
justify the upgrade.
Doing the upgrade now, with the first beta, will make things easier for
downstream projects
:
On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur t...@cloudera.com
wrote:
pinging again, I need help from somebody with sudo access to the hadoop
jenkins boxes to do this or to get sudo access for a couple of hours to set
up myself.
Have you asked on builds@ or filed an INFRA Jira issue
On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan
gkesa...@hortonworks.com wrote:
Alejandro,
I'm upgrading protobuf on slaves hadoop1-hadoop9.
-Giri
On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur t...@cloudera.com wrote:
pinging again, I need help from somebody with sudo access
. 2.5 is in the default path.
If we still need 2.4 I may have to install it. Let me know
-Giri
On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur t...@cloudera.com
wrote:
thanks giri, how do we set 2.4 or 2.5? what is the path to both so we can
use an env var to set it in the jobs
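Driving the protoc choice from an env var, as asked above, could be as simple as prepending the chosen install's bin/ directory to PATH (a sketch; `use_protoc` and the /opt paths are hypothetical, not the actual Jenkins setup):

```shell
# Sketch: select a protoc install per Jenkins job by prepending its
# bin/ directory to PATH. The /opt/... locations are hypothetical.
use_protoc() {
  export PATH="$1/bin:$PATH"
}
# In a job: use_protoc /opt/protobuf-2.5.0   (or /opt/protobuf-2.4.1)
```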
as well.
Thoughts?
-Giri
On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur t...@cloudera.com
wrote:
Giri,
first of all, thanks for installing protoc 2.5.0.
I didn't know we were installing them as the only version and not driven
by
env/path settings.
Now we have a bit
12, 2013 at 2:57 PM, Alejandro Abdelnur t...@cloudera.com wrote:
About to commit HADOOP-9845 to trunk, in 5 mins. This will make trunk use
protoc 2.5.0.
thx
On Mon, Aug 12, 2013 at 11:47 AM, Giridharan Kesavan
gkesa...@hortonworks.com wrote:
I can take care of re-installing 2.4
due to the protoc mismatch.
Thanks.
Alejandro
On Mon, Aug 12, 2013 at 5:53 PM, Alejandro Abdelnur t...@cloudera.com wrote:
shooting to get it in for 2.1.0.
at the moment it is in trunk till the nightly finishes. then we'll decide
in the meantime, you can have multiple versions installed in diff dirs
, Alejandro Abdelnur t...@cloudera.com wrote:
Jenkins is running a full test run on trunk using protoc 2.5.0.
https://builds.apache.org/job/Hadoop-trunk/480
And it seems to be going just fine.
If everything looks OK, I'm planning to backport HADOOP-9845 to the
2.1.0-beta branch midday PST.
By tomorrow we should have things mostly sorted out.
Thanks
On Tue, Aug 13, 2013 at 3:29 PM, Steve Loughran ste...@hortonworks.com wrote:
On 13 August 2013 13:09, Alejandro Abdelnur t...@cloudera.com wrote:
There is no indication that protoc 2.5.0 is breaking anything.
clearly
OK:
* verified MD5
* verified signature
* expanded source tar and did a build
* configured pseudo cluster and run a couple of example MR jobs
* did a few HTTP calls to HttpFS
NOT OK:
* CHANGES.txt files have 2.0.6 as UNRELEASED, they should have the date the
RC vote ends
* 'mvn apache-rat:check'
, Alejandro Abdelnur t...@cloudera.com wrote:
There is no indication that protoc 2.5.0 is breaking anything.
Hadoop-trunk builds have been failing way before 1/2 way with:
---
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test
test-patch came back.
I'll commit to trunk and all 2 branches.
Once done I'll send an email indicating new protoc is required for
development.
Thanks.
On Wed, Aug 14, 2013 at 10:51 AM, Alejandro Abdelnur t...@cloudera.com wrote:
I've filed https://issues.apache.org/jira/browse/HADOOP-9872
Following up on this.
HADOOP-9845 HADOOP-9872 have been committed
to trunk/branch-2/branch-2.1-beta/branch-2.1.0-beta.
All Hadoop developers must install protoc 2.5.0 on their development
machines for the build to run.
All Hadoop jenkins boxes are using protoc 2.5.0
The BUILDING.txt file has
forgot to add: A big thanks to Rajiv and Giri for helping out with the
changes in the Jenkins boxes.
On Wed, Aug 14, 2013 at 4:03 PM, Alejandro Abdelnur t...@cloudera.com wrote:
Following up on this.
HADOOP-9845 HADOOP-9872 have been committed
to trunk/branch-2/branch-2.1-beta/branch-2.1.0
and upload rc1
right now.
Please let me know if you feel like we need to start doing the license for the
release notes in this release.
Thanks,
Cos
On Wed, Aug 14, 2013 at 10:40AM, Alejandro Abdelnur wrote:
OK:
* verified MD5
* verified signature
* expanded source tar and did a build
+1
Downloaded source tarball
Verified MD5
Verified Signature
Run apache-rat:check
Did a dist build
Started pseudo cluster
Run a couple of MR examples
Tested HttpFS
On Thu, Aug 15, 2013 at 10:29 PM, Konstantin Boudnik c...@apache.org wrote:
All,
I have created a release candidate (rc1) for
Thanks Arun.
+1
* Downloaded source tarball.
* Verified MD5
* Verified signature
* run apache-rat:check ok after minor tweak (see NIT1 below)
* checked CHANGES.txt headers (see NIT2 below)
* built DIST from source
* verified hadoop version of Hadoop JARs
* configured pseudo cluster
* tested
On Wed, Sep 18, 2013 at 11:29 AM, Steve Loughran ste...@hortonworks.com wrote:
I'm reluctant for this as while delaying the release, because we are going
to find problems all the way up the stack - which will require a
choreographed set of changes. Given the grief of the protobuf update, I
don't
A side note on the protobuf versions, you can have a client and a server
using different versions of protobuf, that works and it works well. What
you cannot do is compile with protoc version X and run using the JAR from
version Y.
On Thu, Sep 19, 2013 at 2:11 AM, J. Rottinghuis
Are we doing a new RC for 2.1.1-beta?
On Mon, Sep 23, 2013 at 9:04 PM, Vinod Kumar Vavilapalli vino...@apache.org
wrote:
Correct me if I am wrong, but FWIU, we already released a beta with the
same symlink issues. Given 2.1.1 is just another beta, I believe we can go
ahead with it and
Vote for the 2.1.1-beta release is closing tonight; while we had quite a
few +1s, it seems we need to address the following before doing a release:
symlink discussion: get a concrete and explicit understanding on what we
will do and in what release(s).
Also, the following JIRAs seem nasty
ping
On Tue, Sep 24, 2013 at 2:36 AM, Alejandro Abdelnur t...@cloudera.com wrote:
Vote for the 2.1.1-beta release is closing tonight; while we had quite a
few +1s, it seems we need to address the following before doing a release:
symlink discussion: get a concrete and explicit understanding
Arun,
Does this mean that you want to skip a beta release and go straight to GA
with the next release?
thx
On Tue, Oct 1, 2013 at 4:15 PM, Arun C Murthy a...@hortonworks.com wrote:
Guys,
I took a look at the content in 2.1.2-beta so far, other than the
critical fixes such as HADOOP-9984
Congratulations
On Tue, Oct 1, 2013 at 8:04 PM, Mattmann, Chris A (398J)
chris.a.mattm...@jpl.nasa.gov wrote:
That is awesome!
Congrats dudes this is great!! Please consider submitting an expanded
version of the paper to J. Big Data from Springer:
http://journalofbigdata.com/
As
+1
* downloaded source tarball
* verified MD5
* verified signature
* verified CHANGES.txt files, release # and date
* run 'mvn apache-rat:check' successfully
* built distribution
* set up pseudo cluster
* started HDFS/YARN
* run some HttpFS tests
* run a couple of MR examples
* run a few tests
The following is happening in builds for MAPREDUCE and YARN patches.
I've seen the failures in hadoop5 and hadoop7 machines. I've increased
Maven memory to 1GB (export MAVEN_OPTS=-Xmx1024m in the jenkins
jobs) but still some failures persist:
Sounds good, just a little impedance between what seem to be 2 conflicting
goals:
* what features we target for each release
* train releases
If we want to do train releases at fixed times, then if a feature is not
ready, it catches the next train, no delays of the train because of a
feature. If
Shi,
From the MultipleOutputs javadocs:
When named outputs are used within a Mapper implementation, key/values
written to a named output are not part of the reduce phase, only
key/values written to the job OutputCollector are part of the reduce
phase.
Hope this helps.
Alejandro
On Wed, Oct 6,
Eric,
Yesterday I was trying the same, I've used the script from HADOOP-6846
(after doing a s/mapred/mapreduce/g)
then I had to add the hadoop-* JARs to the classpath
then when trying to start, the scripts started complaining about things not
found in /usr/share
Then I've given up.
Thxs.
[Using the core-dev@ alias now]
-- Forwarded message --
From: Alejandro Abdelnur t...@cloudera.com
Date: Thu, Aug 4, 2011 at 10:22 AM
Subject: HADOOP-7119 patch brings Alfredo source into Hadoop
To: gene...@hadoop.apache.org
The previous patch for adding Kerberos Auth
PM, Allen Wittenauer a...@apache.org wrote:
On Aug 4, 2011, at 11:03 AM, Alejandro Abdelnur wrote:
What is the rationale for having the hadoop JARs outside of the lib/
directory?
It would definitely simplify packaging configuration if they are under
lib/
as well.
Any objection
notes.
Thanks.
Alejandro
On Thu, Aug 4, 2011 at 2:12 PM, Allen Wittenauer a...@apache.org wrote:
On Aug 4, 2011, at 1:59 PM, Alejandro Abdelnur wrote:
Pig, Hive bundle Hadoop JARs with distributions, so no issue there.
Re-read what I said:
I suspect lots of pig, hive
[CCed general@]
Mike,
What you are describing is a MapReduce application scenario, where the DB is
handled from your MR code, nothing special from Hadoop side.
Thanks.
Alejandro
On Wed, Aug 10, 2011 at 5:34 AM, Segel, Mike mse...@navteq.com wrote:
Arrgh!
It's been far too many years since I
:$HADOOP_PREFIX/share/hadoop/*.jar
This provides a way to segment the library loading with the least amount of
scripting and loose coupling.
regards,
Eric
On Aug 4, 2011, at 2:40 PM, Alejandro Abdelnur wrote:
[moving to core-dev@, general@ BCCed]
Eric,
Even if the JAR is in lib/ you could import
On Wed, Aug 10, 2011 at 11:40 AM, Eric Yang eric...@gmail.com wrote:
On Aug 10, 2011, at 11:10 AM, Alejandro Abdelnur wrote:
Eric,
I'd argue that including the JAR as you suggest will most likely break
because of required dependencies of the Hadoop JAR that may not be part
of
HBase (ie
Eric,
Personally I'm fine either way.
Still, I fail to see why generic/categorized tools increase/reduce the
risk of dead code and how they make the packaging/deployment
more difficult or easier.
Would you please explain this?
Thanks.
Alejandro
On Tue, Sep 6, 2011 at 6:38 PM, Eric Yang
+1. In addition, I've found it easier for identifying the right patch to use a
version suffix, like HADOOP-1234v2.patch. Maybe we should recommend
something like that as a naming convention in the HowToContribute
On Fri, Sep 9, 2011 at 2:38 AM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
Agreed. Furthermore, if I have 10+ versions of a patch, when getting
feedback, knowing which version it is for would be handy; having a single name makes
this correlation difficult.
Thxs.
Alejandro
[PS: I know, I should code better not have to go thru several versions]
On Fri, Sep 9, 2011 at 11:08 AM,
Laxman,
This is not an incorrect usage of maven phases, those generated Java classes
are test classes, thus the generation is in the 'generate-test-sources' phase.
The problem seems to be that eclipse does not recognize the
target/generated-test-source/java directory as a source directory (for
the generated tests directory in
the eclipse config with a pom change, which I think would be better than
trying to move the phase where test code is generated. So please file a
JIRA for it and we can discuss the proper fix in context of that JIRA.
--Bobby Evans
On 9/20/11 8:35 AM, Alejandro
Newbie committer here,
I may be missing something but I'm unable to do a commit due to the
following error.
Any idea?
Thanks.
Alejandro
SVN commit error:
-
dontknow:svn tucu$ svn commit -m HDFS-2294. Download of commons-daemon TAR
should not be under target (tucu)
Authentication realm:
Currently common, hdfs and mapred create partial tars which are not usable
unless they are stitched together into a single tar.
With HADOOP-7642 the stitching happens as part of the build.
The build currently produces the following tars:
1* common TAR
2* hdfs (partial) TAR
3* mapreduce
I've just uploaded a patch for MAPREDUCE-3003.
What is left to test is to get HDFS/MR2 running for a build and run an
example.
I don't think I'll have a chance to do that test today.
If somebody volunteers and does the run and +1 then it can be committed to
trunk and on Monday or Tuesday we can
From: Alejandro Abdelnur [t...@cloudera.com]
Sent: Wednesday, September 07, 2011 11:35 AM
To: mapreduce-...@hadoop.apache.org
Subject: Re: Hadoop Tools Layout (was Re: DistCpV2 in 0.23)
Makes sense
On Wed, Sep 7, 2011 at 11:32 AM, milind.bhandar...@emc.com wrote:
+1
Tim,
You have to download the snappy source tarball, run './configure' and
then 'make install'
Thanks.
Alejandro
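Spelled out, that autotools sequence looks like the following (a hedged sketch; the tarball version number is illustrative):

```shell
# Hedged sketch of building the snappy native library from source;
# the version number is illustrative.
tar xzf snappy-1.0.4.tar.gz
cd snappy-1.0.4
./configure          # generates the Makefile for this platform
make                 # builds libsnappy
sudo make install    # installs into /usr/local/lib by default
```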
On Mon, Oct 31, 2011 at 11:24 AM, Tim Broberg tbrob...@yahoo.com wrote:
bump
Does anybody know how to build the snappy native library?
- Tim.
I'd really like not to switch to a different branch of the code yet again,
and surely *somebody* knows how to build snappy in the trunk...
- Tim.
From: Alejandro Abdelnur [t...@cloudera.com]
Sent: Monday, October 31, 2011 12:45 PM
To: common-dev
correct
On Mon, Oct 31, 2011 at 4:33 PM, Tim Broberg tim.brob...@exar.com wrote:
Download from google code?
From: Alejandro Abdelnur [t...@cloudera.com]
Sent: Monday, October 31, 2011 3:34 PM
To: common-dev@hadoop.apache.org
Subject: Re: Example