On 9/25/09 10:13 AM, Dhruba Borthakur dhr...@gmail.com wrote:
It is really nice to have wire-compatibility between clients and servers
running different versions of hadoop. The reason we would like this is
because we can allow the same client (Hive, etc) submit jobs to two
different clusters
On 9/25/09 12:44 PM, Sanjay Radia sra...@yahoo-inc.com wrote:
On Sep 25, 2009, at 12:03 PM, Allen Wittenauer wrote:
On 9/25/09 10:13 AM, Dhruba Borthakur dhr...@gmail.com wrote:
It is really nice to have wire-compatibility between clients and
servers
running different versions
Then you'll have no issues patching other things in 0.21 that are actual
bug fixes that also meet this criteria, right? Or does this only apply to
things that Yahoo! is hitting/deemed worthy?
On 11/25/09 12:03 PM, Tsz Wo (Nicholas), Sze s29752-hadoop...@yahoo.com
wrote:
+1 on committing
On 1/5/10 10:57 PM, gs...@tce.edu gs...@tce.edu wrote:
I am doing my research in hadoop security design.
Instead of using Kerberos for hadoop security, is it possible to
use the LDAP authentication protocol?
Using LDAP (or NIS+, or NIS, or passwd/shadow files, or ... ) will require
a password
On 1/26/10 9:06 PM, Owen O'Malley owen.omal...@gmail.com wrote:
Doug decided he didn't like the majority of people editing their comments on
jira and disabled edits. If you want it back, please start a vote on this
list.
I think instead I'll just put comment after comment making
On 2/21/10 1:31 AM, springring springr...@126.com wrote:
in addition, can the administrator help me send the attached file? Thanks.
Just pop it up on the wiki.
On 2/24/10 9:35 AM, Ravi ravindra.babu.rav...@gmail.com wrote:
What are the key problems that the Hadoop community will be trying to
solve in the upcoming versions ? I did not find such discussion in the
archives. Please point me to any webpage or archives discussing this issue.
Should
On 3/15/10 9:06 AM, Owen O'Malley o...@yahoo-inc.com wrote:
From our 0.21 experience, it looks like our old release strategy is
failing.
Maybe this is a dumb question but... Are we sure it isn't the community
failing?
From where I stand, the major committers (PMC?) have essentially
On Apr 28, 2010, at 9:30 AM, kovachev wrote:
we are trying to set up Hadoop to run on Solaris 10 within Containers.
However, we encounter many problems.
Could you please write down here all the extra settings needed for running
Hadoop on Solaris?
The two big ones:
- whoami needs to
[Cutting the CC: line down to size ]
On Jun 18, 2010, at 3:37 AM, Bikash Singhal wrote:
Hi folks,
I have received this error in the hadoop cluster. Has anybody seen
this? Any solution?
Since you aren't picking anything out and you've shared a bunch of messages,
I'm going to go
Again, removing a bunch of CC:'es.
On Jun 21, 2010, at 2:26 AM, Bikash Singhal wrote:
Hi Hadoopers,
I have received a WARN in the hadoop cluster. Has anybody seen this? Any
solution?
2010-06-06 01:45:04,079 WARN org.apache.hadoop.conf.Configuration:
On Jul 1, 2010, at 12:00 PM, Chris D wrote:
Yes, it is mountable on all machines simultaneously, and, for example, works
properly through file:///mnt/to/dfs in a single node cluster.
Then file:// will likely work on a multi-node cluster as well. So I doubt you'll
need to write anything at
I seem to be ok with the little bit of _20 I've been using.
On Jul 21, 2010, at 5:58 AM, Bill Au wrote:
Now that jdk 1.6.0_21 is out, has anyone been running it with Hadoop? We
have also had problem running Hadoop with 1.6.0_18. So what version of
1.6.0 would people recommend for use with
On Jul 26, 2010, at 5:13 AM, Jorge Rodrigez wrote:
(though I am not sure of the exact integration point in
Hudson).
I think hudson is subscribed to the various jira mailing lists. The mail is
piped via stdin to a script that parses it and fires off the appropriate action
in hudson.
On Sep 19, 2010, at 7:57 PM, steven zhuang wrote:
hi, all,
I sent this mail to the common-user list before; I am duplicating it
here to seek more help from the experts.
You'll likely have more luck on hdfs-dev.
I am wondering why seek(long) is disabled in HDFS.BlockReader?
Can I use
On Oct 14, 2010, at 12:58 PM, Null Ecksor wrote:
Does the namenode start afresh and gather all the namespace information
again from the datanodes?
I want to know: if it loses the info written on the disk, is it able
to obtain the same from the datanodes in the cluster?
I've added
(Removing common-dev, because this isn't a dev question)
On Feb 26, 2011, at 7:25 AM, bikash sharma wrote:
Hi,
I have a 10 nodes Hadoop cluster, where I am running some benchmarks for
experiments.
Surprisingly, when I initialize the Hadoop cluster
(hadoop/bin/start-mapred.sh), in many
(Removing common-dev, because this isn't a dev question.)
On Mar 1, 2011, at 6:13 AM, bikash sharma wrote:
Hi,
Is there a way to disable the use of pipelining, i.e., so that the reduce
phase is started only after the map phase is completed?
Set mapred.reduce.slowstart.completed.maps to 1.
Be
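That setting lives in mapred-site.xml; a minimal sketch (the value is the fraction of completed maps required before reducers launch, so 1.0 means wait for all of them):

```xml
<!-- Sketch: delay reduce start until 100% of the maps have completed. -->
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>1.0</value>
</property>
```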
On Mar 16, 2011, at 12:52 PM, Jane Chen wrote:
Hi,
I'm quite confused about the status and future of recent Hadoop versions.
Since I'm sure I'll be struck down by lightning from someone, just let
me put in the disclaimer that these are my opinions and do not necessarily (and
On Mar 17, 2011, at 1:19 PM, Jane Chen wrote:
In 0.20.2, there are some interface APIs that got deprecated
(org.apache.hadoop.mapred), and their abstract class counterparts were
introduced (org.apache.hadoop.mapreduce). I read in the mailing list that
now the deprecated APIs are
On Apr 22, 2011, at 4:31 PM, Owen O'Malley wrote:
I've just created the 0.20-security-204 branch to start the stabilization
process for 0.20.204. I hope to get the 203 branch ready for a vote next week.
Could someone actually take the branch and try to install from scratch?
i.e.,
On May 10, 2011, at 5:13 PM, Trevor Robinson wrote:
Is the native build failing on ARM (where gcc doesn't support -m32) a
known issue, and is there a workaround or fix pending?
That's interesting. I didn't realize there was a gcc that didn't
support -m. This seems like an odd thing
On Jun 13, 2011, at 2:24 PM, Chris Douglas wrote:
Yes to both. I haven't tested the concat bzip2 support, but I've heard
it's broken. The commit log and release note for the issue are correct
on HADOOP-6835.
OK, thanks. I was looking for a double-check in case I missed
something in
On Jul 5, 2011, at 2:40 AM, Steve Loughran wrote:
1. you could use DNS proper, by way of Bonjour/avahi. You don't need to be
running any mDNS server to support .local, and I would strongly advise
against it in a large cluster (because .local resolution puts a lot of CPU
load on every
On Jul 6, 2011, at 5:05 PM, Eric Yang wrote:
Did you know that almost all Linux desktop systems come with avahi
pre-installed and turned on by default?
... which is why most admins turn those services off by default. :)
What is more interesting is
that there are thousands of those
On Jul 25, 2011, at 7:05 PM, Owen O'Malley wrote:
I've created a release candidate for 0.20.204.0 that I would like to release.
It is available at: http://people.apache.org/~omalley/hadoop-0.20.204.0-rc0/
0.20.204.0 has many fixes including disk fail in place and the new rpm and
deb
I can't believe we're holding a vote on a release that isn't passing
the nightly build. If my vote were binding, I'd -1 it based upon that alone.
On Aug 2, 2011, at 4:28 AM, Steve Loughran wrote:
I'm getting confused about release roadmaps right now
branch-20 is the new trunk, given that features keep popping up in it rather
than bug fixes.
On Aug 2, 2011, at 12:23 PM, Eli Collins wrote:
However it is disappointing
to see some of the features being developed on branch-20-security,
rather being developed first on trunk and then ported to
branch-20-security.
... which was exactly my point.
On Aug 4, 2011, at 1:06 PM, Alejandro Abdelnur wrote:
[moving to core-dev@]
A big release note is doable.
Still, people normally use 'hadoop' script when submitting jobs and 'hadoop'
would take care of having the JAR in the classpath. What other things would
break?
Everyone
On Aug 4, 2011, at 1:59 PM, Alejandro Abdelnur wrote:
Pig and Hive bundle Hadoop JARs with their distributions, so no issue there.
Re-read what I said:
I suspect lots of pig, hive, and hbase installations will also break.
It still remains a potential issue for those of us who
On Aug 9, 2011, at 2:28 PM, Harsh J wrote:
Mike,
On Wed, Aug 10, 2011 at 2:18 AM, Segel, Mike mse...@navteq.com wrote:
Right.
The problem is how you distinguish whether someone is asking about
Hadoop proper (HDFS, MapReduce)
or the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Pig,
On Aug 9, 2011, at 8:55 AM, Owen O'Malley wrote:
All,
Matt rolled a 0.20.204.0rc1, but I think it got lost in the previous vote
thread. Unfortunately, it had the version as 0.20.204 and didn't update the
release notes. I've updated it, run the regression tests and I think we
should
On Aug 18, 2011, at 12:28 AM, Owen O'Malley wrote:
This vote is still running with no votes other than mine.
I've tested with and without security on a 60 node cluster and I'm seeing
some failures, but not that many. On a terasort with 15,000 maps and 200
reduces, I ran the following
On Aug 25, 2011, at 1:07 PM, milind.bhandar...@emc.com
milind.bhandar...@emc.com wrote:
The problem is that the 0.20.2xx releases are neither a superset nor a subset
of the 0.21 release. In many ways, the 0.20.2xx.y releases would be better
named 1.x.y.
This confused me totally. Is
On Sep 6, 2011, at 9:30 AM, Vinod Kumar Vavilapalli wrote:
We still need to answer Amareshwari's question (2) she asked some time back
about the automated code compilation and test execution of the tools module.
My #1 question is if tools is basically contrib reborn. If not, what
makes
On Sep 6, 2011, at 4:32 PM, Eli Collins wrote:
IMO if the tools module only gets stuff like distcp that's maintained
then it's not contrib, if it contains all the stuff from the current
MR contrib then tools is just a re-labeling of contrib. Given that
this proposal only covers moving
On Sep 9, 2011, at 12:57 PM, Eli Collins wrote:
Patches for trunk should be named: jira-xyz.patch
eg hdfs-123.patch
s,patch,txt, since jira doesn't appear to pass a content-type to indicate it is
readable by the browser (as you mentioned earlier).
2.5.1 should fix HADOOP-10986 as well.
On Aug 20, 2014, at 4:29 PM, Arun Murthy a...@hortonworks.com wrote:
+1.
Thanks Karthik.
Arun
On Wed, Aug 20, 2014 at 4:25 PM, Karthik Kambatla ka...@cloudera.com
wrote:
Thanks for the suggestions, gents.
Given we are doing 2.5.1, I
RE: —author
No, we should not do this. This just increases friction. It’s hard enough
getting patches reviewed and committed, especially if you are not @apache.org.
Trying to dig up a valid address (nevermind potentially having to deal with
'nsf...@example.com') is just asking for trouble.
Plus, the
Did a build. Started some stuff. Have a patch ready to be committed. ;)
Thanks Karthik and Daniel!
On Aug 26, 2014, at 2:26 PM, Karthik Kambatla ka...@cloudera.com wrote:
I compared the new asf git repo against the svn and github repos (mirrored
from svn). Here is what I see:
- for i in
On Sep 3, 2014, at 11:07 AM, Chris Douglas cdoug...@apache.org wrote:
As long as release notes and incompatible changes are recorded in each
branch, we gain no accuracy by maintaining this manually. Commit
messages that record the merge revisions instead of the change are
similarly
On Sep 3, 2014, at 11:42 AM, Allen Wittenauer a...@altiscale.com wrote:
On Sep 3, 2014, at 11:07 AM, Chris Douglas cdoug...@apache.org wrote:
As long as release notes and incompatible changes are recorded in each
branch, we gain no accuracy by maintaining this manually. Commit
messages
that
generates a changelog for a release. We could then use that to make sure
the various fields are set properly for previous releases, remove
CHANGES.txt once we're confident, and then use said script to generate the
changelog for future releases.
On Wed, Sep 3, 2014 at 11:47 AM, Allen Wittenauer
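The formatting half of such a script is straightforward; here's a minimal sketch (the function name and issue fields are ours for illustration, not the actual tool, which would pull issues from JIRA's REST search API filtered on fixVersion):

```python
def format_changelog(version, issues):
    """Render a CHANGES.txt-style section for one release.

    `issues` is a list of dicts with 'key' and 'summary', as could be
    fetched from JIRA's REST search API with a fixVersion JQL filter.
    """
    lines = ["Release %s" % version, ""]
    # Sort by issue key so repeated runs produce a stable changelog.
    for issue in sorted(issues, key=lambda i: i["key"]):
        lines.append("    %s. %s" % (issue["key"], issue["summary"]))
    return "\n".join(lines)

# Example with made-up summaries:
print(format_changelog("2.5.1", [
    {"key": "HDFS-0000", "summary": "A sample HDFS fix"},
    {"key": "HADOOP-0000", "summary": "A sample common fix"},
]))
```

The real script would also need to filter on resolution (see the note above about duplicates and won't-fixes carrying a fix version) before formatting.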
OK, it does, but only under certain conditions. Hmm.
On Sep 3, 2014, at 12:04 PM, Allen Wittenauer a...@altiscale.com wrote:
Looks like the web UI doesn't allow for bulk change of Fix Version.
*cries*
On Sep 3, 2014, at 11:56 AM, Andrew Wang andrew.w...@cloudera.com wrote:
Allen
to filter on 'Fixed' as well, given duplicates, won't-fixes, and
invalids listed with a fix version as well.
On Sep 3, 2014, at 12:10 PM, Allen Wittenauer a...@altiscale.com wrote:
OK, it does, but only under certain conditions. Hmm.
On Sep 3, 2014, at 12:04 PM, Allen Wittenauer a...@altiscale.com
as
that is the earliest release that includes the fix.
On Wed, Sep 3, 2014 at 12:45 PM, Allen Wittenauer a...@altiscale.com wrote:
Figured it out. Basically you can only do bulk fix version edits of one
project at a time, since the versions are technically different for every
project
at 1:01 PM, Allen Wittenauer a...@altiscale.com wrote:
I was doing that too, but I went to the source:
https://wiki.apache.org/hadoop/HowToCommit says:
Resolve the issue as fixed, thanking the contributor. Always set the Fix
Version at this point, but please only set a single fix version
On Sep 3, 2014, at 4:57 PM, Chris Douglas cdoug...@apache.org wrote:
On Wed, Sep 3, 2014 at 11:42 AM, Allen Wittenauer a...@altiscale.com wrote:
We’ll also need to get much more strict about Fix Version really only
listing the earliest version. Many of list (next release) + (trunk
Nothing official or clean or whatever, but just to give people an idea of what
an auto generated CHANGES.txt file might look like, here are some sample runs
of the hacky thing I built, based upon the fixVersion information. It doesn't
break it down by improvement, etc. Also, the name on the
at 6:51 PM, Allen Wittenauer a...@altiscale.com wrote:
Nothing official or clean or whatever, but just to give people an idea of
what an auto generated CHANGES.txt file might look like, here are some
sample runs of the hacky thing I built, based upon the fixVersion
information. It doesn't
Oh, it's in hdfs. Sneaky.
On Sep 3, 2014, at 7:10 PM, Allen Wittenauer a...@altiscale.com wrote:
I don't see HADOOP-10957 in hadoop-common-project/hadoop-common/CHANGES.txt on
github in the 2.5.1 branch.
On Sep 3, 2014, at 7:00 PM, Karthik Kambatla ka...@cloudera.com wrote:
2.5.1 - I
incompatible —but it is still good to
highlight the things we know are likely to cause trouble
On 4 September 2014 02:51, Allen Wittenauer a...@altiscale.com wrote:
Nothing official or clean or whatever, but just to give people an idea of
what an auto generated CHANGES.txt file might look
On Sep 5, 2014, at 9:19 AM, Karthik Kambatla ka...@cloudera.com wrote:
On Thu, Sep 4, 2014 at 8:37 AM, Allen Wittenauer a...@altiscale.com wrote:
We do need to have a talk about 3.x though. Looking over the
list, it would appear that a lot of (what became) early 2.x JIRAs were
Fixed the subtasks.
Hacked on my changes generator a bit more. Source is in my github repo if
anyone wants to play with it.
Here’s a (merged) 3.x changes.txt file from the current output built off of
JIRA. (The unmerged versions are also created, but look pretty much identical.)
Removing security@ , adding hdfs-dev@ .
On Sep 16, 2014, at 1:19 AM, Zhijie Shen zs...@hortonworks.com wrote:
Hi folks,
There are a bunch of ACL configuration defaults, which are set to *:
1. yarn.admin.acl in yarn-default.xml
2.
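For reference, tightening the first of those defaults would look something like this in yarn-site.xml (a sketch; the user and group names are invented, and the value format is "users groups", comma-separated within each list):

```xml
<!-- Sketch: restrict admin access instead of the '*' (everyone) default. -->
<property>
  <name>yarn.admin.acl</name>
  <value>yarn,mapred hadoop</value>
</property>
```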
On Sep 17, 2014, at 2:47 AM, Steve Loughran ste...@hortonworks.com wrote:
I don't agree. Certainly the stuff I got into Hadoop 2.5 nailed down the
filesystem binding with more tests than ever before.
FWIW, based upon my survey of JIRA, there are a lot of unit test fixes
that are
Here’s the date of the last commit by email address (or, at least, what git
thinks is the email address…) and the commit hash. People-wise, there are some
obvious dupes here but I’m too lazy to filter them out. ;)
2014-09-27 ste...@apache.org 7f300bcdc78d164a42d56c3f65a512cfe0ac40be
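A listing like that can be produced from git log directly; here's a sketch (the throwaway demo repo, names, and emails below are invented for illustration):

```shell
# Build a throwaway repo with two authors, then list each author's
# last commit as "date email hash", the way the survey above reads.
set -e
repo=$(mktemp -d)
cd "$repo" && git init -q
git -c user.name=Steve -c user.email=stevel@apache.org \
    commit -q --allow-empty -m "first"
git -c user.name=Allen -c user.email=aw@altiscale.com \
    commit -q --allow-empty -m "second"

# git log prints newest-first, so the first line seen for each email
# is that author's most recent commit.
last=$(git log --date=short --format='%ad %ae %H' | awk '!seen[$2]++')
printf '%s\n' "$last"
```

This doesn't dedupe people who committed under multiple addresses, matching the "obvious dupes" caveat above.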
No illustration, as it isn't an image.
On Sep 28, 2014, at 2:12 PM, Roman Shaposhnik r...@apache.org wrote:
And this illustrates what exactly?
Thanks,
Roman.
On Sun, Sep 28, 2014 at 2:01 PM, Allen Wittenauer a...@altiscale.com wrote:
Here’s the date of the last commit by email
On Sep 28, 2014, at 2:25 PM, Roman Shaposhnik ro...@shaposhnik.org wrote:
On Sun, Sep 28, 2014 at 2:17 PM, Allen Wittenauer a...@altiscale.com wrote:
No illustration, as it isn't an image.
And I suppose it has no point either, then. So perhaps it doesn't belong
on the public mailing list
I think people forget we have a wiki that documents this and other things ...
https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch
On Dec 2, 2014, at 10:01 AM, Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com wrote:
jiraNameId.[branchName.]revisionNum.patch*
+1 for this format. Thanks
sys_errlist was removed for a reason. Creating a fake sys_errlist on Solaris
will mean that libhadoop.so will need to be tied to a specific build
(kernel/include pairing) and therefore limits upward mobility/compatibility.
That doesn’t seem like a very good idea.
IMO, switching to strerror_r
On Dec 13, 2014, at 4:05 AM, Steve Loughran ste...@hortonworks.com wrote:
The shell scripts are also undertested and only intermittently maintained
—there's been recent work there in HADOOP-9902(?) which is a great
improvement to trunk. If you can help test the CLI on your OS, that will
On Dec 14, 2014, at 8:38 AM, Allen Wittenauer a...@altiscale.com wrote:
On Dec 13, 2014, at 4:05 AM, Steve Loughran ste...@hortonworks.com wrote:
The shell scripts are also undertested and only intermittently maintained
—there's been recent work there in HADOOP-9902(?) which is a great
On Dec 20, 2014, at 9:09 AM, Raghavendra Vaidya raghavendra.vai...@gmail.com
wrote:
I have been struggling to set up the hadoop code with native libraries on Mac
OS X. This gave me the idea to write a utility which can help set up a
hadoop development environment on either IntelliJ or Eclipse.
Is process really the problem? Or, more directly, how does any of this
actually increase the pool beyond the (I’m feeling generous today) 10 or so
committers (never mind PMC) that actually review patches that come from outside
their employers on a regular basis?
To put this
on the correct linkage of the
binary representations of programs in the Java programming language.
Certainly removing existing levels would be backwards-incompatible.
Chris Nauroth
Hortonworks
http://hortonworks.com/
On Thu, Jan 15, 2015 at 6:14 AM, Allen Wittenauer a...@altiscale.com
IIRC, it was marked as evolving because it wasn’t clear at the time
whether we would need to add more stability levels. (One of the key
inspirations for the stability levels—Sun’s ARC process—had more.)
So I think it’s important to remember that if this gets changed to
stable,
The fact that reviews.apache.org has ~35k users (
https://reviews.apache.org/users/?page=711 ) that mostly appear to be bots
gives me zero confidence in using this tool for anything real.
On Jan 30, 2015, at 11:11 AM, Gera Shegalov g...@apache.org wrote:
Splitting the conversation via
IMO, HDFS-5796 (in some form or another) is a blocker for 2.7.
Right now, DFS browsing is pretty much broken on secure systems when a
hadoop-auth-compatible plugin is in use. The fix (HDFS-5716) introduced
(what appears to be) an incompatible and undocumented method to provide auth
versus
or report redundant commits on a branch with merged
ancestor branches?
Thanks.
--Yongjun
On Tue, Mar 17, 2015 at 11:21 AM, Allen Wittenauer a...@altiscale.com wrote:
Nope. I’m not particularly in the mood to write a book about a
topic that I’ve beat to death in private
should switch to using your way, and save
committer's effort of taking care of CHANGES.txt (quite some save IMO).
Hope more people can share their thoughts.
Thanks.
--Yongjun
On Fri, Mar 13, 2015 at 4:45 PM, Allen Wittenauer a...@altiscale.com wrote:
I think the general consensus
).
Hope more people can share their thoughts.
Thanks.
--Yongjun
On Fri, Mar 13, 2015 at 4:45 PM, Allen Wittenauer a...@altiscale.com
wrote:
I think the general consensus is don’t include the changes.txt file in
your commit. It won’t be correct for both branches if such a commit
On Mar 18, 2015, at 2:32 PM, Andrew Wang andrew.w...@cloudera.com wrote:
The bigger question for me is why we have CHANGES.txt at all when we have
release notes, since the information is almost identical.
Yeah, I don’t think it was always like that. Stripping it down to just the
ones
It would be great if everyone could make sure that when resolving a
JIRA Issue that they:
A) Make sure that their final message is actually in the comment box and not in
the release notes box. For 2.7, there are currently 5 out of the 24 release
notes that are +1 type
Hi folks,
There are ~6,000 Hadoop JIRA issues that have gone unaddressed,
including ~900 with patches waiting to be reviewed. Among other things, this
lack of attention to our backlog is making the Hadoop project very unfriendly
to contributors--which is ultimately very unhealthy for
Between this and the other thread, I’m seeing:
* companies that were forced to make internal forks because their
patches were ignored are now considered the deciders for whether we move forward
* 5 years since the last branch off of trunk is considered ‘soon’
*
).
On Thu, Mar 5, 2015 at 9:21 PM, Allen Wittenauer a...@altiscale.com wrote:
Is there going to be a general upgrade of dependencies? I'm thinking of
jetty and jackson in particular.
On Mar 5, 2015, at 5:24 PM, Andrew Wang andrew.w...@cloudera.com wrote:
I've taken the liberty of adding
I think the general consensus is don’t include the changes.txt file in your
commit. It won’t be correct for both branches if such a commit is destined for
both. (No, the two branches aren’t the same.)
No, git log isn’t more accurate. The problems are:
a) cherry picks
b) branch mergers
c)
On Mar 10, 2015, at 12:40 PM, Karthik Kambatla ka...@cloudera.com wrote:
Are we okay with breaking other forms of compatibility for Hadoop-3, like
behavior, dependencies, JDK, classpath, environment? I think so. Are we
okay with breaking these forms of compatibility in future Hadoop-2.x?
How would you propose we use SELinux features to support security,
especially in a distributed manner where clients might be under different
administrative controls? What about the non-Linux platforms that Hadoop runs
on?
On Mar 26, 2015, at 3:46 AM, Madhan Sundararajan
Very likely. One of the things I noticed during HADOOP-11746 is that
there is a HUGE, catastrophic race if Jenkins doesn’t set up the environment
correctly or leaks variables between runs. shellcheck prints out so many
messages on the current code I’m surprised it doesn’t crash.
On
Between:
* removing -finalize
* breaking HDFS browsing
* changing du’s output (in the 2.7 branch)
* changing various names of metrics (either intentionally or otherwise)
* changing the JDK release
… and probably lots of other stuff in branch-2 I
One of the questions that keeps popping up is “what exactly is in trunk?”
As some may recall, I had done some experiments creating the change log based
upon JIRA. While the interest level appeared to be approaching zero, I kept
playing with it a bit and eventually also started playing with
The big question is whether or not Java’s implementation of Kerberos
supports it. If so, which JDK release. Java’s implementation tends to run a
bit behind MIT. Additionally, there is a general reluctance to move Hadoop’s
baseline Java version to something even supported until user
Hello everyone!
(to: and reply-to: set to common-dev, cc: the rest of ‘em, to
concentrate the discussion)
HADOOP-11731 has just been committed to *trunk*. This change does two
things:
a) Removes dev-support/relnotes.py
b) Adds dev-support/releasedocmaker.py
On Apr 2, 2015, at 12:40 PM, Vinod Kumar Vavilapalli vino...@hortonworks.com
wrote:
We'd then be doing two commits for every patch. Let's simply not remove
CHANGES.txt from trunk, keep the existing dev workflow, but doc the release
process to remove CHANGES.txt in trunk at the time of a
On Apr 2, 2015, at 11:36 AM, Mai Haohui ricet...@gmail.com wrote:
Hi Allen,
Thanks for driving this. Just some quick questions:
Removing changes.txt, relnotes.py, etc from branch-2 would be an
incompatible change. Pushing aside the questions of that document’s
quality (hint:
On Apr 23, 2015, at 7:57 PM, Sidharta Seethana sidharta.apa...@gmail.com
wrote:
About (3), a lot of the checkstyle rules seem to be arcane/unnecessary.
Please see : https://issues.apache.org/jira/browse/HADOOP-11869
a) I've closed it as a dupe of HADOOP-11866 to keep everything in one
(Reply-to set to common-dev@)
With over 900 patches not yet reviewed and approved for Apache Hadoop,
it's time to make some strong progress on the bug list!
A number of Apache Hadoop committers and Hadoop-related tech companies
are hosting an Apache Hadoop Community event on
FYI, MAPREDUCE-6324 is the first JIRA to be tested with the new code in place
if someone hasn’t seen the new output.
On Apr 21, 2015, at 9:06 PM, Allen Wittenauer a...@altiscale.com wrote:
Just a heads up that I’ll be committing this to trunk and branch-2 here
in a bit. I’ll
16, 2015, at 7:38 PM, Chris Nauroth cnaur...@hortonworks.com wrote:
I'd like to thank Allen Wittenauer for his work on HADOOP-11746 to rewrite
test-patch.sh. There is a lot of nice new functionality in there. My
favorite part is that some patches will execute much faster, so I expect
Hey gang,
Just so everyone is aware, if you are working on a patch for either a
feature branch or a major branch, if you name the patch with the branch name
following the spec in HowToContribute (and a few other ways… test-patch tries
to figure it out!), test-patch.sh
Vavilapalli vino...@hortonworks.com
wrote:
Does this mean HADOOP-7435 is no longer needed / closeable as dup?
Thanks
+Vinod
On Apr 22, 2015, at 12:34 PM, Allen Wittenauer a...@altiscale.com wrote:
Hey gang,
Just so everyone is aware, if you are working on a patch for either
On Apr 24, 2015, at 11:41 PM, Vinod Kumar Vavilapalli vino...@hortonworks.com
wrote:
Marco Zühlke pinged me offline informing me that I completely got my
issue-count wrong.
Seems like I had a very bizarre filter, I missed closing some tickets too at
release time.
You were
Oh, this is also in the release notes, but one can use a git reference # as
well. :) (with kudos to OOM for the idea.)
On Apr 22, 2015, at 8:57 PM, Allen Wittenauer a...@altiscale.com wrote:
More than likely. It probably needs more testing (esp under Jenkins).
It should be noted
On Apr 22, 2015, at 11:34 PM, Zheng, Kai kai.zh...@intel.com wrote:
Hi Allen,
This sounds great.
Naming a patch foo-HDFS-7285.00.patch should get tested on the HDFS-7285
branch.
Does it happen locally in developer's machine when running test-patch.sh, or
also mean something in
Err, first jira mentioned should be HADOOP-11861.
On Apr 22, 2015, at 8:10 AM, Allen Wittenauer a...@altiscale.com wrote:
Some status:
* So far, HADOOP-11627 was filed which is luckily an extremely easy bug to
fix.
* There have been a few runs which seems to indicate that *something