The "git" way of doing things would be to rebase the feature branch on
master (trunk) and then commit the patch stack.
Squashing the entire feature into a 10 MB megapatch is the "svn" way of
doing things.
The svn workflow evolved because merging feature branches back to trunk
was really painful.
On Mon, Aug 28, 2017, at 14:22, Allen Wittenauer wrote:
>
> > On Aug 28, 2017, at 12:41 PM, Jason Lowe wrote:
> >
> > I think this gets back to the "if it's worth committing" part.
>
> This brings us back to my original question:
>
> "Doesn't this place an undue
On Mon, Aug 28, 2017, at 09:58, Allen Wittenauer wrote:
>
> > On Aug 25, 2017, at 1:23 PM, Jason Lowe wrote:
> >
> > Allen Wittenauer wrote:
> >
> > > Doesn't this place an undue burden on the contributor with the first
> > > incompatible patch to prove worthiness? What
One anti-pattern that keeps coming up over and over again is people
trying to do big and complex features without feature branches. This
happened with HDFS truncate as well. This inevitably leads to
controversy because people see very big and invasive patches zooming
past and get alarmed.
I think the Tomcat situation is concerning in a lot of ways.
1. We are downloading without authentication, using http rather than
https.
2. We are downloading an obsolete release.
3. Our build process is violating the apache.archive.org guidelines by
downloading from the site directly, rather
Hi all,
Recently a discussion came up on HADOOP-13028 about the wisdom of
overloading S3AInputStream#toString to output statistics information.
It's a difficult judgement for me to make, since I'm not aware of any
compatibility guidelines for InputStream#toString. Do we have
compatibility
Thanks for explaining, Chris. I generally agree that
UserGroupInformation should be annotated as Public rather than
LimitedPrivate, although you guys have more context than I do.
However, I do think it's important that we clarify that we can break
public APIs across a major version transition
On Tue, May 10, 2016, at 11:34, Hitesh Shah wrote:
> There seem to be some incorrect assumptions on why the application had
> an issue. For rolling upgrade deployments, the application bundles the
> client-side jars that it was compiled against and uses them in its
> classpath and expects to be
Did INFRA have any information on this?
best,
On Fri, May 6, 2016, at 15:14, Allen Wittenauer wrote:
>
> Anyone know why?
>
+1 for updating this in trunk. Thanks, Tsuyoshi Ozawa.
cheers,
Colin
On Mon, May 9, 2016, at 12:12, Tsuyoshi Ozawa wrote:
> Hi developers,
>
> We’ve worked on upgrading jersey (HADOOP-9613) for years. It's an
> essential change to support compilation with JDK8. It’s almost there.
>
> One
> copy-pasting your
> current comments) on this issue to the JIRA: HADOOP-12893
>
> Thanks
> +Vinod
>
> > On Apr 7, 2016, at 7:43 AM, Sean Busbey <bus...@cloudera.com> wrote:
> >
> > On Wed, Apr 6, 2016 at 6:26 PM, Colin McCabe <cmcc...@apache.org
> >
In general, the only bundled native component I can see is lz4. I guess
debatably we should add tree.h to the NOTICE file as well, since it came
from BSD and is licensed under that license.
Please keep in mind bundling means "included in the source tree", NOT
"downloaded during the build
.@huawei.com>
> wrote:
>
> > https://issues.apache.org/jira/browse/INFRA-11597 has been filed for this.
> >
> > -Vinay
> >
> > -Original Message-
> > From: Colin McCabe [mailto:co...@cmccabe.xyz]
> > Sent: 05 April 2016 08:07
> > T
Yes, please. Let's disable these mails.
C.
On Mon, Apr 4, 2016, at 06:21, Vinayakumar B wrote:
> bq. We don't spam common-dev about every time a new patch attachment
> gets posted
> to an existing JIRA. We shouldn't do that for github either.
>
> Is there any update on this?
> Any INFRA
> On 3/22/16, 11:03 PM, "Allen Wittenauer"
> wrote:
>
> >> On Mar 22, 2016, at 6:46 PM, Gangumalla, Uma
> >>wrote:
> >>
> >>> is it possible for me to setup a branch, self review+commit to that
> >>> branch, then request a branch
If the underlying problem is lack of reviewers for these improvements,
how about a design doc giving some motivation for the improvements and
explaining how they'll be implemented? Then we can decide if a branch
or a few JIRAs on trunk makes more sense.
The description for HADOOP-12857 is just
On Mon, Sep 28, 2015 at 12:52 AM, Steve Loughran wrote:
>
> the jenkins machines are shared across multiple projects; cut the executors
> to 1/node and then everyone's performance drops, including the time to
> complete of all jenkins patches, which is one of the goals.
+1, would be great to see Hadoop get ipv6 support.
Colin
On Mon, Aug 17, 2015 at 5:04 PM, Elliott Clark ecl...@apache.org wrote:
Nate (nkedel) and I have been working on IPv6 on Hadoop and HBase lately.
We're getting somewhere but there are a lot of different places that make
assumptions
I think it might make sense to keep around a repository of third-party
open source native code that we use in Hadoop. Nothing fancy, just a
few .tar.gz files in a git repo that we manage. This would avoid
incidents like this in the future and ensure that we will be able to
build old versions of
On Mon, Mar 16, 2015 at 7:08 PM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
Branch merges made it hard to access change history on subversion
sometimes.
You can read the tale of woe here:
http://programmers.stackexchange.com/questions/206016/maintaining-svn-history-for-a-file-when-merge-is-done-from-the-dev
that
has merges, would you please elaborate the problem?
Thanks.
--Yongjun
Is there a maven plugin or setting we can use to simply remove
directories that have no executable permissions on them? Clearly we
have the permission to do this from a technical point of view (since
we created the directories as the jenkins user), it's simply that the
code refuses to do it.
to see a directory. That might
even let us enable some of these tests that are skipped on Windows,
because Windows allows access for the owner even after permissions have
been stripped.
+1. JIRA?
Colin
Chris Nauroth
Hortonworks
http://hortonworks.com/
On 3/11/15, 2:10 PM, Colin
+1 for starting to think about releasing 2.7 soon.
Re: building Windows binaries. Do we release binaries for all the
Linux and UNIX architectures? I thought we didn't. It seems a little
inconsistent to release binaries just for Windows, but not for those
other architectures and OSes. I wonder
errnum is 43, on Ubuntu 14.04 we have 133,
and on Solaris 11.1 we get 151.
If this is OK with you, I will open a jira for this.
Thanks,
Malcolm
On 12/12/2014 11:10 PM, Colin McCabe wrote:
Just use snprintf to copy the error message from strerror_r into a
thread-local buffer of 64 bytes
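That suggestion can be sketched roughly as follows (the function name and fallback message are illustrative, not Hadoop's actual native API; this assumes the XSI strerror_r that returns an int):

```c
/*
 * Sketch: copy strerror_r's output into a 64-byte thread-local buffer
 * so callers get a stable, thread-safe string without depending on
 * sys_errlist.  terror() is an illustrative name, not Hadoop's API.
 */
#define _POSIX_C_SOURCE 200112L   /* request XSI strerror_r (returns int) */
#include <errno.h>
#include <stdio.h>
#include <string.h>

#define ERRBUF_LEN 64

static __thread char errbuf[ERRBUF_LEN];  /* one buffer per thread */

const char *terror(int errnum) {
    if (strerror_r(errnum, errbuf, sizeof(errbuf)) != 0) {
        /* Unknown or oversized message: fall back to the number itself. */
        snprintf(errbuf, sizeof(errbuf), "unknown error %d", errnum);
    }
    return errbuf;
}
```

Note the feature-test macro: glibc also ships a GNU variant of strerror_r returning char *, which is exactly the kind of per-platform difference this thread is about.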
you to make the call.
Thanks for your patience,
Malcolm
On 12/10/2014 09:54 PM, Colin McCabe wrote:
On Wed, Dec 10, 2014 at 2:31 AM, malcolm
malcolm.kaval...@oracle.com wrote:
Hi Colin,
Thanks for the hints around JIRAs.
You are correct errno still exists, however sys_errlist does
changes. Perhaps it would be
better to file multiple JIRAs for each change, perhaps grouped, one per
issue? Or should I file a JIRA for each modified source file?
Thank you,
Malcolm
On 12/08/2014 09:53 PM, Colin McCabe wrote:
Hi Malcolm,
It's great that you are going to contribute
in.
cheers,
Colin
Thanks,
Malcolm
On 12/10/2014 10:45 AM, Colin McCabe wrote:
Hi Malcolm,
In general we file JIRAs for particular issues. So if one issue is
handling errlist on Solaris, that might be one JIRA. Another issue
might be handling socket write timeouts on Solaris. And so
Hi Malcolm,
It's great that you are going to contribute! Please make your patches
against trunk.
2.2 is fairly old at this point. It hasn't been the focus of
development in more than a year.
We don't use github or pull requests.
Check the section on
On Mon, Dec 8, 2014 at 7:46 AM, Steve Loughran ste...@hortonworks.com wrote:
On 8 December 2014 at 14:58, Ted Yu yuzhih...@gmail.com wrote:
Looks like there was still OutOfMemoryError :
On Wed, Nov 26, 2014 at 2:58 PM, Karthik Kambatla ka...@cloudera.com
wrote:
Yongjun, thanks for starting this thread. I personally like Steve's
suggestions, but think two digits should be enough.
I propose we limit the restrictions to versioning the patches with version
numbers and .patch
25, 2014 at 2:28 AM, Steve Loughran ste...@hortonworks.com
wrote:
On 25 November 2014 at 00:58, Bernd Eckenfels e...@zusammenkunft.net
wrote:
Hello,
Am Mon, 24 Nov 2014 16:16:00 -0800
schrieb Colin McCabe cmcc...@alumni.cmu.edu:
Conceptually, I think it's important
I'm usually an advocate for getting rid of unnecessary dependencies
(cough, jetty, cough), but a lot of the things in Guava are really
useful.
Immutable collections, BiMap, Multisets, Arrays#asList, the stuff for
writing hashCode() and equals(), String#Joiner, the list goes on. We
particularly
On Thu, Oct 2, 2014 at 1:15 PM, Ted Yu yuzhih...@gmail.com wrote:
On my Mac and on Linux, I was able to
find /usr/include/openssl/opensslconf.h
However the file is absent on Jenkins machine(s).
Just want to make sure that the file is needed for native build before
filing INFRA ticket.
On Wed, Oct 1, 2014 at 4:30 PM, John Smith sharepockywithj...@gmail.com wrote:
Hi developers.
I have some native code working on Solaris, but my changes use getgrouplist
from openssh. Is that OK? Do I need to do anything special? Is the
license in file enough?
Is the license in which file
It looks like builds are failing on the H9 host with "cannot access
java.lang.Runnable".
Example from
https://builds.apache.org/job/PreCommit-HDFS-Build/8313/artifact/patchprocess/trunkJavacWarnings.txt
:
[INFO]
[INFO] BUILD
...@hortonworks.com wrote:
all the slaves are getting re-booted; give it some more time
-giri
On Fri, Oct 3, 2014 at 1:13 PM, Ted Yu yuzhih...@gmail.com wrote:
Adding builds@
On Fri, Oct 3, 2014 at 1:07 PM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
It looks like builds are failing on the H9 host
Thanks, Steve.
Should we just put everything in patchprocess/ like before? It seems
like renaming this directory to PreCommit-HADOOP-Build-patchprocess/
or PreCommit-YARN-Build-patchprocess/ in various builds has created
problems, and not made things any more clear. What do you guys think?
On Mon, Sep 15, 2014 at 10:48 AM, Allen Wittenauer a...@altiscale.com wrote:
It’s now September. With the passage of time, I have a lot of doubts
about this plan and where that trajectory takes us.
* The list of changes that are already in branch-2 scare the crap out of any
risk
It's an issue with test-patch.sh. See
https://issues.apache.org/jira/browse/HADOOP-11084
best,
Colin
On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang andrew.w...@cloudera.com wrote:
We're still not seeing findbugs results show up on precommit runs. I see
that we're archiving ../patchprocess/*, and
+1 for using git log instead of CHANGES.txt.
Colin
On Wed, Sep 3, 2014 at 11:07 AM, Chris Douglas cdoug...@apache.org wrote:
On Tue, Sep 2, 2014 at 2:38 PM, Andrew Wang andrew.w...@cloudera.com wrote:
Not to derail the conversation, but if CHANGES.txt is making backports more
annoying, why
Thanks for making this happen, Karthik and Daniel. Great job.
best,
Colin
On Tue, Aug 26, 2014 at 5:59 PM, Karthik Kambatla ka...@cloudera.com wrote:
Yes, we have requested for force-push disabled on trunk and branch-*
branches. I didn't test it though :P, it is not writable yet.
On Tue,
This mailing list is for questions about Apache Hadoop, not commercial
Hadoop distributions. Try asking a Hortonworks-specific mailing list.
best,
Colin
On Thu, Aug 14, 2014 at 3:23 PM, Niels Basjes ni...@basjes.nl wrote:
Hi,
In the core Hadoop you can on your (desktop) client have multiple
On Fri, Aug 15, 2014 at 8:50 AM, Aaron T. Myers a...@cloudera.com wrote:
Not necessarily opposed to switching logging frameworks, but I believe we
can actually support async logging with today's logging system if we wanted
to, e.g. as was done for the HDFS audit logger in this JIRA:
+1.
best,
Colin
On Fri, Aug 8, 2014 at 7:57 PM, Karthik Kambatla ka...@cloudera.com wrote:
I have put together this proposal based on recent discussion on this topic.
Please vote on the proposal. The vote runs for 7 days.
1. Migrate from subversion to git for version control.
2.
On Tue, Jul 29, 2014 at 2:45 AM, 俊平堵 junping...@apache.org wrote:
Sun's Java code convention (published in '97) suggests 80 columns per
line for old-style terminals. It sounds pretty old. However, I've seen some
developers (not me :)) who like to open multiple terminals in one screen for
+1.
Colin
On Tue, Jul 22, 2014 at 2:54 PM, Karthik Kambatla ka...@cloudera.com
wrote:
Hi devs
As you might have noticed, we have several classes and methods in them that
are not annotated at all. This is seldom intentional. Avoiding incompatible
changes to all these classes can be
Thanks for working on this, Dmitry. It's good to see Hadoop support
another platform. The code changes look pretty minor, too.
best,
Colin
On Tue, Jul 8, 2014 at 7:08 AM, Dmitry Sivachenko trtrmi...@gmail.com wrote:
Hello,
I am trying to make hadoop usable on FreeBSD OS. Following Steve
A unit test failed. See my response on JIRA.
best,
Colin
On Tue, Jul 8, 2014 at 3:11 PM, Jay Vyas jayunit100.apa...@gmail.com wrote:
these appear to be Java errors related to your JDK?
Maybe your JDK doesn't match up well with your OS.
Consider trying Red Hat 6+ or Fedora 20?
On Jul 8,
hand, I
would imagine discussion and debate on what 8+ language features might be
useful to use at some future time could be a lively one.
On Wed, Jun 18, 2014 at 3:03 PM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
In CDH5, Cloudera encourages people to use JDK7. JDK6 has been EOL
Er, that should read "order in which it ran unit tests."
C.
On Fri, Jun 20, 2014 at 11:02 AM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
I think the important thing to do right now is to ensure our code
works with jdk8. This is similar to the work we did last year to fix
issues that cropped up
when java 7 goes EOL.
-steve
(personal opinions only, etc, )
On Mon, Apr 14, 2014 at 9:22 AM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
I think the bottom line here is that as long as our stable release
uses JDK6, there is going to be a very, very strong disincentive to
put
It's not always practical to edit the log4j.properties file. For one
thing, if you're using a management system, there may be many log4j
properties sprinkled around the system, and it could be difficult to figure
out which is the one you need to edit. For another, you may not (should
not?) have
I think the bottom line here is that as long as our stable release
uses JDK6, there is going to be a very, very strong disincentive to
put any code which can't run on JDK6 into trunk.
Like I said earlier, the traditional reason for putting something in
trunk but not the stable release is that it
I took a quick glance at the build output, and I don't think openssl
is getting linked statically into libhadooppipes.a.
I see the following lines:
Linking CXX static library libhadooppipes.a
/usr/bin/cmake -P CMakeFiles/hadooppipes.dir/cmake_clean_target.cmake
/usr/bin/cmake -E
I've been using JDK7 for Hadoop development for a while now, and I
know a lot of other folks have as well. Correct me if I'm wrong, but
what we're talking about here is not moving towards JDK7 but
breaking compatibility with JDK6.
There are a lot of good reasons to ditch JDK6. It would let us
I think we need some way of isolating YARN, MR, and HDFS clients from
the Hadoop dependencies. Anything else just isn't sane... whatever we
may say, there will always be clients that rely on the dependencies
that we pull in, if we make those visible. I can't really blame
clients for this. It's
+1 for making this guarantee explicit.
It also definitely seems like a good idea to test mixed versions in bigtop.
HDFS is not immune to new client, old server scenarios because the HDFS
client gets bundled into a lot of places.
Colin
On Mar 20, 2014 10:55 AM, Chris Nauroth
Looks good.
+1, also non-binding.
I downloaded the source tarball, checked md5, built, ran some unit
tests, ran an HDFS cluster.
cheers,
Colin
On Tue, Feb 11, 2014 at 6:53 PM, Andrew Wang andrew.w...@cloudera.com wrote:
Thanks for putting this together Arun.
+1 non-binding
Downloaded
There is a maximum length for message buffers that was introduced by
HADOOP-9676. So messages with length 1752330339 should not be
accepted.
best,
Colin
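Incidentally, 1752330339 is 0x68727063, the ASCII bytes "hrpc" (the Hadoop RPC connection header), which suggests a header was being parsed as a length. A hedged sketch of this kind of length sanity check (the 64 MB cap and all names here are illustrative; the actual HADOOP-9676 limit is configurable):

```c
/*
 * Sketch of a message-length sanity check like the one HADOOP-9676
 * added.  The 64 MB cap is illustrative only; the real limit is a
 * configurable server-side setting.
 */
#include <stdint.h>

#define MAX_MSG_LEN (64u * 1024u * 1024u)

/* Decode a big-endian 4-byte length prefix. */
uint32_t read_be32(const unsigned char *b) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int length_is_sane(uint32_t len) {
    return len <= MAX_MSG_LEN;
}
```

Here read_be32((const unsigned char *)"hrpc") comes out to exactly 1752330339, which is why that particular bogus length shows up.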
On Sat, Dec 28, 2013 at 11:06 AM, Dhaivat Pandya
dhaivatpan...@gmail.com wrote:
Hi,
I've been working a lot with the Hadoop NameNode IPC
If 2.4 is released in January, I think it's very unlikely to include
symlinks. There is still a lot of work to be done before they're
usable. You can look at the progress on HADOOP-10019. For some of
the subtasks, it will require some community discussion before any
code can be written.
For
On Wed, Nov 13, 2013 at 10:10 AM, Arun C Murthy a...@hortonworks.com wrote:
On Nov 12, 2013, at 1:54 PM, Todd Lipcon t...@cloudera.com wrote:
On Mon, Nov 11, 2013 at 2:57 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
To be honest, I'm not aware of anything in 2.2.1 that shouldn't
not sure if I can get
this
done in the next 7 days, so I'll keep you posted.
Chris Nauroth
Hortonworks
http://hortonworks.com/
On Fri, Oct 18, 2013 at 11:15 AM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
Hi Chris,
I think it's feasible to complete those tasks
all of the above in the next 7 days? For the issues assigned
to me, I do expect to complete them.
Thanks again for all of your hard work!
Chris Nauroth
Hortonworks
http://hortonworks.com/
On Thu, Oct 17, 2013 at 3:07 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
+1. Thanks, guys.
best,
Colin
On Thu, Oct 17, 2013 at 3:01 PM, Andrew Wang andrew.w...@cloudera.com wrote:
Hello all,
I'd like to call a vote to merge the HDFS-4949 branch (in-memory caching)
to trunk. Colin McCabe and I have been hard at work the last 3.5 months
implementing this feature
On Tue, Oct 1, 2013 at 8:59 PM, Arun C Murthy a...@hortonworks.com wrote:
Yes, sorry if it wasn't clear.
As others seem to agree, I think we'll be better getting a protocol/api
stable GA done and then iterating on bugs etc.
I'm not super worried about HADOOP-9984 since symlinks just made it
I don't think HADOOP-9972 is a must-do for the next Apache release,
whatever version number it ends up having. It's just adding a new
API, not changing any existing ones, and it can be done entirely in
generic code. (The globber doesn't involve FileSystem or AFS
subclasses).
My understanding is
What we're trying to get to here is a consensus on whether
FileSystem#listStatus and FileSystem#globStatus should return symlinks
__as_symlinks__. If 2.1-beta goes out with these semantics, I think
we are not going to be able to change them later. That is what will
happen in the do nothing
I think it makes sense to finish symlinks support in the Hadoop 2 GA release.
Colin
On Mon, Sep 16, 2013 at 6:49 PM, Andrew Wang andrew.w...@cloudera.com wrote:
Hi all,
I wanted to broadcast plans for putting the FileSystem symlinks work
(HADOOP-8040) into branch-2.1 for the pending Hadoop 2
The issue is not modifying existing APIs. The issue is that code has
been written that makes assumptions that are incompatible with the
existence of things that are not files or directories. For example,
there is a lot of code out there that looks at FileStatus#isFile, and
if it returns false,
On Wed, Aug 21, 2013 at 3:49 PM, Stack st...@duboce.net wrote:
On Wed, Aug 21, 2013 at 1:25 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
St.Ack wrote:
+ Once I figured where the logs were, found that JAVA_HOME was not being
exported (don't need this in hadoop-2.0.5 for instance). Adding an
exported JAVA_HOME to my running shell which don't seem right but it took
care of it (I gave up pretty quick on messing w/
to CRLF as needed. After all,
eol-style=native would not be very useful if it only applied on
checkout. Windows users would be constantly checking in CRLF in that
case.
I'm not an svn expert, though, and I haven't tested the above.
Colin
On Fri, Jun 28, 2013 at 1:03 PM, Colin McCabe cmcc
I think the fix for this is to set svn:eol-style to native on this
file. It's set on many other files, just not on this one:
cmccabe@keter:~/hadoopST/trunk$ svn propget svn:eol-style ./hadoop-project-dist/README.txt
native
cmccabe@keter:~/hadoopST/trunk$ svn propget svn:eol-style
Hi Chris,
Thanks for the report. I filed
https://issues.apache.org/jira/browse/HADOOP-9667 for this.
Colin
Software Engineer, Cloudera
On Mon, Jun 24, 2013 at 2:20 AM, Christopher Ng cng1...@gmail.com wrote:
cross-posting this from cdh-users group where it received little interest:
is
You might try looking at what KosmoFS (KFS) did. They have some code in
org/apache/hadoop/fs which calls their own Java shim.
This way, the shim code in hadoop-common gets updated whenever FileSystem
changes, but there is no requirement to install KFS before building Hadoop.
You might also try
+1 (non-binding)
best,
Colin
On Sun, Mar 10, 2013 at 8:38 PM, Matt Foley ma...@apache.org wrote:
Hi all,
I have created branch-1.2 from branch-1, and propose to cut the first
release candidate for 1.2.0 on Monday 3/18 (a week from tomorrow), or as
soon thereafter as I can achieve a stable
Hi Erik,
Eclipse can run junit tests very rapidly. If you want a shorter test
cycle, that's one way to get it.
There is also Maven-shell, which reduces some of the overhead of starting
Maven. But I haven't used it so I can't really comment.
cheers,
Colin
On Mon, Jan 21, 2013 at 8:36 AM,
for this and tweak the reporting
code.
Which, if it involved going near maven source, is not something I am
prepared to do
On 14 December 2012 18:57, Colin McCabe cmcc...@alumni.cmu.edu wrote:
One approach we've taken in the past is making the junit test skip
itself when some
On Tue, Dec 18, 2012 at 1:05 AM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
On Mon, Dec 17, 2012 at 11:03 AM, Steve Loughran ste...@hortonworks.com
wrote:
On 17 December 2012 16:06, Tom White t...@cloudera.com wrote:
There are some tests like the S3 tests that end with Test (e.g
One approach we've taken in the past is making the junit test skip
itself when some precondition is not true. Then, we often create a
property which people can use to cause the skipped tests to become a
hard error.
For example, all the tests that rely on libhadoop start with these lines:
@Test
On Fri, Dec 7, 2012 at 5:31 PM, Radim Kolar h...@filez.com wrote:
> 1. cmake and protoc maven plugins already exist. Why do you want to write
> new ones?
This has already been discussed; see
https://groups.google.com/forum/?fromgroups=#!topic/cmake-maven-project-users/5FpfUHmg5Ho
Actually the
Hi Peter,
This might be a good question for hdfs-dev?
As Harsh pointed out below, HDFS-573 was never committed. I don't
even see a patch attached, although there is some discussion.
In the mean time, might I suggest using the webhdfs interface on
Windows? webhdfs was intended as a stable REST
Hi Andrew,
It seems that your attachment did not appear on the mailing list.
I'm taking a look at the branch-2 build presently.
Colin
On Thu, Oct 18, 2012 at 6:20 PM, Andrew Purtell apurt...@apache.org wrote:
The port of HADOOP-8887 to branch-2 fails the build. Please kindly see
attached.
</version>
+ <version>2.0.3-SNAPSHOT</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
--
1.7.9.5
On Thu, Oct 18, 2012 at 6:35 PM, Colin McCabe cmcc...@alumni.cmu.edu wrote:
Hi Andrew,
It seems that your attachment did not appear
We could also call uname from test-patch.sh and skip running native
tests on Mac OS X.
I also think that HADOOP-7147 should be open rather than "won't fix",
as Alejandro commented. Allen Wittenauer closed it as "won't fix"
because he personally did not intend to fix it, but that doesn't mean
it's
Hi all,
We'd like to use CMake instead of autotools to build native (C/C++) code in
Hadoop. There are a lot of reasons to want to do this. For one thing, it is
not feasible to use autotools on the Windows platform, because it depends on
UNIX shell scripts, the m4 macro processor, and some other