On 22 May 2012 11:33, Radim Kolar h...@filez.com wrote:
i have better experience with scons instead of cmake
mmm, may be better to jump beyond make altogether to the higher levels of
nativeish build tools.
What would be good is that the output can be parsed by jenkins; have that
set up to
1. This is a user question, so please use the common-user or mapreduce-user
mailing lists. There are more people on it and it is the better place.
2. Before panicking and asking others for help, always try to do a bit of
research. The stack trace says the cause is BindException and Address
This is a hadoop-user q, not a development one -please use the right list, as
user questions get ignored on the dev ones.
also:
http://wiki.apache.org/hadoop/ConnectionRefused
On 11 July 2012 19:23, Momina Khan momina.a...@gmail.com wrote:
i use the following command to try to copy data from hdfs to
There aren't bin and net categories in JIRA, yet often bugs go against the
code there?
Should I add them?
done.
On 6 August 2012 13:45, Eli Collins e...@cloudera.com wrote:
Works for me. I've been adding missing ones that make sense (eg webhdfs).
On Mon, Aug 6, 2012 at 11:34 AM, Steve Loughran ste...@hortonworks.com
wrote:
There aren't bin and net categories in JIRA, yet often bugs go against
correction: i'd like a junit test that pushes some of the std writables to
an ObjectOutputStream and back, verifies round-tripping of data. It may
seem extra work -but it stops serialization breaking in future.
-steve
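The round-trip check described above can be sketched in plain Java. The helper below pushes values through an ObjectOutputStream and back; String and Integer stand in for the Hadoop writables (the class and method names here are illustrative, not existing Hadoop code -a real test would feed in e.g. Text or IntWritable, assuming they implement java.io.Serializable):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of the round-trip test described above. In the real JUnit test
// the values pushed through would be Hadoop Writables.
public class RoundTripSketch {

    // Serialize a value to bytes via ObjectOutputStream, then read it back.
    static Object roundTrip(Serializable value) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-ins for the writables; the check is that data survives intact.
        if (!"hello".equals(roundTrip("hello"))) throw new AssertionError();
        if (!Integer.valueOf(42).equals(roundTrip(42))) throw new AssertionError();
        System.out.println("round-trip ok");
    }
}
```

The point of such a test is exactly what the mail says: it costs little now, but catches any future change that silently breaks serialization compatibility.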
On 9 August 2012 09:11, Steve Loughran ste...@hortonworks.com wrote:
I've
--
Steve Loughran
Hortonworks Inc
ste...@hortonworks.com
skype: steve_loughran
tel: +1 408 400 3721
http://hortonworks.com/download/
What are you trying to log/analyse?
I self-assigned "do better logging for machine analysis" a long time
ago, but never sat down to do any of it -yet-
https://issues.apache.org/jira/browse/HADOOP-7466
On 23 August 2012 20:44, Li Shengmei lisheng...@ict.ac.cn wrote:
Hi, all
I
On 29 August 2012 10:36, Li Shengmei lisheng...@ict.ac.cn wrote:
Hi, Steven,
Thank you very much. I have just started to study the log analysis of
HADOOP. I will review your work carefully.
Btw, which part of the hadoop source code should be read carefully if I want
to get to know the log records?
On 5 October 2012 18:27, Thilee Subramaniam thi...@quantcast.com wrote:
We at Quantcast have released QFS 1.0 (Quantcast File System) to open
source. This is based on the KFS 0.5 (Kosmos Distributed File System),
a C++ distributed filesystem implementation. KFS plugs into Apache
Hadoop via
On 10 October 2012 16:03, Thilee Subramaniam thi...@quantcast.com wrote:
Hi Steve,
Like Harsh said, HADOOP-8886 addresses removing KFS from apache tree.
But I interpret your suggestion as 'moving qfs.jar out of the apache tree, and
keeping the jar externally, possibly in a maven repo'. The new fs
Good point Steve. This touches on the larger issue of whether it
makes sense to host FS clients for other file systems in Hadoop
itself. I agree with what I think you're getting at, which is - if we can
handle the testing and integration via external dependencies it would
probably be better to
On 11 October 2012 00:34, Thilee Subramaniam thi...@quantcast.com wrote:
My initial goal was to make Hadoop use QFS the same way it used KFS. Since
Hadoop branch-1 had lib/kfs.xx.jar, I was expecting to include a
qfs.x.x.jar in the Hadoop release; my first patch was to use such jar. But
now
+1, only creates confusion
On 15 October 2012 19:00, Eli Collins e...@cloudera.com wrote:
Hey Bobby,
That's correct, I mean the packages directories in common, hdfs, and
MR top-level directories, which contain the debs and RPMs. I'm not
opposed to someone re-working/contributing new code
On 26 October 2012 01:24, Thilee Subramaniam thi...@quantcast.com wrote:
We have made the changes recommended here, and made available a 'Hadoop
QFS jar' with QFS. This plugin and the QFS libraries will be maintained and
released by the QFS open-source project.
Please see the download and
negative exit code on bad arguments. (Steve Loughran via suresh)
MAPREDUCE-4782. NLineInputFormat skips first line of last InputSplit
(Mark Fuhs via bobby)
don't worry about the patch fail message on jenkins as it currently only
patches against trunk
The standard development process is
-use Git, with git.apache.org as the read-only repository
-branch for each JIRA issue, from trunk or branch-1 for the asf versions
e.g
On 22 November 2012 02:40, Chris Nauroth cnaur...@hortonworks.com wrote:
It seems like the trickiest issue is preservation of permissions and
symlinks in tar files. I suspect that any JVM-based solution like custom
Maven plugins, Groovy, or jtar would be limited in this respect. According
On 21 November 2012 19:15, Matt Foley ma...@apache.org wrote:
This discussion started in
Those of us involved in the branch-1-win port of Hadoop to Windows without
use of Cygwin, have faced the issue of frequent use of shell scripts
throughout the system, both in build time (eg, the utility
On 21 November 2012 15:03, Radim Kolar h...@filez.com wrote:
what it takes to gain commit access to hadoop?
good question.
I've put some of my thoughts on the topic into a presentation I gave last
month:
http://www.slideshare.net/steve_l/inside-hadoopdev
That isn't so much about
On 24 November 2012 20:13, Matt Foley ma...@apache.org wrote:
For discussion, please see previous thread [PROPOSAL] introduce Python as
build-time and run-time dependency for Hadoop and throughout Hadoop stack.
This vote consists of three separate items:
1. Contributors shall be allowed to
On 26 November 2012 21:25, Radim Kolar h...@filez.com wrote:
The main feature is that when you get the +1 vote you yourself get to
deal with the grunge work of applying
patches to one or more svn branches, resyncing that with the git branches
you inevitably do your own work on.
no, main
On 30 November 2012 00:29, Radim Kolar h...@filez.com wrote:
* What else in the current build, besides saveVersion.sh, you see as
candidate to be migrated to Python?
inline ant scripts
=0. Ant's versioning is stricter; you can pull down the exact Jar versions,
and some of us in the Ant
On 30 November 2012 12:57, Luke Lu l...@apache.org wrote:
I'd like to change my binding vote to -1, -0, -1.
Considering the hadoop stack/ecosystem as a whole, I think the best cross
platform scripting language to adopt is jruby for following reasons:
1. HBase already adopted jruby for HBase
On 1 December 2012 01:08, Eli Collins e...@cloudera.com wrote:
-1, 0, -1
IIUC the only platform we plan to add support for that we can't easily
support today (w/o an emulation layer like cygwin) is Windows, and it
seems like making the bash scripts simpler and having parallel bat
files is
On 30 November 2012 13:40, Radim Kolar h...@filez.com wrote:
inline ant scripts
=0. Ant's versioning is stricter; you can pull down the exact Jar
versions,
and some of us in the Ant team worked very hard to get it going
everywhere.
You don't gain anything by going to .py
there are sh
The RPMs are being built with bigtop;
grab it from here
https://github.com/apache/bigtop/tree/master/bigtop-packages/src/rpm/hadoop
I'm not sure which branch to use for hadoop-1.1.1; let me check that
On 4 December 2012 17:24, Michael Johnson m...@michaelpjohnson.com wrote:
Hello All,
I've
On 4 December 2012 18:51, Michael Johnson m...@michaelpjohnson.com wrote:
On 12/04/2012 12:54 PM, Harsh J wrote:
The right branch is branch-0.3 for Bigtop. You can get more
information upstream at Apache Bigtop itself
(http://bigtop.apache.org).
Branch 0.3 of the same URL Steve posted:
The swiftfs tests need only to run if there's a target filesystem; copying
the s3/s3n tests, something like
<property>
  <name>test.fs.swift.name</name>
  <value>swift://your-object-store-here/</value>
</property>
How does one actually go about making junit tests optional in mvn-land?
Should the
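One common answer to the "optional tests" question (a sketch, not necessarily what the swift module ended up doing): read the target filesystem URI from configuration and skip when it is absent. In JUnit 4 the guard would be `Assume.assumeNotNull(...)` in a `@Before` method; the logic is shown here with plain-Java stand-ins so it is self-contained (`test.fs.swift.name` is the property from the snippet above; the class and method names are illustrative):

```java
// Sketch: skip filesystem-contract tests when no target store is configured.
// With JUnit 4, the guard would be Assume.assumeNotNull(targetFs()) in a
// @Before method; surefire then reports the tests as skipped, not failed.
public class OptionalFsTestGuard {

    // Returns the configured test filesystem URI, or null when unset.
    static String targetFs() {
        String uri = System.getProperty("test.fs.swift.name");
        return (uri == null || uri.isEmpty()) ? null : uri;
    }

    public static void main(String[] args) {
        String fs = targetFs();
        if (fs == null) {
            System.out.println("no test.fs.swift.name set; skipping swift tests");
            return;
        }
        System.out.println("running swift contract tests against " + fs);
    }
}
```

This also addresses the secrets problem mentioned later in the thread: the credentials live in a local, uncommitted properties/XML file, and the tests simply skip when it is missing.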
14, 2012 at 9:56 AM, Steve Loughran ste...@hortonworks.com
wrote:
The swiftfs tests need only to run if there's a target filesystem;
copying
the s3/s3n tests, something like
<property>
  <name>test.fs.swift.name</name>
  <value>swift://your-object-store-here/</value>
</property>
How
it easier to turn those and the rackspace ones on without sticking my
secrets into an XML file under SCM
Tom
On Mon, Dec 17, 2012 at 10:06 AM, Steve Loughran ste...@hortonworks.com
wrote:
thanks, I'll have a look. I've always wanted to add the notion of skipped
to test runs -all the way through
On 18 December 2012 09:11, Colin McCabe cmcc...@alumni.cmu.edu wrote:
On Tue, Dec 18, 2012 at 1:05 AM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:
another tactic could be to have specific test projects: test-s3,
test-openstack, test-... which contain nothing but test cases. You'd set
On 18 December 2012 09:05, Colin McCabe cmcc...@alumni.cmu.edu wrote:
I think the way to go is to have one XML file include another.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property>
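A complete version of that include pattern might look like the following (the keys, values, and file name are illustrative; Hadoop's Configuration loader supports XInclude, so the second file can hold e.g. credentials kept out of SCM):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- ordinary site-specific settings -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:8020</value>
  </property>
  <!-- pull in a second config file, e.g. one holding secrets -->
  <xi:include href="auth-keys.xml"/>
</configuration>
```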
On 18 December 2012 15:00, Simone Leo simone@crs4.it wrote:
it looks like I don't have write permissions on the Hadoop wiki anymore
(account: SimoneLeo).
I've given you access, ping me if it doesn't work
-steve
On 17 December 2012 16:06, Tom White t...@cloudera.com wrote:
There are some tests like the S3 tests that end with Test (e.g.
Jets3tNativeS3FileSystemContractTest) - unlike normal tests which
start with Test. Only those that start with Test are run
automatically (see the surefire
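Surefire decides what runs purely from class-name patterns. Restricting the includes to the Test-prefix convention Tom describes would look like this pom fragment (illustrative -the actual Hadoop poms may differ):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- only classes whose names start with Test run automatically -->
      <include>**/Test*.java</include>
    </includes>
    <!-- classes ending in ...Test, like the S3 contract tests,
         then have to be invoked explicitly -->
  </configuration>
</plugin>
```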
My setup (I work from home)
# OS/X laptop w/ 30" monitor
# FTTC broadband, 55Mbit/s down, 15+ up -it's the upload bandwidth that
really helps development: http://www.flickr.com/photos/steve_l/8050751551/
# IntelliJ IDEA IDE, settings edited for a 2GB Heap
# Maven on the command line for builds
#
thanks, just seen and commented on this.
If we're going to have test timeouts, we need a good recommended default
value for all tests except the extra slow ones.
Or
1. we just use our own JUnit fork that sets up a better default value than
0. I don't know how Ken would react to that.
2. we get
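For context on what a per-test timeout does: JUnit 4's `@Test(timeout = ...)` runs the test body on a worker thread and fails the test once the limit is exceeded. The mechanism can be sketched with plain java.util.concurrent (class and method names here are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of what a per-test timeout does: run the test body on a worker
// thread, and flag a failure if it exceeds the limit.
public class TimeoutSketch {

    // Returns true if the task finished within timeoutMillis.
    static boolean runWithTimeout(Callable<Void> body, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Void> f = pool.submit(body);
            f.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A fast "test" passes; a slow one trips the timeout.
        if (!runWithTimeout(() -> null, 1000)) throw new AssertionError();
        if (runWithTimeout(() -> { Thread.sleep(5000); return null; }, 100))
            throw new AssertionError();
        System.out.println("timeout sketch ok");
    }
}
```

The debate above is about what that default limit should be project-wide, given that a value of 0 means no timeout at all.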
On 2 March 2013 03:33, Konstantin Boudnik c...@apache.org wrote:
Windows is so different from _any_ Unix or pseudo-Unix flavors, including
Windows with Cygwin - that even multi-platform Java has a hard time
dealing with it. This is enough, IMO, to warrant a separate checkpoint.
Cygwin is
On 6 March 2013 23:17, Matt Foley ma...@apache.org wrote:
Hi, I got stuck in other work and did not make the Hadoop 1.2 branch in
February.
Now that release 1.1.2 is out, I'm ready to make the 1.2 branch.
I intend to branch for 1.2 in the next night or two, and at that point will
make the
On 11 March 2013 03:38, Matt Foley ma...@apache.org wrote:
Hi all,
I have created branch-1.2 from branch-1, and propose to cut the first
release candidate for 1.2.0 on Monday 3/18 (a week from tomorrow), or as
soon thereafter as I can achieve a stable build.
Between 1.1.2 and the current
On 13 March 2013 16:31, Thomas Graves tgra...@yahoo-inc.com wrote:
Hello all,
I think enough critical bug fixes have gone into branch-0.23 to warrant
another release. I plan on creating a 0.23.7 release by the end of March.
Please vote '+1' to approve this plan. Voting will close on
On 14 March 2013 17:06, Vikas Jadhav vikascjadha...@gmail.com wrote:
for the first job ANT downloads jars from the internet
how do I build offline using ANT?
--
ant needs all the dependencies. Once that first build is done, it will not
need to do it again, as the stuff is cached on your disk somewhere
On 15 March 2013 09:18, springring springr...@126.com wrote:
Hi,
my hadoop version is Hadoop 0.20.2-cdh3u3 and I want to define a new
InputFormat from the hadoop book, but there is an error
class org.apache.hadoop.streaming.WholeFileInputFormat not
org.apache.hadoop.mapred.InputFormat
Hadoop
have you considered joining the u...@hadoop.apache.org and asking the
question there?
On 1 April 2013 17:38, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hi
I want to process/store all data pertaining to one reducer.
I want to store it in some data structure depending on the key, for example
On 3 April 2013 15:46, Chandrashekhar Kotekar shekhar.kote...@gmail.comwrote:
Thanks a lot for your help.
You were right. Problem was with Protoc version 1.5 only. I downloaded and
added protoc 1.4 version and now that error is gone. However now I am stuck
at this new error. Now maven is not
On 8 April 2013 16:08, Mohammad Mustaqeem 3m.mustaq...@gmail.com wrote:
Please tell me what I am doing wrong?
What's the problem?
a lot of these seem to be network-related tests. You can turn off all the
tests; look in BUILDING.TXT at the root of the source tree for the various
operations,
On 18 April 2013 18:32, Noelle Jakusz (c) njak...@vmware.com wrote:
+1
There are quite a few new people, so maybe start a collaborative group
where you can collect notes and steps (videos and articles). I know I would
have some for you that I have created as I have gotten started... it would
On 19 April 2013 23:08, Noelle Jakusz (c) njak...@vmware.com wrote:
I have created an account (noellejakusz) and would like write access to
help with this...
OK, you have write access
On 22 April 2013 14:00, Karthik Kambatla ka...@cloudera.com wrote:
Hadoop devs,
This doc does not intend to propose new policies. The idea is to have one
document that outlines the various compatibility concerns (lots of areas
beyond API compatibility), captures the respective policies that
On 22 April 2013 18:32, Eli Collins e...@cloudera.com wrote:
On Mon, Apr 22, 2013 at 5:42 PM, Steve Loughran ste...@hortonworks.com
wrote:
There's a separate issue that says we make some guarantee that the
behaviour of a interface remains consistent over versions, which is hard
to do
On 23 April 2013 09:13, Chris Smith csmi...@gmail.com wrote:
And there is another scheduler, Dynamic Priority Scheduling, lurking in the
backwater of 0.21.0 that allows users to 'bid' for additional time.
Getting this back into current 1.x may be a great way to understand about
scheduling:
On 23 April 2013 09:00, Steve Loughran ste...@hortonworks.com wrote:
On 22 April 2013 18:32, Eli Collins e...@cloudera.com wrote:
However if a change made FileSystem#close three times slower, this is
perhaps a smaller semantic change (eg doesn't change what exceptions
get thrown
On 23 April 2013 11:32, Andrew Purtell apurt...@apache.org wrote:
At the risk of hijacking this conversation a bit, what do you think of the
notion of moving interfaces like Seekable and PositionedReadable into a new
foundational Maven module, perhaps just for such interfaces that define and
On 23 April 2013 17:25, Roman Shaposhnik r...@apache.org wrote:
Hi!
Now that Hadoop 2.0.4-alpha is released I'd like
to open up a discussion on what practical steps
would it take for us as a community to get
Hadoop 2.X from alpha to beta?
There's quite a few preconditions to be met for
you need those patches to remove sun-specific bits in, don't you?
On 25 April 2013 19:23, Amir Sanjar v1san...@us.ibm.com wrote:
Arun, thanks for the update. This is indeed the news we (IBM) have been
waiting for. Please let us know if there is anyway
we can help.
Best Regards
Amir Sanjar
On 29 April 2013 14:20, Amit Sela am...@infolinks.com wrote:
Thanks for the reply Chris!
I'm actually running on Fedora 17... I went ahead and changed to the
forrest findbugs versions you recommended (the log file had an issue with
apache-forrest-0.8), and now when I look at the log and see
Phone# 512-286-8393
Fax# 512-838-8858
On 1 May 2013 06:33, Thoihen Maibam thoihen...@gmail.com wrote:
Hi All,
Can somebody help me with what I need to do after creating a patch. I have seen
one subject 'How to test the patch' but that isn't really helping me. I am
stuck in the areas below.
dev-support/test-patch can test your patch
On 8 May 2013 21:20, st...@stevendyates.com wrote:
Hi Harsh,
Thanks for responding,
I would be interested in what the dev group had in mind for this and I
also have a couple of additional queries ;
I can see that a quick win for this would be to expose the existing Jetty
statistics
On 9 May 2013 20:39, sya...@stevendyates.com wrote:
Unless there are existing bits of this stuff lurking somewhere in the
Hadoop codebase that I haven't noticed, these could be copied into hadoop
core. Reviewing the code as it is would be welcome
On 15 May 2013 10:57, Arun C Murthy a...@hortonworks.com wrote:
Folks,
A considerable number of people have expressed confusion regarding the
recent vote on 2.0.5, beta status etc. given lack of specifics, the voting
itself (validity of the vote itself, whose votes are binding) etc.
IMHO
On 15 May 2013 15:02, Arun C Murthy a...@hortonworks.com wrote:
Roman,
Furthermore, before we rush into finding flaws and scaring kids at night
it would be useful to remember one thing:
Software has *bugs*. We can't block any release till the entire universe
validates it, in fact they won't
On 15 May 2013 23:19, Konstantin Boudnik c...@apache.org wrote:
Guys,
I guess what you're missing is that Bigtop isn't a testing framework for
Hadoop. It is a stack framework that verifies that components are dealing
with
each other nicely.
which to me means Some form of integration test
On 21 May 2013 23:47, Jagane Sundar jag...@sundar.org wrote:
I see one significant benefit to having Release Plan votes: Fewer releases
with more members of the community working on any given release.
In turn, fewer Hadoop releases implies less confusion for end users
attempting to download
-effecting operations, such as a recursive delete of a v.
large directory. That remote testing, therefore, helps me find such pains
before they hit the field.
You might also try asking Steve Loughran, since he did some great work
recently to try to nail down the exact semantics of FileSystem
are a little ambiguous. With that said, we've joined Steve Loughran in
attempting to clarify these for both the Hadoop 1.0 and the Hadoop 2.0
FileSystem class over at https://issues.apache.org/jira/browse/HADOOP-9371
It seems to me that once we had these semantics defined, it would be good
congratulations! I propose you should celebrate your new rights by
reviewing some of my outstanding patches, such as
https://issues.apache.org/jira/browse/HADOOP-8545
On 28 May 2013 23:07, Aaron T. Myers a...@cloudera.com wrote:
On behalf of the Apache Hadoop PMC, I'd like to announce the
It's up as https://issues.apache.org/jira/browse/HDFS-4866
On 29 May 2013 21:53, Arpit Agarwal aagar...@hortonworks.com wrote:
Ralph, could you please file a Jira? We'll fix it.
-Arpit
On Wed, May 29, 2013 at 9:39 AM, Ralph Castain r...@open-mpi.org wrote:
Hi folks
On line 228 of
On 30 May 2013 22:14, Adam Kawa kawa.a...@gmail.com wrote:
Hi,
When uploading new content (and information about my company), I got the
exception
Sorry, can not save page because rubbelloselotto.de is not allowed in
this wiki.
How could I solve it?
Kind regards,
Adam
the word lotto is what the wiki's spam filter is objecting to
API calls are made, older FileSystem implementations don't work anymore.
For context, here is a closely related JIRA
https://issues.apache.org/jira/browse/HADOOP-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
On Fri, May 31, 2013 at 6:09 AM, Steve Loughran ste
. If anyone does want to run the test themselves, try the docs
and then escalate to us if there are any problems.
Steve Loughran, Dmitry Mezhenskiy (Mirantis) David Dobbins (Rackspace)
thanks
On 3 June 2013 17:20, Suresh Srinivas sur...@hortonworks.com wrote:
Steve, I will review this. Might need a couple of days though.
On Mon, Jun 3, 2013 at 7:12 AM, Steve Loughran ste...@hortonworks.com
wrote:
Hi,
We've got the HADOOP-8545
https://issues.apache.org/jira/browse
Arun,
What do I do if I have a minor patch that I'd like to get out into 2.1, but
where I don't want to introduce instability into that 2.1 branch if you are
trying to make a beta release? For example,
HADOOP-9651https://issues.apache.org/jira/browse/HADOOP-9651 changes
the exception that
On 26 June 2013 09:17, Arun C Murthy a...@hortonworks.com wrote:
Folks,
The RC tag in svn is here:
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc0
The maven artifacts are available via repository.apache.org.
did all the poms go up?
[INFO]
done
On 1 July 2013 19:47, erman pattuk ermanpat...@su.sabanciuniv.edu wrote:
Hi,
I was planning to add our current project, BigSecret, to the powered-by
list of HBase. https://wiki.apache.org/hadoop/PoweredBy
I believe I need to get write permissions to do so. My username is
ermanpattuk.
thanks -I've commented on it briefly.
Could you print the blog post to PDF and attach it too -it ensures that it
will stay with the JIRA forever
On 8 July 2013 06:14, Jean-Baptiste Onofré j...@nanthrax.net wrote:
Hi folks,
Maybe some of you remember the discussion about OSGi support some
Hi,
I've been through suresh's review of the HADOOP-8545 patch and pulled in
the changes -that patch is now ready for review by anyone, testing even
better.
I'd like to get this in not just because we've been working on it for a
long time, but because I'm trying to specify more rigorously what
the Hadoop on Swift work. I have a single-node Swift
setup and will first try running the unit tests against the setup.
I'm guessing the most up-to-date documentation is the
hadoop-openstack/src/site/apt/index.apt.vm
added in HADOOP-8545? Thanks,
Stephen
On Tue, Jul 9, 2013 at 8:15 AM, Steve
create a wiki account then ask for write access we'll set you up
On 10 July 2013 01:56, Akira AJISAKA ajisa...@oss.nttdata.co.jp wrote:
Hi,
I've installed ProtocolBuffer 2.5.0 according to [[wiki:
HowToContribute]]. And that's why I failed to build hadoop and I had to
downgrade protobuf to
On 8 July 2013 19:28, Tim St Clair tstcl...@redhat.com wrote:
Arun,
https://issues.apache.org/jira/browse/HADOOP-9680 / 9623
fixing S3 is something that is niggling me as something I want to sit down
and do once the Swift stuff is in -and once we've clarified some quirks w/
the FS API, and
On 10 July 2013 18:54, Akira AJISAKA ajisa...@oss.nttdata.co.jp wrote:
Thank you for your comments.
create a wiki account then ask for write access we'll set you up
I created a wiki account.
(https://wiki.apache.org/hadoop/AkiraAjisaka)
On 30 July 2013 14:29, Arun C Murthy a...@hortonworks.com wrote:
Folks,
I've created another release candidate (rc1) for hadoop-2.1.0-beta that I
would like to get released. This RC fixes a number of issues reported on
the previous candidate.
This release represents a *huge* amount of work
would be nice for JDK7 builds to work on OSX -gave up the last time I
tried, though we should see if the mvn plugins have been updated for this
https://issues.apache.org/jira/browse/HADOOP-9350
http://steveloughran.blogspot.co.uk/2013/03/hadoop-java7-and-osx-or-what-is-it.html
It looks like there
all this sorted out, we had some
miscommunication on how to install protoc in the jenkins boxes and instead
of having 2.4.1 and 2.5.0 side by side we got only 2.5.0.
By tomorrow we should have things mostly sorted out.
Thanks
On Tue, Aug 13, 2013 at 3:29 PM, Steve Loughran ste...@hortonworks.com
On 16 August 2013 10:05, Andrew Pennebaker apenneba...@42six.com wrote:
Thanks for the link!
I understand a lack of support *for* IPv6, what I don't understand is why
IPv6 must be disabled in order for Hadoop to work. On systems with both
IPv4 and IPv6, I thought IPv4 apps could ignore IPv6
On 16 August 2013 02:50, Jun Ping Du j...@vmware.com wrote:
Hi Tsuyoshi,
I just checked Hadoop wiki on HowToContribute and it points
ProtocolBuffer things to YARN Readme which is already updated to 2.5.0 now.
Thanks,
Junping
I remember spending time on that page trying to document
+1, binding
Review process
# symlink /usr/local/bin/protoc to the homebrew installed 2.5.0 version
# delete all 2.1.0-beta artifacts in the mvn repo:
find ~/.m2 -name 2.1.0-beta -print | xargs rm -rf
# checkout hbase apache/branch-0.95 (commit # b58d596 )
# switch to ASF repo (arun's
Should we go through the JIRA workflow of releasing 2.1-beta, and move all
open targeted-at-2.1-beta issues to something else -2.1.0?
Currently JIRA still views 2.1-beta as unreleased
-steve
--
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to
sure, create a wiki account, email the login name here and we'll give you
write access
On 30 August 2013 02:34, Pere Ferrera ferrerabert...@gmail.com wrote:
Hello,
Datasalt (http://www.datasalt.com/) is a Hadoop consulting company which
released two open-source products on top of Hadoop
commercial support, public / private training and custom Hadoop development.
I get:
Sorry, can not save page because t.co is not allowed in this wiki.
There's actually t.co within datasalt.com but I don't see how to avoid
this. Any suggestion?
On Sun, Sep 1, 2013 at 6:53 AM, Steve Loughran ste
There's now a patch (and instructions of a manual stage) to get hadoop to
build on java7 on OSX
before I commit it, I need to make sure it doesn't break on any other
platforms.
Can anyone with Linux, Windows, PowerPC, Arm, whatever (and free time,
obviously) and an openjdk7 or closed-jdk7, apply
11, 2013 at 1:51 AM, Steve Loughran ste...@hortonworks.com
wrote:
There's now a patch (and instructions of a manual stage) to get hadoop to
build on java7 on OSX
before I commit it, I need to make sure it doesn't break on any other
platforms.
Can anyone with Linux, Windows, PowerPC
Have a look at https://wiki.apache.org/hadoop/HowToDevelopUnitTests , which
(now) explains the miniclusters -which you can use for testing against
in-VM HDFS and YARN cluster simulations. Passing unit tests doesn't mean
your code works in a real cluster, but failing them does mean that it won't
On 17 September 2013 23:05, Eli Collins e...@cloudera.com wrote:
(Looping in Arun since this impacts 2.x releases)
I updated the versions on HADOOP-8040 and sub-tasks to reflect where
the changes have landed. All of these changes (modulo HADOOP-9417)
were merged to branch-2.1 and are in the
On 18 September 2013 12:53, Alejandro Abdelnur t...@cloudera.com wrote:
On Wed, Sep 18, 2013 at 11:29 AM, Steve Loughran ste...@hortonworks.com
wrote:
I'm reluctant for this because, while delaying the release, we are going
to find problems all the way up the stack -which will require
Hi.
You need to know that we don't really consider Hadoop a good place to learn
about Java or distributed system programming: it is simply too complex.
It's like learning C by writing linux kernel device drivers -so we
explicitly warn against trying to do this
On 21 September 2013 09:19, Sandy Ryza sandy.r...@cloudera.com wrote:
I don't believe there is any reason scheduling decisions need to be coupled
with NodeManager heartbeats. It doesn't sidestep any race conditions
because a NodeManager could die immediately after heartbeating.
historically
+1
pointed my mvn repository (and plugin repository) at the staging site,
added 2.2.0 as the version, ran all my local tests bringing up hbase and
accumulo dynamically.
On 7 October 2013 08:00, Arun C Murthy a...@hortonworks.com wrote:
Folks,
I've created a release candidate (rc0) for
forgot to mention my vote is binding.
to repeat my previous.
+1 binding
I've also now tested it in secure mode
for hadoop 2.x, the log4j.properties file for each project should be in
src/test/resources
but there's a risk that tests in sub projects that include
hadoop-commons-test.jar (or other -test JARs, like hadoop-hdfs) may pick up
the version in the test jar ahead of any one that they have locally
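A minimal src/test/resources/log4j.properties of the usual log4j 1.x shape looks like this (levels and pattern are illustrative; each module would tune its own):

```properties
# minimal test-time log4j configuration
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```

The classpath-ordering risk described above is exactly why each project wants its own copy here: whichever log4j.properties is found first on the test classpath wins.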