hbase-0.1.2 resolves 27 issues, including critical fixes for 'missing'
edits and unreliable onlining/offlining of tables. We recommend that all
users upgrade to this latest version.
To download, please go to http://hadoop.apache.org/hbase/releases.html.
Thanks to all who contributed to this release.
Hi,
I have been working on a problem where I have to process a particular
dataset and return three varieties of data; then I have to process each of
them and store each variety of data in a separate file.
In order to solve the above problem, I have proposed two solutions. One I
called
Hi,
I have a working 0.15.3 install and am trying to upgrade to 0.16.4. I
want to start clean with an empty filesystem, so I just reformatted
the filesystem instead of using the upgrade option. When I run
start-all.sh, I get a null pointer exception originating from the
Hi,
I've tested this new option -jobconf
stream.non.zero.exit.status.is.failure=true. It seems to work, but it is
still not good enough for me. When the mapper/reducer program has read all
its input data successfully and fails after that, streaming still finishes
successfully, so there is no chance to know about
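For reference, the option is passed on the streaming command line roughly
like this (the jar path, directories, and mapper name are placeholders):

    bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
        -jobconf stream.non.zero.exit.status.is.failure=true \
        -input myInputDir -output myOutputDir \
        -mapper mymapper.sh -reducer NONE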
Does the syslog output from a should-have-failed task contain
something like this?
java.lang.RuntimeException: PipeMapRed.waitOutputThreads():
subprocess failed with code 1
(In particular, I'm curious if it mentions the RuntimeException.)
Tasks that consume all their input and then exit
Hi all,
I've set up a standalone hadoop server, and when I run
bin/hadoop dfs namenode -format
I get the following message (repeating 10 times):
ipc.Client: Retrying connect to server: localhost/127.0.0.1:5
My hadoop-site.xml file is as follows:
<?xml version="1.0"?>
<?xml-stylesheet
I suggest mapping localhost to the actual IP in your /etc/hosts file and
running it again.
Akshar
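A hypothetical /etc/hosts entry along those lines (the address and hostname
are made up):

    192.168.1.10    localhost    myhost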
On Wed, May 14, 2008 at 9:13 AM, Shimon [EMAIL PROTECTED] wrote:
Hi all,
I've set up a standalone hadoop server, and when I run
bin/hadoop dfs namenode -format
I get the following message (
Agenda for the Hadoop user group meeting on Wednesday 5/21 6:00-7:30 pm
at Yahoo! Mission College:
- Hadoop 0.17 release - Sameer Paranjpye
- Mahout update - Jeff Eastman
- And plenty of opportunity for networking, discussions
and beer...
Look forward to
Hi,
I saw the note at the end of the message below: Note that
MultipleOutputFormat is available in Hadoop-0.17
Is 0.17 out yet? Can we output multiple files another way?
Cheers,
Arv
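Once 0.17 is out, the usual pattern with that class would be to subclass
MultipleTextOutputFormat (old mapred API) so each key is routed to its own
output file. A minimal sketch, with illustrative class and path naming:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

    // Routes each record to an output file derived from its key, so a
    // single job can write several files.
    public class KeyBasedOutput extends MultipleTextOutputFormat<Text, Text> {
      @Override
      protected String generateFileNameForKeyValue(Text key, Text value,
                                                   String name) {
        // "name" is the default leaf name, e.g. part-00000
        return key.toString() + "/" + name;
      }
    }

It would be wired in with conf.setOutputFormat(KeyBasedOutput.class) on the
JobConf.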
-Original Message-
From: Amar Kamat [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 08, 2008 4:56 AM
Man, Yahoo needs to get their act together with their video service (the
videos are still down)! Is there any way someone can upload these videos to
YouTube and provide a link?
Thanks,
Cole
On Wed, Apr 23, 2008 at 11:36 AM, Chris Mattmann
[EMAIL PROTECTED] wrote:
Thanks, Jeremy. Appreciate
I'm having some problems creating a new file on HDFS. I am attempting
to do this after my MapReduce job has finished and I am trying to
combine all part-00* files into a single file programmatically. It's
throwing a LeaseExpiredException saying the file I just created doesn't
exist. Any idea
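For the merging itself, one candidate is FileUtil.copyMerge. A minimal
sketch with hypothetical paths (this just shows the merge call; it does not
explain the lease error):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class MergeParts {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Concatenates every file under the job output directory into
        // a single target file; false = keep the source files.
        FileUtil.copyMerge(fs, new Path("/user/me/job-out"),
                           fs, new Path("/user/me/combined.txt"),
                           false, conf, null);
      }
    }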
Was there ever any resolution as to whether there could be some type of webcam
conferencing or at least a video recording of the meeting for people out of
town?
Thanks,
Cole
On Wed, May 14, 2008 at 3:22 PM, Ajay Anand [EMAIL PROTECTED] wrote:
To clarify, this meeting is intended not just for hadoop
I'm trying to bring up a cluster on EC2 using
(http://wiki.apache.org/hadoop/AmazonEC2) and it seems that 0.17 is the
version to use because of the DNS improvements, etc. Unfortunately, I
cannot find a public AMI with this build. Is there one that I'm not
finding or do I need to create one?
Jeff
Hi Jeff,
There is no public 0.17 AMI yet - we need 0.17 to be released first.
So in the meantime you'll have to build your own.
Tom
On Wed, May 14, 2008 at 8:36 PM, Jeff Eastman
[EMAIL PROTECTED] wrote:
I'm trying to bring up a cluster on EC2 using
(http://wiki.apache.org/hadoop/AmazonEC2)
Hadoop 0.17 hasn't been released yet. I (or Mukund) am hoping to
call a vote this afternoon or tomorrow.
Nige
On May 14, 2008, at 12:36 PM, Jeff Eastman wrote:
I'm trying to bring up a cluster on EC2 using
(http://wiki.apache.org/hadoop/AmazonEC2) and it seems that 0.17 is the
version to
They haven't been uploaded yet; we are begging and hoping that whoever has
them will post them somewhere. I second Veoh. Hadoop rocks.
Cole
On Wed, May 14, 2008 at 4:11 PM, Otis Gospodnetic
[EMAIL PROTECTED] wrote:
I tried finding those Hadoop videos on Veoh, but got 0 hits:
I am going to be arriving at SJC at 3PM.
Anybody want to get started early? I am sure that there is plenty to talk
about.
I hear that there is a Bennigan's just outside the office, but anywhere with
good beer and paper napkins should suffice.
On 5/14/08 12:22 PM, Ajay Anand [EMAIL PROTECTED]
What is the implication of manually forcing the name node to leave
safemode? What properties does HDFS lose by doing that?
One gain is that the file system becomes available for writes immediately.
Cagdas
--
Best Regards, Cagdas Evren Gerede
Home Page: http://cagdasgerede.info
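For reference, the manual override in question is a dfsadmin command:

    bin/hadoop dfsadmin -safemode leave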
That really depends on why the name node is in safemode.
If the reason is system startup in which only a few datanodes have reported
in, then the only problem is that some files may not be fully present.
If the reason is some sort of system corruption, it could be a really big
mistake to force
Nobody has any ideas about this?
-Bryan
On May 13, 2008, at 11:27 AM, Bryan Duxbury wrote:
I'm trying to create a java application that writes to HDFS. I have
it set up such that hadoop-0.16.3 is on my machine, and the env
variables HADOOP_HOME and HADOOP_CONF_DIR point to the correct
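For what it's worth, a minimal standalone writer looks roughly like this
(the target path is hypothetical; it assumes the hadoop-site.xml from
HADOOP_CONF_DIR is on the classpath so Configuration picks it up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriter {
      public static void main(String[] args) throws Exception {
        // Reads fs.default.name from hadoop-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/tmp/hello.txt"));
        out.writeBytes("hello from a standalone client\n");
        out.close();
      }
    }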
Thanks Runping.
But if that is the case, why did it take less time when I ran it on a
cluster of size 1? It should have been the same regardless of whether I am
running on a cluster of size 1 or more, right?
Thanks
Runping Qi wrote:
Your diagnosis sounds reasonable.
Since the mappers of
And the conf dir (/Users/bryanduxbury/hadoop-0.16.3/conf): I hope it is
similar to the one you are using for your hadoop installation.
I'm not sure I understand this. It isn't similar; it's the same as my
hadoop installation. I'm only operating on localhost at the moment.
I'm just
My experience is to call Thread.sleep(100) after every N (say 1000) dfs
writes.
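A minimal sketch of that throttling idea (N, the sleep interval, and the
wrapper class are all illustrative):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;

    public class ThrottledWriter {
      private static final int N = 1000;  // writes between pauses
      private long count = 0;

      // Write one record, backing off briefly after every N writes to
      // give the DFS client time to catch up.
      public void write(FSDataOutputStream out, byte[] record)
          throws IOException, InterruptedException {
        out.write(record);
        if (++count % N == 0) {
          Thread.sleep(100);
        }
      }
    }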
-Original Message-
From: Xavier Stevens [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 14, 2008 10:47 AM
To: core-user@hadoop.apache.org
Subject: FileSystem.create
I'm having some problems