On Wednesday 26 March 2008, Jeff Eastman wrote:
I personally got a lot of positive feedback and interest in Mahout, so
expect your inbox to explode in the next couple of days.
Sounds great. I was already happy that we received quite a bit of traffic after we
published that we would take part in the
Yes, are there any materials for those who could not come to the summit? I am
really curious about it.
Is the material posted on the Hadoop page?
Best Regards,
-C.A.
On Wed, Mar 26, 2008 at 8:48 AM, Isabel Drost [EMAIL PROTECTED]
wrote:
On Wednesday 26 March 2008, Jeff Eastman wrote:
I
Hello everybody,
I've been playing with Hadoop for a few days, and I'm only starting to
explore its beauty.
In an attempt to learn from the Grep example, I ended up wondering whether you
can actually find out, from within a map, which file you are currently
processing.
E.g. suppose I want to grep
From here:
http://wiki.apache.org/hadoop/TaskExecutionEnvironment
The following properties are localized for each task's JobConf:
Name                 Type     Description
mapred.job.id        String   The job id
mapred.task.id       String   The task id
mapred.task.is.map   boolean  Is this a map task
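As an illustration (not from the original thread): in Hadoop Streaming these per-task JobConf properties are exposed to the task process as environment variables, with dots replaced by underscores. The old-API property map.input.file holds the path of the split currently being processed, which answers the "which file am I grepping" question. A minimal Python sketch, assuming a Streaming-style mapper:

```python
import os

def current_input_file():
    # Hadoop Streaming exposes per-task JobConf properties as environment
    # variables with dots replaced by underscores, so map.input.file
    # (the path of the split being processed) appears as map_input_file.
    return os.environ.get("map_input_file", "unknown")

def grep_mapper(lines, pattern):
    # Emit "filename<TAB>line" for every input line containing the pattern,
    # so the grep output records which file each match came from.
    fname = current_input_file()
    for line in lines:
        if pattern in line:
            yield "%s\t%s" % (fname, line.rstrip("\n"))
```

In a real Streaming job this would read sys.stdin and print each emitted pair; the environment variable is set by the framework for each task.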
Owen,
Yes, I am using Hadoop 0.16.1.
No, the jira doesn't relate to my case.
The message "Hook previously registered" comes up only if I try to access
files on S3 from my Java application running on EC2. The same application runs
smoothly if the input file is copied to the image on EC2 and
I wonder if it is related to:
https://issues.apache.org/jira/browse/HADOOP-3027
I think it is - the same problem is fixed for me when using HADOOP-3027.
Tom
Hi,
I am developing a simple inverted index program with Hadoop. My map
function has the output:
word, doc
and the reducer has:
word, list(docs)
Now I want to use one more MapReduce job to remove stop and scrub words from
this output. Also, in the next stage I would like to have a short summary
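The two phases described above can be sketched in plain Python (illustrative only; the actual job would be written against Hadoop's Java API or Streaming, and the grouping here stands in for the framework's shuffle):

```python
from collections import defaultdict

def map_phase(docs):
    # docs: {doc_id: text}; emit (word, doc_id) pairs, as the map above does.
    for doc_id, text in docs.items():
        for word in text.split():
            yield (word.lower(), doc_id)

def reduce_phase(pairs):
    # Group by word, as the shuffle + reduce does: word -> sorted doc list.
    index = defaultdict(set)
    for word, doc_id in pairs:
        index[word].add(doc_id)
    return dict((w, sorted(d)) for w, d in index.items())
```

A second job can then consume these (word, list(docs)) records and drop stop words, as discussed below in the thread.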
On Wed, 26 Mar 2008, Amar Kamat wrote:
There are two types of slaves and two corresponding masters in Hadoop. The
two masters are the NameNode and the JobTracker, while the slaves are the
DataNodes and TaskTrackers respectively. Each slave, when started, has the
master's information hardcoded in the config that is passed to
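Concretely, in a 0.16-era cluster those two master addresses live in hadoop-site.xml on every slave; a minimal sketch with placeholder host names:

```xml
<configuration>
  <!-- Master for the DataNodes: the NameNode's filesystem URI -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
  <!-- Master for the TaskTrackers: the JobTracker's host:port -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:9001</value>
  </property>
</configuration>
```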
Hello,
Is there a hadoop recipes / snippets / cookbook site? I'm thinking
something like the Python Cookbook
(http://aspn.activestate.com/ASPN/Python/Cookbook/) or Django Snippets
(http://www.djangosnippets.org/), where people can post code and
commentary for common tasks.
Best,
Parand
On Mar 26, 2008, at 9:39 AM, Aayush Garg wrote:
HI,
I am developing a simple inverted index program with Hadoop. My map
function has the output:
word, doc
and the reducer has:
word, list(docs)
Now I want to use one more MapReduce job to remove stop and scrub words
from this output.
On Mar 26, 2008, at 10:08 AM, Parand Darugar wrote:
Hello,
Is there a hadoop recipes / snippets / cookbook site? I'm thinking
something like the Python Cookbook
(http://aspn.activestate.com/ASPN/Python/Cookbook/) or Django Snippets
(http://www.djangosnippets.org/), where people can post
Hadoop's Wiki sounds like a fine place for this (to start).
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Arun C Murthy [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, March 26, 2008 2:07:38 PM
Subject: Re: Hadoop
On Mar 26, 2008, at 11:05 AM, Arun C Murthy wrote:
On Mar 26, 2008, at 9:39 AM, Aayush Garg wrote:
Hi,
I am developing a simple inverted index program with Hadoop. My map
function has the output:
word, doc
and the reducer has:
word, list(docs)
Now I want to use one more MapReduce
I changed the configuration a little so that the MR jar file now runs on my
local hadoop cluster, but takes input files from S3.
I get the following output:
08/03/26 17:32:39 INFO mapred.FileInputFormat: Total input paths to process : 1
08/03/26 17:32:44 INFO mapred.JobClient: Running
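For context, reading input directly from S3 in that era is typically configured by adding the S3 credentials to hadoop-site.xml and passing an s3:// URI as the input path; a sketch with placeholder values:

```xml
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```

The job is then submitted with something like s3://your-bucket/input as the input directory (bucket name hypothetical).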
I was wondering
1) What happens if a data node is alive but its hard drive fails? Does it
throw an exception and die?
2) If it continues to run and continues to do block reporting, is there a
console showing datanodes with healthy hard drives and unhealthy hard
drives? I know the web server of the
It depends on the failure.
For some failure modes, the disk just becomes very slow.
On 3/26/08 4:39 PM, Cagdas Gerede [EMAIL PROTECTED] wrote:
I was wondering
1) What happens if a data node is alive but its hard drive fails? Does it
throw an exception and die?
2) If it continues to run
Dear Yahoo, Amazon, and Hadoop Community:
Thank you very much for a very well-done Hadoop Summit. It came as a
complete surprise that a FREE conference would include breakfast, lunch,
snacks, happy hour, and swag - very classy and very nice.
All the presentations and discussions
Slides and video go up next week. It just takes a few days to assemble.
We're glad everyone enjoyed it and was okay with a last-minute venue change.
Thanks also to Amazon.com and the NSF (not NFS as I typo'd on the printed
agenda!)
Jeremy
On 3/26/08, Cam Bazz [EMAIL PROTECTED] wrote:
Yes,
On Wed, 26 Mar 2008, Aayush Garg wrote:
Hi,
I am developing a simple inverted index program with Hadoop. My map
function has the output:
word, doc
and the reducer has:
word, list(docs)
Now I want to use one more MapReduce job to remove stop and scrub words from
Use the distributed cache as
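The distributed-cache suggestion works like this: the stop-word file is shipped with the job and localized into each task's working directory, so the second-pass mapper loads it once and filters the first job's output. A Python sketch of that logic (the file name is hypothetical, and plain functions stand in for the Hadoop task lifecycle):

```python
def load_stopwords(path):
    # The DistributedCache localizes the shipped file into the task's
    # working directory; here we just read a word-per-line file.
    with open(path) as f:
        return set(line.strip().lower() for line in f if line.strip())

def filter_index(entries, stopwords):
    # entries: iterable of (word, docs) records from the first job's
    # output; drop every record whose word is a stop word.
    for word, docs in entries:
        if word.lower() not in stopwords:
            yield (word, docs)
```

In a real job, load_stopwords would be called once in the mapper's setup (configure in the old API), not per record.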