On Wed, 1 Jun 2011 12:48:51 -0700, Alejandro Abdelnur wrote:
> Do you have all JARs used by your classes in Needed.jar in the DC
> classpath as well?
needed.jar contains the class Needed, which my mappers need. If the class
Needed calls for another class AlsoNeeded in another jar, wouldn't I ge
John,
Do you have all JARs used by your classes in Needed.jar in the DC classpath
as well?
Are you propagating the delegation token?
Thxs.
Alejandro
On Wed, Jun 1, 2011 at 12:38 PM, John Armstrong wrote:
> On Tue, 31 May 2011 15:09:28 -0400, John Armstrong wrote:
> > On Tue, 31 May 2011 12
On Tue, 31 May 2011 15:09:28 -0400, John Armstrong wrote:
> On Tue, 31 May 2011 12:02:28 -0700, Alejandro Abdelnur wrote:
>> What is exactly that does not work?
In the hopes that more information can help, I've dug into the local
filesystems on each of my four nodes and retrieved the job.xml a
On Tue, 31 May 2011 12:02:28 -0700, Alejandro Abdelnur wrote:
> What is exactly that does not work?
Oozie launches a wrapper MapReduce job to run a Java job J1. Oozie's
/lib/ directory is provided to the classpath of J1 as expected. This part
works.
The Java job J1 configures and launches a Ma
What is exactly that does not work?
Oozie uses DistributedCache as the only mechanism to set classpaths for jobs,
and it works fine.
Thanks.
Alejandro
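[Editor's note: a minimal sketch, not part of the thread, of the DistributedCache classpath mechanism Alejandro describes. The HDFS path and class name are hypothetical, and the code assumes the Hadoop 0.20-era API that was current in 2011; it needs the Hadoop JARs on the compile classpath and a running cluster to do anything real.]

```java
// Sketch: roughly what a launcher does when it places workflow lib/ JARs
// on a job's task classpath via the distributed cache.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

public class CacheClasspathSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Adds a JAR (already on HDFS; path is hypothetical) both to the
        // distributed cache and to the task classpath, so map/reduce tasks
        // can load classes such as Needed from it.
        DistributedCache.addFileToClassPath(
            new Path("/user/hadoop/workflows/myapp/lib/needed.jar"), conf);
        // The job submitted with this Configuration will see the JAR.
    }
}
```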
On Mon, May 30, 2011 at 10:22 AM, John Armstrong wrote:
> On Mon, 30 May 2011 09:43:14 -0700, Alejandro Abdelnur wrote:
> > If you still want t
On Mon, 30 May 2011 09:43:14 -0700, Alejandro Abdelnur wrote:
> If you still want to start your MR job from your Java action, then your
> Java action should do all the setup the MapReduceMain class does before
> starting the MR job (this will ensure delegation tokens and distributed
> cache is a
John,
Now I get what you are trying to do.
My recommendation would be:
* Use a Java action to do all the stuff prior to starting your MR job
* Use a mapreduce action to start your MR job
* If you need to propagate properties from the Java action to the MR action
you can use the <capture-output/> flag.
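[Editor's note: a minimal sketch, not part of the thread, of the workflow shape Alejandro recommends: a java action for setup followed by a mapreduce action. Action names, the main class, and the propagated property are hypothetical, and it assumes the flag he refers to is Oozie's <capture-output/> element, which lets a Java action emit properties for later actions to read via wf:actionData().]

```xml
<workflow-app name="setup-then-mr" xmlns="uri:oozie:workflow:0.1">
    <start to="setup"/>
    <action name="setup">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <main-class>com.example.Setup</main-class>
            <!-- lets this action emit properties for later actions -->
            <capture-output/>
        </java>
        <ok to="mr"/>
        <error to="fail"/>
    </action>
    <action name="mr">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>example.prop</name>
                    <!-- read a property captured by the setup action -->
                    <value>${wf:actionData('setup')['example.prop']}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail"><message>Workflow failed</message></kill>
    <end name="end"/>
</workflow-app>
```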
If you st
On Fri, 27 May 2011 15:47:23 -0700, Alejandro Abdelnur wrote:
> John,
>
> If you are using Oozie, dropping all the JARs your MR jobs need in the
> Oozie WF lib/ directory should suffice. Oozie will make sure all those
> JARs are in the distributed cache.
That doesn't seem to work. I have this
John,
If you are using Oozie, dropping all the JARs your MR jobs need in the
Oozie WF lib/ directory should suffice. Oozie will make sure all those JARs
are in the distributed cache.
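[Editor's note: a sketch, not part of the thread, of the workflow application layout this describes on HDFS. The application path and JAR names are hypothetical; the commands require a configured Hadoop client.]

```shell
# Put the workflow definition and its dependencies where Oozie expects them:
# workflow.xml at the app root, every JAR the jobs need under lib/.
hadoop fs -mkdir /user/hadoop/workflows/myapp/lib
hadoop fs -put workflow.xml    /user/hadoop/workflows/myapp/
hadoop fs -put needed.jar      /user/hadoop/workflows/myapp/lib/
# Transitive dependencies of your classes must go in lib/ as well.
hadoop fs -put also-needed.jar /user/hadoop/workflows/myapp/lib/
```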
Alejandro
On Thu, May 26, 2011 at 7:45 AM, John Armstrong wrote:
> Hi, everybody.
>
> I'm running into some dif
Sorry, I forgot that; I am just moving to a new thread.
Thanks
On Thu, 26 May 2011 23:17:43 +0530, vishnu krishnan wrote:
> thanks,
>
>
> If I am not using map/reduce here, and I just directly send the data to
> the DB, what will be the problems?
Look, I hate to be That Guy, especially on my first day on the list, but
would you mind moving to your
Thanks.
If I am not using map/reduce here, and I just directly send the data to the
DB, what will be the problems?
If it is just a GB then you probably don't need Hadoop, unless there is some
serious processing involved that hasn't been explained or you already have the
data on HDFS, or you happen to have a Hadoop cluster that you have access to
and the amount of data is going to grow in size. Then it could
Thank you.
So I just want to take a GB of data, give it to map/reduce, and then store
it into the database?
--
Vishnu R Krishnan
Software Engineer
Create @ Amrita
Amritapuri
Vishnu,
You have to have a file system that is accessible from all nodes involved to
run Hadoop Map Reduce. This could be NFS if it is a small number of nodes or
even the local file system if you are just running one node. But, with that
said, Hadoop is designed to process big data: GB, TB, an
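[Editor's note: a hedged illustration, not part of the thread, of the single-node case mentioned above: pointing Hadoop at the local file system instead of HDFS. This uses the 0.20-era property name `fs.default.name` in core-site.xml.]

```xml
<!-- core-site.xml: run map/reduce against the local file system
     (single node, no HDFS). -->
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>file:///</value>
    </property>
</configuration>
```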
I am new to map/reduce. One thing I have to know: can I use a map/reduce
program without any file system?