Is it possible to get pretty URLs when doing HDFS file browsing via web
browser?
I am looking to test Hadoop 0.23 or CDH4 beta on my local VM. I want to run
the sample example code on the new architecture and play around with the
containers and resource managers.
Are there any prerequisites on the default memory/CPU/core settings I need to
keep in mind before setting up the VM?
Michael,
Out of the box, I am treating this as a metadata problem, as we also faced
the same kind of issue when connecting Tableau with Hive, and the problem was
traced to the metastore setup. If your metastore is in the default Apache DB,
i.e. Derby, then the JDBC connection doesn't work. As a workaround
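One common fix for this class of problem (a sketch, not taken from the message: the property names are the standard Hive metastore settings, but the host, database, and credential values below are placeholders) is to move the metastore off embedded Derby onto a standalone database such as MySQL in hive-site.xml:

```xml
<!-- hive-site.xml: point the metastore at MySQL instead of embedded Derby.
     Host, db name, user, and password are illustrative placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
```

Embedded Derby only allows one connection at a time, which is why concurrent JDBC clients such as a BI tool tend to fail against it.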
Thanks Bobby, I'm looking for something like this. Now the question is what
the best strategy is for hot/hot or hot/warm.
I need to consider CPU and network bandwidth, and also need to decide at
which layer this replication should start.
Regards,
Abhishek
On Mon, Apr 16, 2012 at 7:08
I have a streaming job that uses a lot of memory. The capacity scheduler
lets me set the mapred.job.map.memory.mb property to something high
like 2560. The job then takes 5 slots (512 MB each) for every map
task. I have noticed that it appears to actually start many Java
processes that look like
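The slot count described above is consistent with simple ceiling division of the requested memory by the slot size. A quick sketch of that arithmetic (the rounding-up rule is an assumption about how the scheduler accounts for memory, not something stated in the message):

```python
import math

def slots_needed(job_memory_mb, slot_size_mb):
    """Slots a task occupies: requested memory divided by the
    per-slot size, rounded up (assumed accounting rule)."""
    return math.ceil(job_memory_mb / slot_size_mb)

# A 2560 MB request on 512 MB slots -> 5 slots per map task,
# matching the behaviour reported above.
print(slots_needed(2560, 512))  # 5
```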
mapred.fairscheduler.loadmanager - An extension point that lets you
specify a class that determines how many maps and reduces can run on a
given TaskTracker. This class should implement the LoadManager
interface. By default the task caps in the Hadoop config file are
used, but this option could be
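To use this extension point, the fair scheduler is pointed at a custom class via the property named above (a sketch: the property name is from the quoted documentation, but the class name here is a hypothetical example, not a real implementation):

```xml
<!-- mapred-site.xml: plug a custom LoadManager into the fair scheduler.
     com.example.MemoryAwareLoadManager is a hypothetical class name. -->
<property>
  <name>mapred.fairscheduler.loadmanager</name>
  <value>com.example.MemoryAwareLoadManager</value>
</property>
```

The named class would need to implement the LoadManager interface mentioned above and be present on the JobTracker's classpath.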