Hello list,
Due to a need that arose lately, I had to start looking into
DBInputFormat. To get familiar with it I started to google,
but could not find much about how records get created from splits in the
case of DBInputFormat. I went through this
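(For anyone with the same question, here is a hedged sketch of the splitting
logic as I understand it from reading org.apache.hadoop.mapreduce.lib.db.DBInputFormat:
getSplits() runs a COUNT query and carves the row count into ranges, and each
range's DBRecordReader then re-queries the table with a LIMIT/OFFSET window.
The code below is a simplified standalone illustration of that arithmetic, not
the actual implementation.)

// Simplified illustration of DBInputFormat's split math; not the real class.
public class DBSplitSketch {
    public static void main(String[] args) {
        long totalRows = 1000;  // DBInputFormat obtains this via its COUNT query
        int numSplits = 4;      // the number of map tasks
        long chunk = totalRows / numSplits;
        for (int i = 0; i < numSplits; i++) {
            long start = i * chunk;
            long end = (i == numSplits - 1) ? totalRows : start + chunk;
            // Each split's record reader then issues roughly:
            //   SELECT <fields> FROM <table> ORDER BY <key>
            //   LIMIT <end - start> OFFSET <start>
            System.out.println("split " + i + ": rows [" + start + ", " + end + ")");
        }
    }
}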
The reason you described is correct, and I verified it in our environment;
thank you very much.
I tried to set keep.failed.task.files=true, but all jobs failed due to
MAPREDUCE-5047 (https://issues.apache.org/jira/browse/MAPREDUCE-5047), because
our Hadoop cluster has Kerberos turned on. :(
The only thing we
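(For reference, a minimal sketch of how that flag is usually set, assuming the
classic org.apache.hadoop.mapred API; the driver class here is hypothetical,
and this is equivalent to passing -Dkeep.failed.task.files=true on the
command line:)

import org.apache.hadoop.mapred.JobConf;

public class KeepFailedFilesExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Same effect as keep.failed.task.files=true in mapred-site.xml:
        // intermediate files of failed tasks are kept for post-mortem debugging.
        conf.setKeepFailedTaskFiles(true);
    }
}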
Hi all,
I was wondering if anyone has tried running Hadoop with Java Pathfinder to
do model checking/concurrency testing. I noticed it on the project
suggestions page on the wiki (
http://wiki.apache.org/hadoop/ProjectSuggestions) and was wondering if
anyone has given it a shot. If so, any
Hi All,
I have lost the Amazon instances of my Hadoop cluster, but I had all the data
on AWS EBS volumes, so I launched new instances and attached the volumes.
However, all of the datanode logs keep printing the lines below, which caused
a high IO rate. Due to the IO usage I am not able to run any jobs.
Can
They seem to be transferring blocks between one another. This is most
likely due to under-replication, and the NN UI will show numbers on the
work left to perform. The inter-DN transfer is controlled by the
balancing bandwidth though, so you can lower that if you want to,
to cripple it - but
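(A hedged example of the knob in question, assuming a 1.x-era cluster; the
value is bytes per second per datanode, and datanodes need a restart to pick
it up. Set in hdfs-site.xml:)

<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- 1048576 = 1 MB/s per datanode; the 1.x default -->
  <value>1048576</value>
</property>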
Hi Tim,
I don't think anyone's working on this. You can file a JIRA for the
topic and propose your plan to add Pathfinder to the project (not all
of us are aware of what JPF is, so some notes on why it would be
a very helpful addition to Apache Hadoop would also be welcome on the
JIRA).
On
Hi all,
I'm trying to run ant test on a clean Hadoop branch-1 checkout.
ant works fine, but when I run ant test I get a lot of failures:
Test org.apache.hadoop.cli.TestCLI FAILED
Test org.apache.hadoop.fs.TestFileUtil FAILED
Test org.apache.hadoop.fs.TestHarFileSystem FAILED
Test
Every file, directory, and block in HDFS is represented as an object in the
namenode’s memory; the namenode consumes on average about 150 bytes per
object (block).
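(As a rough worked example of what that rule of thumb implies - taking the
150-byte figure above, one object per block, and ignoring file and directory
objects for simplicity: a filesystem with 10 million blocks needs about
10,000,000 × 150 bytes ≈ 1.5 GB of namenode heap for the block objects alone.)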
On Wed, Apr 24, 2013 at 12:30 PM, Mahesh Balija
balijamahesh@gmail.com wrote:
Can you manually go into the directory configured
Hello,
The JT reporting that the job completed successfully means the job completed
successfully after resubmitting the failed tasks. For jobs which contain a
large number of map and reduce tasks it's quite common for some tasks to fail
for various reasons and be resubmitted. No need to worry about these failures.
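(A hedged illustration of the retry behaviour described above, assuming the
classic 1.x configuration keys: each task is retried up to a per-task attempt
limit before the job as a whole is failed, e.g. in mapred-site.xml:)

<property>
  <name>mapred.map.max.attempts</name>
  <!-- 1.x default: a map task may be attempted up to 4 times -->
  <value>4</value>
</property>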
Hello,
Your datanode daemon is not running; please check the datanode logs.
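(A quick hedged check, assuming a default 1.x layout where daemon logs live
under $HADOOP_HOME/logs:)

jps | grep -i datanode   # is a DataNode JVM running at all?
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log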
On Fri, Apr 26, 2013 at 11:53 PM, Mohsen B.Sarmadi
mohsen.bsarm...@gmail.com wrote:
Hi,
I am a newbie in Hadoop.
I am running Hadoop on Mac OS X 10, and I can't load any files into HDFS.
First of all, I am getting this
Hi Amit,
The common-dev list is better suited for Apache Hadoop
development-related questions, so I've moved it to that and bcc'd
user@. Each failed test also produces a log under the build directory
with the real reason for the failure - can you also inspect that to
determine the reason behind the
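(A hedged pointer, assuming the usual branch-1 ant layout: the per-test logs
land under build/test/ as TEST-<classname>.txt, so for example:)

cat build/test/TEST-org.apache.hadoop.cli.TestCLI.txt
find build/test -name 'TEST-*.txt'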
Hello,
Check your hadoop.tmp.dir and mapred.local.dir configuration and permissions.
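(A hedged example of what to look at, with a hypothetical path - the directory
named in core-site.xml must exist and be writable by the user running the
daemons:)

<property>
  <name>hadoop.tmp.dir</name>
  <!-- hypothetical path; adjust to your layout -->
  <value>/app/hadoop/tmp</value>
</property>

sudo chown -R hadoop:hadoop /app/hadoop/tmp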
On Sat, Apr 27, 2013 at 12:20 AM, Oren Bumgarner oren...@gmail.com wrote:
I have a small hadoop cluster running 1.0.4 and I'm trying to have it
set up so that I can run jobs remotely from a computer on the
Hello Kevin,
In the case:
JobClient client = new JobClient();
JobConf conf = new JobConf(WordCount.class);
The job client (which defaults to the local system) picks up configuration
information by referring to HADOOP_HOME on the local system.
If your job configuration is like this:
Configuration conf = new
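(Since the snippet above is cut off, here is a minimal self-contained sketch,
assuming the classic org.apache.hadoop.mapred API - the hostnames are
hypothetical, and without those two set() calls JobConf falls back to whatever
core-site.xml/mapred-site.xml it finds via HADOOP_HOME:)

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        // Loads *-site.xml from the classpath (normally HADOOP_HOME/conf).
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");
        // To target a remote cluster explicitly instead of relying on the
        // local HADOOP_HOME configuration (hostnames are hypothetical):
        conf.set("fs.default.name", "hdfs://namenode.example.com:9000");
        conf.set("mapred.job.tracker", "jobtracker.example.com:9001");
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        // Runs as an identity job unless mapper/reducer classes are set.
        JobClient.runJob(conf);
    }
}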