Is it possible to use mmap alone, instead of mlock, for given paths in
data-nodes?
I am planning to use mmap for almost all of the files I write to
data-nodes, but obviously cannot pin down the blocks using mlock. When
swapping is very rare, wouldn't such an option be very helpful?
Any help is much appreciated.
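As an aside, the same mmap-without-pinning idea can be sketched in plain Java: `FileChannel.map` gives you a memory mapping, and `MappedByteBuffer.load()` is only a paging hint, not an mlock-style guarantee, which is exactly the trade-off the question describes. A minimal sketch (class and file names are invented for this example):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    // Map a file read-only and copy its contents out through the mapping.
    public static String readMapped(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            // load() asks the OS to page the mapping into memory; unlike
            // mlock it is only a hint and the pages may still be swapped out.
            buf.load();
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("mmap-demo", ".txt");
        Files.write(tmp, "hello mmap".getBytes(StandardCharsets.UTF_8));
        System.out.println(readMapped(tmp)); // prints: hello mmap
        Files.delete(tmp);
    }
}
```

If swapping really is rare on the box, reads through the mapping will mostly hit resident pages; the difference from mlock only shows up under memory pressure.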
After multiple messages, it says that the job has been completed. I really
wonder if the job has been truly completed or failed.
14/03/20 03:49:04 INFO mapred.JobClient: map 50% reduce 0%
14/03/20 03:49:20 INFO mapred.JobClient: Job complete: job_201403191916_0001
14/03/20 03:49:20 INFO
At the end it says clearly that the job has failed.
On Thu, Mar 20, 2014 at 12:49 PM, Mahmood Naderan nt_mahm...@yahoo.com wrote:
After multiple messages, it says that the job has been completed. I really
wonder if the job has been truly completed or failed.
14/03/20 03:49:04 INFO
I've written two blog posts on how to get directory context in a Hadoop mapper:
http://www.idryman.org/blog/2014/01/26/capture-directory-context-in-hadoop-mapper/
http://www.idryman.org/blog/2014/01/27/capture-path-info-in-hadoop-inputformat-class/
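For quick reference, the core idea those posts cover can be sketched like this (a sketch, assuming a FileInputFormat-based job so the input split is a FileSplit; the class name is invented for this example):

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class PathAwareMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // With FileInputFormat the split is a FileSplit, which carries
        // the path of the file the current record came from.
        FileSplit split = (FileSplit) context.getInputSplit();
        Path file = split.getPath();
        Path dir = file.getParent(); // the directory context
        context.write(new Text(dir.toString()), value);
    }
}
```

See the posts above for the custom-InputFormat variant, which makes the path available in a cleaner way.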
Cheers,
Felix
On Mar 19, 2014, at 10:50 PM,
Hello,
For the intermediate files the mapper generates (which will get sent to the
reducers), when are the checksums generated? Are they generated when a
request comes in for transfer, or are they generated when the file is
initially created? Also, are checksums checked/verified if the file is
Hello everyone,
The HiveThrift Service was started successfully.
netstat -nl | grep 1
tcp        0      0 0.0.0.0:1        0.0.0.0:*        LISTEN
I am able to read tables from Hive through Tableau. When executing queries
through Tableau I am getting the
I am struggling with this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
On Thursday, March 20, 2014 3:09 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hello everyone,
The HiveThrift Service was started successfully.
netstat -nl | grep 1
tcp 0
Hi Raj,
There are map-reduce job logs generated when the MapRedTask fails; those
might give some clue.
Thanks,
Szehon
On Thu, Mar 20, 2014 at 12:29 PM, Raj Hadoop hadoop...@yahoo.com wrote:
I am struggling with this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
Hi Szehon,
It is not showing on the http://xyzserver:50030/jobtracker.jsp.
I checked this log. and this shows as -
/tmp/root/hive.log
exec.ExecDriver (ExecDriver.java:addInputPaths(853)) - Processing
alias table_emp
exec.ExecDriver (ExecDriver.java:addInputPaths(871)) - Adding input
file
The last line seems to indicate a PrivilegedActionException; maybe you can
look at the rest of the stack trace, if there is any, to see why.
On Thu, Mar 20, 2014 at 1:59 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hi Szehon,
It is not showing on the http://xyzserver:50030/jobtracker.jsp.
*I checked
Confirmed that ToolRunner is NOT thread-safe:
*Original code (which runs into problems):*
public static int run(Configuration conf, Tool tool, String[] args)
    throws Exception {
  if (conf == null) {
    conf = new Configuration();
  }
  GenericOptionsParser parser = new
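Since the shared Configuration and Tool are what get mutated inside ToolRunner.run, one workaround is simply to give every thread its own instances, so nothing is shared across calls. A sketch (MyTool is a hypothetical Tool implementation, not from the original code):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class PerThreadRunner implements Runnable {
    private final String[] args;

    PerThreadRunner(String[] args) {
        this.args = args;
    }

    @Override
    public void run() {
        try {
            // Fresh Configuration and Tool per thread: ToolRunner's
            // internal mutation then has nothing shared to race on.
            int rc = ToolRunner.run(new Configuration(), new MyTool(), args);
            System.out.println("exit code: " + rc);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

This sidesteps the thread-safety problem rather than fixing ToolRunner itself.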
Just reviewed the code again: you are not really using map-reduce. You are
reading all the files in one map process, which is not how a normal
map-reduce job works.
Regards,
*Stanley Shi,*
On Thu, Mar 20, 2014 at 1:50 PM, Ranjini Rathinam ranjinibe...@gmail.comwrote:
Hi,
If we give the below code,
Change your mapper to be something like this:
public static class TokenizerMapper extends
    Mapper<Object, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();
  public void map(Object key, Text value, Context
Hi, experts:
I want to write an application which uses YARN RPC. Are there any examples
or useful links about it?
Thanks all.
Will