of splits (and, in effect, the number of
mappers) depending on your input file and input format (in case you are using
FileInputFormat or deriving from it):
mapreduce.input.fileinputformat.split.maxsize - maximum split size in bytes
mapreduce.input.fileinputformat.split.minsize - minimum split size in bytes
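The clamping those two properties perform can be sketched in plain Java. This mirrors the formula FileInputFormat's computeSplitSize uses, max(minSize, min(maxSize, blockSize)); the class and constant names here are illustrative, not Hadoop API:

```java
public class SplitSizeSketch {
    // Mirrors FileInputFormat.computeSplitSize(blockSize, minSize, maxSize):
    // the split size is the block size, clamped between min and max.
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024; // a 64 MB HDFS block, for illustration
        // With the defaults (minSize = 1, maxSize = Long.MAX_VALUE),
        // splits simply follow the block size.
        System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));
        // Lowering maxsize to 32 MB would double the number of mappers
        // for the same file; raising minsize above the block size merges splits.
        System.out.println(computeSplitSize(blockSize, 1L, 32L * 1024 * 1024));
    }
}
```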
-Adi
On Thu, Aug
Caused by: java.io.IOException: error=12, Not enough space
You either do not have enough memory allocated to your Hadoop daemons (via
HADOOP_HEAPSIZE) or not enough swap space.
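A minimal hadoop-env.sh sketch for that knob; the 512 MB value below is illustrative, not a recommendation:

```shell
# hadoop-env.sh: error=12 ("Not enough space") typically means fork() failed
# because heap for the daemons plus spawned task JVMs exceeded
# physical memory + swap. Lowering the daemon heap (default 1000 MB)
# is one way to leave headroom:
export HADOOP_HEAPSIZE=512   # in MB; value here is illustrative

# Note: per-task JVM heap is configured separately, via
# mapred.child.java.opts in mapred-site.xml.
```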
-Adi
On Sun, Aug 7, 2011 at 5:48 AM, Xiaobo Gu guxiaobo1...@gmail.com wrote:
Hi,
I am trying to write a map-reduce job
Please suggest what would be the best way to profile the NameNode.
Any specific tools?
http://developer.yahoo.com/hadoop/tutorial/module7.html#monitoring
Ganglia and Nagios (or any other system monitoring you have in place).
-Adi
On Sun, Aug 7, 2011 at 1:40 AM, jagaran das jagaran_...@yahoo.co.in
dfs.heartbeat.interval (default: 3) determines the datanode heartbeat interval in seconds.
Increasing the heartbeat interval could be an option.
On the other hand, a delay in noticing that a datanode is down might cause data loss.
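For reference, the interval is set in hdfs-site.xml; a sketch, where the 3-second value shown is just the default:

```xml
<property>
  <name>dfs.heartbeat.interval</name>
  <!-- seconds between datanode heartbeats; 3 is the default -->
  <value>3</value>
</property>
```

Note that dead-node detection also depends on the NameNode's heartbeat recheck interval (dfs.namenode.heartbeat.recheck-interval in later versions); a node is declared dead only after roughly 2 x recheck-interval + 10 x heartbeat-interval, about 10.5 minutes with defaults, so raising the heartbeat mostly delays detection.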
-Adi
On Sat, Aug 6, 2011 at 5:23 PM, Joe Stein charmal...@allthingshadoop.com wrote:
Does anyone have
the issue uses OpenJDK, and
this one does not see the issue as it is able to reuse JVMs.
-Adi
Also... I believe there were noted issues with the .17 JDK. I will look for
a link and post it if I can find one.
Otherwise, the behaviour is one I have seen before: Hadoop detaches from the
JVM and stops
org.apache.hadoop.ipc.Server: IPC Server
handler 37 on 33465 caught: java.nio.channels.ClosedChannelException
2011-05-12 16:02:09,095 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 100 on 33465 caught: java.nio.channels.ClosedChannelException
Thanks.
-Adi
Server VM (build 14.0-b16, mixed mode)
Even if I can get it to reuse JVM it will be grrreat.
-Adi
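For the JVM-reuse angle (assuming the classic pre-YARN MapReduce framework this thread is about), the relevant knob is mapred.job.reuse.jvm.num.tasks, settable per job or in mapred-site.xml; a sketch:

```xml
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <!-- number of tasks one task JVM may run; 1 = no reuse (default),
       -1 = unlimited reuse within a single job -->
  <value>-1</value>
</property>
```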
-Joey
On Thu, May 12, 2011 at 1:39 PM, Adi adi.pan...@gmail.com wrote:
For one long-running job we are noticing that the mapper JVMs do not exit
even after the mapper is done. Any
are in fact running. And those error messages do
not always show up, just sometimes. But the processes never get cleaned up.
-Adi
Hadoop or Linux config suggestions are welcome.
Thanks.
-Adi
the mappers without any memory/swap issues.
-Adi
On Wed, May 11, 2011 at 1:40 PM, Michel Segel michael_se...@hotmail.com wrote:
You have to do the math...
If you have 2 GB per mapper and run 10 mappers per node, that means 20 GB
of memory.
Then you have the TaskTracker and DataNode running, which also take memory,
to around 35-36GB.
-Adi
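The per-node arithmetic being debated in this thread can be written out explicitly. A plain-Java sketch; the 0.5 GB per-slot heap is an assumption back-solved from the stated totals (24 GB across 48 map slots):

```java
public class SlotMemoryMath {
    // Configured heap per node, in GB, from slot counts and per-slot heaps.
    static double configuredHeapGb(int mapSlots, double mapHeapGb,
                                   int reduceSlots, double reduceHeapGb,
                                   double daemonsGb) {
        return mapSlots * mapHeapGb + reduceSlots * reduceHeapGb + daemonsGb;
    }

    public static void main(String[] args) {
        // 48 map slots at 0.5 GB, 12 reduce slots at 0.5 GB,
        // plus 1 GB each for DataNode, TaskTracker, and JobTracker.
        double total = configuredHeapGb(48, 0.5, 12, 0.5, 3.0);
        System.out.println(total + " GB of configured heap");
        // Heap is a floor, not a ceiling: per-JVM overhead (thread stacks,
        // permgen, native buffers) is what pushes ~33 GB of configured heap
        // toward the observed 35-36 GB of resident memory.
    }
}
```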
On Wed, May 11, 2011 at 2:16 PM, Ted Dunning tdunn...@maprtech.com wrote:
How is it that 36 processes are not expected if you have configured 48 + 12
= 60 slots available on the machine?
On Wed, May 11, 2011 at 11:11 AM, Adi adi.pan...@gmail.com wrote:
Thanks for your comments, Allen. I have added mine inline.
On May 11, 2011, at 11:11 AM, Adi wrote:
By our calculations Hadoop should not exceed 70% of memory.
Allocated per node: 48 map slots (24 GB), 12 reduce slots (6 GB), 1 GB
each for DataNode/TaskTracker, and one JobTracker, totalling
a handle to the current Mapper context and write a
message via the Mapper context from the RecordReader?
Any other suggestions on handling bad input data when implementing a custom
InputFormat?
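Rather than reaching back into the Mapper, the usual pattern is for the RecordReader itself to count and skip malformed records; in real Hadoop the count would go to a Counter via the TaskAttemptContext passed to RecordReader.initialize() (exact access varies by version). A minimal plain-Java sketch of the skip-and-count loop, with hypothetical names, not the actual Hadoop API:

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for a RecordReader that tolerates bad input records.
public class SkippingReader {
    private final Iterator<String> lines;
    private long badRecords = 0;  // would be a Hadoop Counter in practice
    private int current = -1;

    public SkippingReader(List<String> lines) { this.lines = lines.iterator(); }

    // Analogue of nextKeyValue(): advance, silently skipping records
    // that fail to parse while keeping a tally of how many were dropped.
    public boolean nextKeyValue() {
        while (lines.hasNext()) {
            String raw = lines.next();
            try {
                current = Integer.parseInt(raw.trim()); // the "parse" step
                return true;
            } catch (NumberFormatException e) {
                badRecords++;                           // count and keep going
            }
        }
        return false;
    }

    public int getCurrentValue() { return current; }
    public long getBadRecords() { return badRecords; }
}
```

The job can then decide after the fact (from the counter) whether the bad-record rate was acceptable, instead of failing the task mid-stream.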
Thanks.
-Adi