I got this selector exception, and all my threads are blocked at the
FileSystem.create(...) level. Has anyone seen this issue before? I'm
running 0.18.3.
java.nio.channels.ClosedSelectorException
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:66)
at ...
... to the block size to be split.
Table 6-1 in chapter 6 of my book gives a breakdown of all of the
configuration parameters that affect split size in Hadoop 0.19. Alphas
are available :)
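For reference, a couple of the commonly cited split-size knobs from that
era (property names as in Hadoop 0.18/0.19; the values below are only
illustrative, in hadoop-site.xml):

    <property>
      <name>mapred.min.split.size</name>
      <value>1048576</value> <!-- lower bound on a split, in bytes -->
    </property>
    <property>
      <name>dfs.block.size</name>
      <value>67108864</value> <!-- HDFS block size; the usual upper bound on a split -->
    </property>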
On Tue, Apr 21, 2009 at 5:07 PM, javateck javateck javat...@gmail.com wrote:
I set my mapred.tasktracker.map.tasks.maximum to 10, but when I run a
job, it's only using 2 out of 10. Any way to know why it's only using 2?
thanks
On Tue, Apr 21, 2009 at 1:56 PM, Koji Noguchi knogu...@yahoo-inc.com wrote:
It's probably a silly question, but do you have more than 2 mappers in
your second job?
If yes, I have no idea what's happening.
Koji
-----Original Message-----
From: javateck javateck [mailto:javat...]
... tasks should be independent.
On Tue, Apr 21, 2009 at 2:23 PM, Miles Osborne mi...@inf.ed.ac.uk wrote:
Is your input data compressed? If so, then you will get one mapper per file.
Miles
2009/4/21 javateck javateck javat...@gmail.com:
Hi Koji,
Thanks for helping.
I don't know why Hadoop is only using 2 map tasks, because I set 10 for
mapred.tasktracker.map.tasks.maximum, and I checked that the job's conf is
also 10, but Hadoop is actually using just 2 map tasks.
On Tue, Apr 21, 2009 at 1:20 PM, javateck javateck javat...@gmail.com wrote:
I set my mapred.tasktracker.map.tasks.maximum to 10, but when I run a
task ...
Does anyone know why setting *mapred.tasktracker.map.tasks.maximum* is not
working? I set it to 10, but I still see only 2 map tasks running when
running one job.
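A note on what this property actually controls, consistent with the
replies above: mapred.tasktracker.map.tasks.maximum caps how many map
tasks run concurrently on one TaskTracker node, while the total number of
map tasks for a job comes from the InputFormat's splits, so two
unsplittable (e.g. compressed) input files yield two map tasks no matter
how high the cap is. A minimal sketch with the old 0.18-era API (MyJob is
a hypothetical driver class):

    import org.apache.hadoop.mapred.JobConf;

    public class MapCountSketch {
      public static void main(String[] args) {
        JobConf conf = new JobConf(MapCountSketch.class);
        // Only a hint to the InputFormat; the split count decides the
        // real number of map tasks. The per-node cap is a cluster
        // (TaskTracker) setting, not something the job can raise.
        conf.setNumMapTasks(10);
      }
    }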
I'm running some MapReduce jobs, and some of them hit OutOfMemory errors.
I also find that the raw data itself got corrupted and became zero bytes,
which is very strange to me. I haven't looked into it in much detail, but
just want to check quickly with someone who has seen this before. I'm
running 0.18.3.
thanks
Hi:
I'm trying to use FSDataOutputStream create(Path f, boolean overwrite).
I'm calling create(new Path(somePath), false), but creation still fails
with an IOException even when the file does not exist. Can someone explain
this behavior?
thanks,
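For anyone reproducing this, a minimal sketch of the call in question (the
path is hypothetical). With overwrite=false, create() throws an
IOException if the target already exists, but note that permission
problems or a NameNode in safe mode can surface as an IOException too:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/newfile");   // hypothetical path
        if (!fs.exists(p)) {
          FSDataOutputStream out = fs.create(p, false); // overwrite=false
          out.writeBytes("hello\n");
          out.close();
        }
      }
    }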
Hi,
Does Hadoop have any way to append to an existing file? For example, I
wrote some content to a file, and later on I want to append some more
content to it.
thanks,
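For what it's worth, append was not available in the 0.18 line; a
FileSystem.append(Path) call only appeared in later releases (behind the
dfs.support.append switch, and it was considered unstable early on), so on
0.18 the usual workaround is rewrite-and-replace. A minimal sketch under
that assumption, with a hypothetical path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Requires a Hadoop release with append support enabled.
        FSDataOutputStream out = fs.append(new Path("/tmp/existing-file"));
        out.writeBytes("more content\n");
        out.close();
      }
    }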
Hi,
I'm wondering if anyone has a solution for safe mode that never ends; is
there any way to get around it?
thanks,
error: org.apache.hadoop.dfs.SafeModeException: Cannot delete
/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.4696 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
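If the reported-blocks ratio stays stuck below the threshold (usually
because DataNodes are down or blocks are genuinely missing), safe mode can
be inspected and, if you accept the risk of missing blocks, left manually
with dfsadmin (0.18-era CLI):

    hadoop dfsadmin -safemode get    # report whether safe mode is on
    hadoop dfsadmin -safemode leave  # force the namenode out of safe mode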
I have a strange issue: when I write to Hadoop, I find that the content is
not transferred to HDFS even after a long time. Is there any way to force
a flush of the local temp files to HDFS after writing? When I shut down
the VM, it does get flushed.
thanks,
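This matches 0.18-era write semantics as I understand them: data written
through FSDataOutputStream generally becomes visible in HDFS only once a
full block is written or the stream is closed. A minimal sketch
(hypothetical path) showing the close() that forces the data out:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"));
        out.writeBytes("some content\n");
        out.close(); // until close(), readers may see an empty or partial file
      }
    }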
Can someone tell me whether a file will occupy one or more blocks? For
example, the default block size is 64MB; if I save a 4K file to HDFS, will
the 4K file occupy the whole 64MB block alone? In that case, do I need to
configure the block size to 10K if most of my files are less than ...
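As far as I know, an HDFS block on disk only consumes the bytes actually
written, so a 4K file at a 64MB block size still takes roughly 4K on the
DataNodes (plus one block's worth of NameNode metadata); shrinking the
block size for small files mainly changes metadata, not disk use. If you
do want a per-file block size, there is a create() overload for it; a
sketch with hypothetical values:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(
            new Path("/tmp/small-file"),  // hypothetical path
            true,                         // overwrite
            4096,                         // io buffer size
            (short) 3,                    // replication factor
            10 * 1024L);                  // per-file block size: 10K
        out.writeBytes("tiny\n");
        out.close();
      }
    }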
I think you need to set a property (mapred.jar) inside hadoop-site.xml;
then you don't need to hardcode it in your Java code, and it will be fine.
But I don't know if there is any way to set multiple jars, since a lot of
the time our own MapReduce classes need to reference other jars.
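A minimal sketch of that suggestion (the jar path is hypothetical):

    <!-- in hadoop-site.xml -->
    <property>
      <name>mapred.jar</name>
      <value>/path/to/my-mapreduce-job.jar</value>
    </property>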
On Wed, ...
... is there any way to run map/reduce programs without using a jar? I do
not want to use hadoop jar ... either.
On Wed, Apr 1, 2009 at 1:10 PM, javateck javateck javat...@gmail.com wrote:
I think you need to set a property (mapred.jar) inside hadoop-site.xml;
then you don't need to hardcode it in your Java code ...