fileSystem.create(...) is blocked when selector is closed

2009-04-26 Thread javateck javateck
I got this selector exception, and all my threads are blocked at the FileSystem.create(...) level. Has anyone seen this issue before? I'm running 0.18.3. java.nio.channels.ClosedSelectorException at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:66) at

Re: anyone knows why setting mapred.tasktracker.map.tasks.maximum not working?

2009-04-22 Thread javateck javateck
to the block size to be split. Table 6-1 in chapter 06 gives a breakdown of all of the configuration parameters that affect split size in Hadoop 0.19. Alphas are available :) This is detailed in my book in ch06. On Tue, Apr 21, 2009 at 5:07 PM, javateck javateck javat...@gmail.com wrote: anyone
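
For context: the number of map tasks a job gets is driven by the input split computation, not by the per-node slot limit. A rough sketch of the split-size rule used by FileInputFormat-style inputs in this era (configuration key names assumed from 0.18/0.19; this is a simplification, not the exact source):

    // goalSize  = total input bytes / mapred.map.tasks (the requested map count)
    // minSize   = mapred.min.split.size
    // blockSize = the file's HDFS block size (dfs.block.size by default)
    public class SplitSizeSketch {
        static long computeSplitSize(long goalSize, long minSize, long blockSize) {
            return Math.max(minSize, Math.min(goalSize, blockSize));
        }
    }

Each file is then carved into pieces of roughly this size, and each piece becomes one map task.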

mapred.tasktracker.map.tasks.maximum

2009-04-21 Thread javateck javateck
I set my mapred.tasktracker.map.tasks.maximum to 10, but when I run a job, it only uses 2 out of 10. Any way to know why it's only using 2? thanks
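
A hedged note on what this setting controls: mapred.tasktracker.map.tasks.maximum is a per-TaskTracker slot limit read by the daemon at startup, so it belongs in the hadoop-site.xml on each worker node and needs a TaskTracker restart to take effect. It caps how many map tasks run concurrently on one node; it does not raise the number of map tasks a job actually has, which comes from the input splits. A minimal config sketch:

    <!-- hadoop-site.xml on each TaskTracker node -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>10</value>
      <description>Maximum number of map tasks run simultaneously by one TaskTracker.</description>
    </property>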

Re: mapred.tasktracker.map.tasks.maximum

2009-04-21 Thread javateck javateck
On Tue, Apr 21, 2009 at 1:56 PM, Koji Noguchi knogu...@yahoo-inc.com wrote: It's probably a silly question, but you do have more than 2 mappers on your second job? If yes, I have no idea what's happening. Koji -Original Message- From: javateck javateck [mailto:javat

Re: mapred.tasktracker.map.tasks.maximum

2009-04-21 Thread javateck javateck
tasks should be independent. On Tue, Apr 21, 2009 at 2:23 PM, Miles Osborne mi...@inf.ed.ac.uk wrote: is your input data compressed? If so, then you will get one mapper per file. Miles 2009/4/21 javateck javateck javat...@gmail.com: Hi Koji, Thanks for helping. I don't know why hadoop

Re: mapred.tasktracker.map.tasks.maximum

2009-04-21 Thread javateck javateck
, because I set 10 for mapred.tasktracker.map.tasks.maximum, and I checked that the job's conf is also 10, but Hadoop is actually using just 2 map tasks. On Tue, Apr 21, 2009 at 1:20 PM, javateck javateck javat...@gmail.com wrote: I set my mapred.tasktracker.map.tasks.maximum to 10, but when I run a task

anyone knows why setting mapred.tasktracker.map.tasks.maximum not working?

2009-04-21 Thread javateck javateck
Does anyone know why setting *mapred.tasktracker.map.tasks.maximum* is not working? I set it to 10, but I still see only 2 map tasks running when running one job

raw files become zero bytes when mapreduce job hit outofmemory error

2009-04-13 Thread javateck javateck
I'm running some MapReduce jobs, and some of them hit OutOfMemory errors, and I find that the raw data itself also gets corrupted and becomes zero bytes, which is very strange to me. I have not looked into it in much detail, but just want to check quickly with someone who has seen this. I'm running 0.18.3. thanks
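
One possible explanation, offered as an assumption rather than a confirmed diagnosis: in 0.18-era HDFS a file that is still being written generally reports a length of zero until it is closed, so a writer killed by an OutOfMemoryError before it can close its output streams can leave behind files that look like zero-byte corruption even though nothing overwrote the original data.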

API: FSDataOutputStream create(Path f, boolean overwrite)

2009-04-12 Thread javateck javateck
Hi: I'm trying to use FSDataOutputStream create(Path f, boolean overwrite). I'm calling create(new Path(somePath), false), but creation still fails with an IOException even when the file does not exist. Can someone explain the behavior? thanks,
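
A minimal sketch of the call being discussed, using the Hadoop 0.18-era API (the path is hypothetical). With overwrite=false, create() is expected to throw an IOException only when the file already exists, so a failure on a missing file could point at something else entirely, for example a permissions problem or the namenode sitting in safe mode:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateNoOverwrite {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/user/example/output.dat"); // hypothetical path

            // overwrite = false: create() should fail only if p already exists
            FSDataOutputStream out = fs.create(p, false);
            try {
                out.writeBytes("hello\n");
            } finally {
                out.close(); // data generally becomes visible to readers only after close()
            }
        }
    }

The exact IOException message is usually the quickest clue to which of those cases applies.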

does hadoop have any way to append to an existing file?

2009-04-10 Thread javateck javateck
Hi, does Hadoop have any way to append to an existing file? For example, I wrote some content to a file, and later on I want to append more content to it. thanks,
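
For context (hedged, since the thread is about 0.18.3): that release has no working append; FileSystem.append(Path) only shows up in later releases, and for some time sits behind the dfs.support.append flag. A sketch of what the call looks like on a release that does support it (the path is hypothetical); on 0.18.3 the usual workaround is to write a new file and merge or rename afterwards:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendSketch {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            // append() assumes a release that supports it; not available in 0.18.3
            FSDataOutputStream out = fs.append(new Path("/user/example/log.txt"));
            try {
                out.writeBytes("more content\n");
            } finally {
                out.close();
            }
        }
    }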

safemode forever

2009-04-07 Thread javateck javateck
Hi, I'm wondering if anyone has a solution for a namenode that never leaves safe mode; any way to get around it? thanks, error: org.apache.hadoop.dfs.SafeModeException: Cannot delete /mapred/system. Name node is in safe mode. The ratio of reported blocks 0.4696 has not reached the threshold 0.9990. Safe
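
A hedged note for anyone hitting the same symptom: the namenode stays in safe mode until the configured fraction of blocks (dfs.safemode.threshold.pct, 0.999 by default) has been reported by datanodes, so a ratio stuck at 0.4696 usually means datanodes are down or blocks are genuinely missing. If the missing blocks are an acceptable loss, safe mode can be left manually with hadoop dfsadmin -safemode leave.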

hadoop 0.18.3 writing not flushing to hadoop server?

2009-04-06 Thread javateck javateck
I have a strange issue: when I write to Hadoop, I find that the content is not transferred to HDFS even after a long time. Is there any way to force-flush the local temp files to Hadoop after writing? And when I shut down the VM, it does get flushed. thanks,
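
A plausible explanation, stated as an assumption: with 0.18-era HDFS, bytes written to an FSDataOutputStream are staged on the client and generally become visible in HDFS only when a full block completes or the stream is closed, and shutting down the VM likely triggers a shutdown hook that closes the cached FileSystem, which is why the data shows up at that point. Calling close() on the output stream explicitly (as in the create() sketch earlier in this digest) is the reliable way to make the data appear without waiting for shutdown.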

HDFS data block clarification

2009-04-02 Thread javateck javateck
Can someone tell me whether a file will occupy one or more blocks? For example, the default block size is 64 MB; if I save a 4 KB file to HDFS, will the 4 KB file occupy the whole 64 MB block alone? In that case, do I need to configure the block size to 10 KB if most of my files are less than
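
A hedged clarification of the underlying model: an HDFS block is an upper bound, not a preallocated unit, so a 4 KB file stored with a 64 MB block size still consumes only about 4 KB of datanode disk per replica (plus a small amount of namenode metadata); there is normally no need to shrink the block size for small files. If a specific file genuinely needs a non-default block size, the longer create() overload takes one per file (the values below are illustrative only):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SmallBlockFile {
        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataOutputStream out = fs.create(
                new Path("/user/example/small.dat"), // hypothetical path
                true,                                // overwrite
                4096,                                // io buffer size
                (short) 3,                           // replication
                10 * 1024L);                         // block size: 10 KB, illustrative
            out.writeBytes("small file\n");
            out.close();
        }
    }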

Re: Running MapReduce without setJar

2009-04-01 Thread javateck javateck
I think you need to set a property (mapred.jar) inside hadoop-site.xml; then you don't need to hardcode it in your Java code, and it will be fine. But I don't know if there is any way to set multiple jars, since a lot of the time our own MapReduce class needs to reference other jars. On Wed,
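
An alternative worth noting, as a sketch rather than the only way: instead of hardcoding a jar path or putting mapred.jar in hadoop-site.xml, the job can point at the jar that contains its own classes (the class name here is hypothetical):

    import org.apache.hadoop.mapred.JobConf;

    public class MyJob {
        public static void main(String[] args) throws Exception {
            // Uses the jar that contains MyJob; equivalent to conf.setJarByClass(MyJob.class)
            JobConf conf = new JobConf(MyJob.class);
            // ... set input/output paths, mapper and reducer classes, then submit
        }
    }

For extra jars the job classes depend on, a common approach in this era is DistributedCache (e.g. addFileToClassPath), though whether that fits depends on the setup.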

Re: Running MapReduce without setJar

2009-04-01 Thread javateck javateck
reduce programs without using a jar? I do not want to use hadoop jar ... either. On Wed, Apr 1, 2009 at 1:10 PM, javateck javateck javat...@gmail.com wrote: I think you need to set a property (mapred.jar) inside hadoop-site.xml, then you don't need to hardcode it in your Java code