While your problem is interesting, you need not use fsck to get the block IDs of a file; that's not the right way to fetch them (it's a rather long, should-be-disallowed route). You can leverage the FileSystem API itself for that. See FileSystem#getFileBlockLocations(…), i.e. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#getFileBlockLocations(org.apache.hadoop.fs.FileStatus,%20long,%20long) if you use the FileSystem APIs, or FileContext#listLocatedStatus(…), i.e. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#listLocatedStatus(org.apache.hadoop.fs.Path) if you use the FileContext APIs.
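For example, here's a minimal sketch using the FileSystem API (the path /user/foo/bar.txt is just a placeholder; point it at any file on your HDFS):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
  public static void main(String[] args) throws Exception {
    // Example path; replace with the file you care about.
    Path file = new Path("/user/foo/bar.txt");

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    FileStatus status = fs.getFileStatus(file);
    // Ask for locations covering the whole file: offset 0, length = file length.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

    for (BlockLocation block : blocks) {
      System.out.println("Offset: " + block.getOffset()
          + ", Length: " + block.getLength()
          + ", Hosts: " + java.util.Arrays.toString(block.getHosts()));
    }
  }
}

This runs anywhere your Hadoop config is on the classpath, and unlike fsck it needs no access to the NameNode's HTTP port.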
Onto your problem though: can you successfully do a `telnet NNHOST 50070` from one of your slave nodes?

On Wed, Nov 7, 2012 at 10:18 PM, Sebastian.Lehrack <sebastian.lehr...@physik.uni-muenchen.de> wrote:
> Hi,
>
> I've installed Hadoop 1.0.3 on a cluster of about 25 nodes and till now
> it's working fine.
> Recently, I had to use fsck in a map process, which leads to a
> connection refused error.
> I read about this error, that I should check firewalls and proper
> config files etc.
> The command only works on the namenode.
> If I use the browser for the command, it works (although also
> refused, but because of web user permissions).
> I can use telnet to connect to the namenode.
> In hdfs-site.xml, I set dfs.http.address to hostname:50070. I tried
> the IP address and the hostname. I marked it as final.
> I'm still getting this connection refused error when using fsck on a
> node other than the namenode.
>
> Any further suggestions would be great. The fsck command is used to check
> the number of blocks in which a file is stored on HDFS. Maybe
> there's another possibility?
>
> Greetings

--
Harsh J