On Tue, May 19, 2009 at 12:53 PM, Ravi Phulari wrote:

If you have Hadoop superuser/administrative permissions, you can use fsck with
the appropriate options to view the block report and the locations of every block.

For further information, please refer to:
http://hadoop.apache.org/core/docs/r0.20.0/commands_manual.html#fsck

On 5/19/09 12:13 AM, "Foss User" wrote:

> I know that if a file is very large, it will be split into blocks, and
> the blocks will be spread across various data nodes. I want to know
> whether I can find out, through the GUI or the logs, exactly which data
> nodes contain which blocks of a particular huge text file.
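
For reference, the fsck invocation being described is along the lines of
"hadoop fsck /user/foo/huge.txt -files -blocks -locations" (the path here is
hypothetical); the -files, -blocks, and -locations options are what print each
block together with the data nodes holding its replicas, per the commands
manual linked above. The same information can also be pulled programmatically.
A minimal Java sketch, assuming the public FileSystem#getFileBlockLocations
API and a made-up input path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlockLocations {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; substitute the file you are interested in.
        Path file = new Path("/user/foo/huge.txt");

        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(file);

        // Ask the NameNode for the block locations covering the whole file.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        // One line per block: its offset, length, and the hosts holding a replica.
        for (int i = 0; i < blocks.length; i++) {
            System.out.println("block " + i
                    + " offset=" + blocks[i].getOffset()
                    + " length=" + blocks[i].getLength()
                    + " hosts=" + java.util.Arrays.toString(blocks[i].getHosts()));
        }
    }
}

This prints one line per block with the hostnames of the data nodes that hold a
replica; fsck reports the same locations along with block health information.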