You can use FileSystem.getFileStatus(Path p), which returns a FileStatus;
its getBlockSize() method gives you the block size specific to that file.
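
For example, a minimal sketch (the path is a placeholder, and the
Configuration is assumed to pick up your cluster settings from the
classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // placeholder path; substitute the file you want to inspect
    FileStatus status = fs.getFileStatus(new Path("/user/foo/data.seq"));
    long blockSize = status.getBlockSize(); // block size of this file, in bytes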

On Tue, Feb 28, 2012 at 2:50 AM, Kai Voigt <k...@123.org> wrote:

> "hadoop fsck <filename> -blocks" is something that I think of quickly.
>
> http://hadoop.apache.org/common/docs/current/commands_manual.html#fsck
> has more details.
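
For instance (illustrative path):

    hadoop fsck /user/foo/data.seq -files -blocks

-files lists the file itself and -blocks prints each of its blocks with
its length; the length reported for the full blocks is the file's block
size.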
>
> Kai
>
> On 28.02.2012 at 02:30, Mohit Anchlia wrote:
>
> > How do I verify the block size of a given file? Is there a command?
> >
> > On Mon, Feb 27, 2012 at 7:59 AM, Joey Echeverria <j...@cloudera.com>
> wrote:
> >
> >> dfs.block.size can be set per job.
> >>
> >> mapred.tasktracker.map.tasks.maximum is per tasktracker.
> >>
> >> -Joey
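
To make Joey's first point concrete, a minimal sketch of overriding the
block size for a single job (128 MB is an arbitrary illustrative value;
dfs.block.size is read client-side when new files are created, so set it
before constructing the Job):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    Configuration conf = new Configuration();
    conf.setLong("dfs.block.size", 128L * 1024 * 1024);
    Job job = new Job(conf, "block-size-demo"); // hypothetical job name

mapred.tasktracker.map.tasks.maximum, by contrast, goes in
mapred-site.xml on each tasktracker, e.g.:

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value> <!-- illustrative value -->
    </property>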
> >>
> >> On Mon, Feb 27, 2012 at 10:19 AM, Mohit Anchlia
> >> <mohitanch...@gmail.com> wrote:
> >>> Can someone please suggest if parameters like dfs.block.size and
> >>> mapred.tasktracker.map.tasks.maximum are only cluster-wide settings,
> >>> or can these be set per client job configuration?
> >>>
> >>> On Sat, Feb 25, 2012 at 5:43 PM, Mohit Anchlia
> >>> <mohitanch...@gmail.com> wrote:
> >>>
> >>>> If I want to change the block size, can I use Configuration in a
> >>>> mapreduce job and set it when writing to the sequence file, or does
> >>>> it need to be a cluster-wide setting in the .xml files?
> >>>>
> >>>> Also, is there a way to check the block size of a given file?
> >>>>
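
Tying that back to the sequence-file question: the same client-side
override applies to files a job writes, e.g. (placeholder path, key and
value types, and an illustrative 64 MB block size):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    Configuration conf = new Configuration();
    conf.setLong("dfs.block.size", 64L * 1024 * 1024);
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path("/tmp/example.seq"),
        LongWritable.class, Text.class);
    writer.append(new LongWritable(1L), new Text("hello"));
    writer.close();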
> >>
> >>
> >>
> >> --
> >> Joseph Echeverria
> >> Cloudera, Inc.
> >> 443.305.9434
> >>
>
> --
> Kai Voigt
> k...@123.org
>


-- 
Join me at http://hadoopworkshop.eventbrite.com/
