You have to customize the *InputFormat* by extending *FileInputFormat* and overriding the methods *getSplits*(JobContext jobc) and *computeSplitSize*(long blockSize, long minSize, long maxSize).
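As a sketch of what that override buys you: Hadoop's FileInputFormat clamps the split size between the configured min and max split sizes and the block size. The class below is illustrative (not Hadoop's own class; it only mirrors the well-known formula), so you can see how lowering the max split size would force smaller splits:

```java
// Minimal sketch of the split-sizing formula used by FileInputFormat.
// Class and variable names here are illustrative, not part of Hadoop.
public class SplitSizing {
    // Clamp the block size between the configured min and max split sizes,
    // as FileInputFormat.computeSplitSize does.
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024; // 128 MB HDFS block
        // With default min (1) and max (Long.MAX_VALUE), the split size
        // simply equals the block size.
        System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));
        // Lowering maxSize to ~55.7 MB (the split length in the quoted job)
        // would yield splits of exactly that size instead.
        System.out.println(computeSplitSize(blockSize, 1L, 58397994L));
    }
}
```

Overriding *computeSplitSize* in your subclass lets you return any sizing you like; *getSplits* is where the per-file list of (offset, length) splits is actually built from it.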
On Sat, Jun 20, 2015 at 4:55 AM, Shiyao Ma <[email protected]> wrote:

> Hi.
>
> How to monitor the block transmission log of datanodes?
>
> A more detailed example:
>
> My HDFS block size is 128MB. I have a file stored on HDFS with size
> 167.08MB.
>
> Also, I have a client requesting the whole file with three splits, e.g.,
>
> hdfs://myserver:9000/myfile:0+58397994 (0-56MB)
> hdfs://myserver:9000/myfile:58397994+58397994 (56MB-112MB)
> hdfs://myserver:9000/myfile:116795988+58397994 (112MB-168MB)
>
> The situation is kind of fixed and I cannot modify the split size.
> Nevertheless, I'd like to know what block transmission is happening
> under the hood.

--
Regards,
...sudhakara
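As a side note on the quoted numbers: plain offset arithmetic already tells you which 128 MB blocks each split must read, before any datanode log is consulted. The helper below is hypothetical (not a Hadoop API), using only the offsets and the split length from the message above:

```java
// Hypothetical helper: map each quoted split (offset+length) to the range
// of 128 MB HDFS block indices it overlaps. Not a Hadoop API, just arithmetic.
public class SplitToBlocks {
    static final long BLOCK = 128L * 1024 * 1024; // 134217728 bytes

    static long firstBlock(long offset) {
        return offset / BLOCK;
    }

    static long lastBlock(long offset, long length) {
        return (offset + length - 1) / BLOCK;
    }

    public static void main(String[] args) {
        long len = 58397994L; // each quoted split is ~55.7 MB
        long[] offsets = {0L, 58397994L, 116795988L};
        for (long off : offsets) {
            System.out.println("split " + off + "+" + len
                + " -> blocks " + firstBlock(off) + ".." + lastBlock(off, len));
        }
    }
}
```

On these numbers the first two splits fall entirely inside block 0, and only the third split crosses into block 1, so the datanode holding block 0 serves reads for all three splits.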
