Does Hive split an input table's file the same way Hadoop does, i.e. into splits based on the HDFS block size, so that the number of map tasks is determined by the number of splits?
I have a table file that is 196 MB and my HDFS block size is 64 MB, so I expected 4 map tasks, but I only see 3 in the web interface. What is the reason?
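For reference, here is a minimal sketch of the arithmetic behind my expectation, assuming one split per HDFS block (the file and block sizes are just my numbers, not read from the cluster):

```java
public class ExpectedSplits {
    public static void main(String[] args) {
        long fileSize  = 196L * 1024 * 1024; // 196 MB table file
        long blockSize =  64L * 1024 * 1024; // 64 MB HDFS block size

        // Ceiling division: number of blocks, which I assumed equals number of splits
        long expectedSplits = (fileSize + blockSize - 1) / blockSize;

        System.out.println(expectedSplits); // prints 4, yet the web UI shows only 3 map tasks
    }
}
```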