I have input data files in the local filesystem: /input/path/*.log, but I
don't know anything about their sizes, the number of files, etc. If the
*.log files are small and there are lots of them, there is no reason to
start "bin/hadoop fs -put" for each file, because each start of
"bin/hadoop" is time-expensive.
1. What if I write "bin/hadoop fs -put /input/path/*.log
/hdfs/input/path"? Will "*" be passed to hadoop, so that hadoop opens the
files one by one, or will "*" be expanded by "/bin/bash"? If the latter,
what happens if the expanded command line is too long for bash itself
(more than 32768 characters, for example, if there are too many files)?
(See the first sketch after this list for what I have in mind.)
2. I can merge many small files into "packs" (cat filename >> pack) of
about 100MB per "pack" and then put the packs into HDFS. But what if there
are too many files and the total size of all the data is several GB? Then
I would need free space on the HDD equal to (input data size)*2 for that
operation... (See the second sketch after this list.)
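
For question 1, here is roughly what I mean, as a sketch; I am assuming
that hadoop's FsShell can expand a quoted glob itself (which is part of
what I am asking) and that GNU find/xargs are available:

    # Quote the pattern so bash does not expand it; if hadoop expands
    # the glob itself, only one start of "bin/hadoop" is needed:
    bin/hadoop fs -put '/input/path/*.log' /hdfs/input/path

    # Otherwise, let xargs split the file list into chunks that fit the
    # argument-length limit, so each start of "bin/hadoop" uploads many
    # files at once (the destination directory is appended last):
    find /input/path -name '*.log' -print0 | \
        xargs -0 sh -c 'bin/hadoop fs -put "$@" /hdfs/input/path' _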
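
For question 2, this is the copying I would like to avoid the extra HDD
space for, as a sketch; I am assuming "bin/hadoop fs -put -" can read from
stdin (the FS shell documentation seems to say so), and the pack name is
just an example:

    # Stream the concatenated "pack" straight into HDFS instead of
    # writing it to local disk first, so no (input data size)*2 is needed:
    find /input/path -name '*.log' -print0 | xargs -0 cat | \
        bin/hadoop fs -put - /hdfs/input/path/pack-0001.log

To keep the packs around 100MB I would still have to feed the file list in
chunks, but at least no local copy of the data is written.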
Thank you.
Fast Putting Files to HDFS. pavel kolodin