Hi,
I want to store a large number of files in HDFS; each file is <= 2 GB.
I don't want the files to be split into blocks, because I need the whole file while processing it, and I don't want blocks transferred to a single node at processing time. An easy way to do this would be to set the HDFS block size (dfs.block.size) to 2 GB so that each file fits in a single block (as far as I can tell, dfs.write.packet.size only controls the size of the packets the client sends while writing, not how the file is split). I wonder if someone has similar experience or knows whether this is practicable.
Will there be performance problems from setting the block size to such a large value?
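
In case it helps, this is roughly what I have in mind: a minimal sketch, assuming the per-file blockSize overload of FileSystem.create, so the cluster-wide default doesn't have to change (the path is just a placeholder, and I haven't tested this at scale):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleBlockWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Block size >= the largest file (2 GB), so each file
        // should end up in a single block.
        long blockSize = 2L * 1024 * 1024 * 1024;
        short replication = fs.getDefaultReplication();
        int bufferSize = conf.getInt("io.file.buffer.size", 4096);

        // Per-file block size via the create() overload, instead of
        // changing dfs.block.size for the whole cluster.
        FSDataOutputStream out = fs.create(
                new Path("/data/bigfile.bin"),  // placeholder path
                true,        // overwrite
                bufferSize,
                replication,
                blockSize);
        // ... write the file contents here ...
        out.close();
    }
}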

Thanks!
donal
