Hi Ascot,

No, especially with the block-ID-based datanode layout (https://issues.apache.org/jira/browse/HDFS-6482) this should no longer be true on HDFS. If you do plan to have millions of files per datanode, you'd do well to familiarize yourself with https://issues.apache.org/jira/browse/HDFS-8791
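To give a feel for what the block-ID-based layout does: instead of the datanode tracking where each replica file lives, the block's directory is computed directly from its block ID. The sketch below is illustrative only (the function name, the `subdir` naming, and the exact bit positions are assumptions based on the JIRAs above, not a copy of the Hadoop source); HDFS-8791 shrinks the fan-out from 256x256 to 32x32 subdirectories.

```python
def block_dir(block_id: int, bits: int = 5) -> str:
    """Map a block ID to a two-level subdirectory path.

    bits=5 models the 32x32 layout (after HDFS-8791);
    bits=8 models the original 256x256 layout (HDFS-6482).
    Illustrative sketch only, not the exact Hadoop code.
    """
    mask = (1 << bits) - 1
    d1 = (block_id >> 16) & mask  # first-level directory index
    d2 = (block_id >> 8) & mask   # second-level directory index
    return f"subdir{d1}/subdir{d2}"

# Because the path is a pure function of the block ID, the datanode
# never needs an in-memory map from block ID to file location.
print(block_dir(0x012345))          # 32x32 layout
print(block_dir(0x012345, bits=8))  # 256x256 layout
```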
Cheers,

Joep

On Fri, Jun 3, 2016 at 11:32 AM, Ascot Moss <[email protected]> wrote:
> Hi,
>
> I read some (old?) articles from the Internet about Hadoop:
>
> "Due to the DataNode-NameNode block report mechanism, we cannot exceed
> 100-200K blocks (or files) per node, thereby limiting our 10-node cluster
> to less than 2M files."
>
> Is this true in Hadoop v2.x?
>
> Regards
