Dear all,
I'm looking for ways to improve the namenode heap usage of an
800-node, 10PB testing Hadoop cluster that stores around 30 million files.
Here's some info:
1 x namenode: 32GB RAM, 24GB heap size
800 x datanode: 8GB RAM, 13TB hdd
33050825 files and directories, 47708724 blocks
Hi On,
The namenode stores the full filesystem image in memory. Looking at
your stats, you have ~30 million files/directories and ~47 million
blocks. That means that on average, each of your files is only ~1.4
blocks in size. One way to lower the pressure on the namenode would
be to store fewer, larger files.
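For reference, the arithmetic behind that ~1.4 estimate, using the numbers
quoted above:

  47,708,724 blocks / 33,050,825 files and directories ≈ 1.44 blocks per entry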
Hi,
If I define HDFS to use blocks of 64 MB, and I store a 1KB file in
HDFS, will this file occupy 64MB in HDFS?
Thanks,
On 06/10/2011 10:35 AM, Pedro Costa wrote:
Hi,
If I define HDFS to use blocks of 64 MB, and I store a 1KB file in
HDFS, will this file occupy 64MB in HDFS?
Thanks,
HDFS is not very efficient at storing small files, because each file is
stored in a block (of 64 MB in your case), and the block metadata is held
in the namenode's memory regardless of how small the file is. The file
itself will only use about 1KB of actual disk space, though.
But how can a 1KB file use only 1KB of disk space if a block is
configured as 64MB? In my view, if a 1KB file uses a 64MB block, the
file will occupy 64MB on disk.
How can you disassociate a 64MB HDFS data block from a disk block?
On Fri, Jun 10, 2011 at 5:01 PM, Marcos Or
On Fri, Jun 10, 2011 at 11:05 AM, Pedro Costa wrote:
> Hi,
>
> If I define HDFS to use blocks of 64 MB, and I store a 1KB file in
> HDFS, will this file occupy 64MB in HDFS?
>
No, it will occupy something much closer to 1KB than 64MB. There is some
small overhead related to metadata about the block (for example, the
checksums stored alongside it).
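A minimal sketch with the HDFS FileSystem API, if you want to check this
yourself (the test path below is made up, and the exact numbers depend on
your replication factor):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.ContentSummary;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SmallFileFootprint {
    public static void main(String[] args) throws Exception {
      // Picks up core-site.xml / hdfs-site.xml from the classpath
      FileSystem fs = FileSystem.get(new Configuration());

      // Write a 1KB file (hypothetical test path)
      Path p = new Path("/tmp/one-kb-test");
      FSDataOutputStream out = fs.create(p, true);
      out.write(new byte[1024]);
      out.close();

      FileStatus st = fs.getFileStatus(p);
      ContentSummary cs = fs.getContentSummary(p);

      // The block size is just the configured maximum, e.g. 67108864 (64MB)
      System.out.println("block size     : " + st.getBlockSize());
      // Logical length of the file: 1024
      System.out.println("file length    : " + st.getLen());
      // Raw bytes used across all replicas: roughly 1024 * replication
      System.out.println("space consumed : " + cs.getSpaceConsumed());
    }
  }

The "space consumed" figure tracks the data actually written (times
replication), not the configured block size.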
On Fri, Jun 10, 2011 at 8:42 AM, Pedro Costa wrote:
> But how can a 1KB file use only 1KB of disk space if a block is
> configured as 64MB? In my view, if a 1KB file uses a 64MB block, the
> file will occupy 64MB on disk.
A block in HDFS is the unit of distribution and replication; it is not a
pre-allocated chunk of space on the datanode's local disk.
Does this mean that, when HDFS reads a 1KB file from the disk, it will
put the data in blocks of 64MB?
On Fri, Jun 10, 2011 at 5:00 PM, Philip Zeyliger wrote:
> On Fri, Jun 10, 2011 at 8:42 AM, Pedro Costa wrote:
>> But how can a 1KB file use only 1KB of disk space if a block is
>> configured as 64MB?
On Fri, Jun 10, 2011 at 9:08 AM, Pedro Costa wrote:
> Does this mean that, when HDFS reads a 1KB file from the disk, it will
> put the data in blocks of 64MB?
No.
>
> On Fri, Jun 10, 2011 at 5:00 PM, Philip Zeyliger wrote:
>> On Fri, Jun 10, 2011 at 8:42 AM, Pedro Costa wrote:
>>> But how can a 1KB file use only 1KB of disk space if a block is
>>> configured as 64MB?
On 06/10/2011 04:57 AM, Joey Echeverria wrote:
Hi On,
The namenode stores the full filesystem image in memory. Looking at
your stats, you have ~30 million files/directories and ~47 million
blocks. That means that on average, each of your files is only ~1.4
blocks in size. One way to lower the pressure on the namenode would
be to store fewer, larger files.
So, I'm not getting how a 1KB file can cost a 64MB block. Can anyone
explain this to me?
On Fri, Jun 10, 2011 at 5:13 PM, Philip Zeyliger wrote:
> On Fri, Jun 10, 2011 at 9:08 AM, Pedro Costa wrote:
>> Does this mean that, when HDFS reads a 1KB file from the disk, it will
>> put the data in blocks of 64MB?
I am also relatively new to Hadoop, so others may feel free to correct me
if I am wrong.
The NN keeps track of a file by "inode" and the blocks related to that
inode. In your case, since your file size is smaller than the block size,
the NN will have only ONE block associated with this inode (assuming only
a single block is needed, as is the case for a 1KB file).
Pedro,
You need to distinguish between "HDFS" files and blocks, and "Low-level
Disk" files and blocks.
Large HDFS files are broken into HDFS blocks and stored in multiple Datanodes.
On the Datanodes, each HDFS block is stored as a Low-level Disk file.
So if you have the block size set to 64MB, the 1KB file still becomes one
HDFS block, but the Low-level Disk file that stores that block is only
about 1KB.
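To illustrate, on a datanode you would typically find something like the
following under dfs.data.dir (the block ID and generation stamp below are
made up):

  ${dfs.data.dir}/current/blk_7329384023984709821           <- ~1KB of file data
  ${dfs.data.dir}/current/blk_7329384023984709821_1042.meta <- small checksum file

The block file is only as large as the data actually written to it.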
It will only take up ~1KB of local datanode disk space (+ metadata
space such as the CRC32 of every 512 bytes, along with replication @
1KB per replicated block, in this case 2KB) but the real cost is a
block entry in the Namenode --- all block data at the namenode lives
in memory, which is a much scarcer resource.
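To spell out the disk side for a 1KB file (assuming the default 512-byte
checksum chunk, io.bytes.per.checksum = 512, and 3 replicas; the figures
are illustrative only):

  per replica : 1,024 bytes of block data
              + ~8 bytes of CRC32 data (4 bytes per 512-byte chunk) plus a
                small header in the .meta file
  x 3 replicas ≈ a little over 3KB of datanode disk in total

The namenode side, by contrast, is one file entry plus one block entry held
in heap for the lifetime of the file, which is what this thread started with.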
Each "object" (file, directory, and block) uses about 150 bytes of
memory. If you lower the number of files by having larger ones, you
save a modest amount of memory, depending on how many blocks your
existing files use. The real savings comes from having larger files
and a larger block size.
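To put rough numbers on that (illustrative arithmetic only, using the
~150-bytes-per-object figure and the stats from the first message):

  current namespace:
    33,050,825 files/dirs + 47,708,724 blocks ≈ 80.8 million objects
    80.8M objects x ~150 bytes ≈ 12GB of namenode heap (out of a 24GB heap)

  each small file merged into a larger one saves ~150 bytes for the file
  entry, plus ~150 bytes for every partial block that ends up packed with
  other data; raising the block size then shrinks the block count further.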
I think the files may have been corrupted when I had initially shut down the
node that was still in decommissioning mode.
Unfortunately I hadn't run dfsadmin -report any time recently before the
incident, so I can't be sure the corrupt files haven't been there for a
while. I always assumed that they were fine.
Good question. I didn't pick up on the fact that fsck disagrees with
dfsadmin. Have you tried a full restart? Maybe somebody's information
is out of date?
-Joey
On Fri, Jun 10, 2011 at 6:22 PM, Robert J Berger wrote:
> I think the files may have been corrupted when I had initially shut down
> the node that was still in decommissioning mode.
I can't really do a full restart unless it's the only option.
I did find some old temporary mapred job files that were considered
under-replicated, so I deleted them, and the system that was taking forever
to decommission finished decommissioning (not sure if there was really a
causal connection).
It should be safe to run fsck -move. Worst case, corrupt files end up in
/lost+found. The job files are probably related to the under replicated blocks.
The default replication factor for job files is 10 and I noticed you have 9
datanodes.
The under-replication would probably also have prevented the decommission
from finishing.
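If this comes up again, instead of deleting the leftover job files you could
also lower their target replication so that it fits on 9 datanodes (if I
remember right, the default of 10 comes from mapred.submit.replication).
A rough sketch with the FileSystem API -- the directory below is hypothetical,
point it at wherever the old job files actually live:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class CapJobFileReplication {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());
      short maxRepl = 3;  // assumption: 3 replicas fit easily on 9 datanodes

      // Hypothetical directory holding the stale job files
      for (FileStatus st : fs.listStatus(new Path("/tmp/mapred/system"))) {
        if (st.getReplication() > maxRepl) {
          // Ask the namenode to adjust the replica count for this file
          fs.setReplication(st.getPath(), maxRepl);
        }
      }
    }
  }

setReplication only changes the target; the namenode then deletes the excess
replicas (or schedules the missing ones) in the background.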